A General Theory of Intelligence     Chapter 4. Self-Organizing Process

Section 4.2. The self-organization of goals

Goals of the system

The word "goal" in this book is used in two different senses: when talking about information systems informally, this word is used according to the definition given in Section 1.2, which is an internal regularity in a system that describes a stable state or developing orientation of the system. On the other hand, in technical discussions of NARS, a "goal" is a statement that the system attempts to realize. The first sense of the word includes the second as a special case, and it also includes the other types of task in NARS.

In terms of source, a goal can be either original or derivative: the former is imposed on the system, while the latter is produced from the former by the system itself. In the current design of NARS, all input tasks are original goals, and all derived tasks are derivative goals. In humans and animals, all the (biological and evolutionary) innate drives are original goals, and all the other motivations, intentions, and objectives are derivative goals.

As explained in Section 1.2, the running process of an information system can be described as "achieving goals by taking actions". Concretely speaking, as described in Chapter 3, if a task is a judgment, "achieving it" means to derive all of its implications; if it is a question, "achieving it" means to find an answer for it; if it is a goal, "achieving it" means to make it true. With insufficient knowledge and resources, a goal is never fully achieved, only partially. Also, the same goal may appear again and again, and the environment may change, so the same goal may need to be achieved repeatedly.
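As a rough illustration of this type-dependent treatment, the sketch below dispatches on the kind of a task; the class names and the stubbed Memory methods are hypothetical placeholders, not the actual NARS interfaces.

```python
# A minimal sketch (not the actual NARS code) of how the way a task is
# "achieved" depends on its type. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    kind: str        # "judgment", "question", or "goal"
    statement: str

class Memory:
    """Stub memory; real processing would consult beliefs and concepts."""
    def derive_implications(self, s): return f"derive implications of {s}"
    def find_answer(self, s):         return f"find an answer to {s}"
    def realize(self, s):             return f"select operations to make {s} true"

def achieve(task: Task, memory: Memory):
    # With insufficient knowledge and resources, each call only achieves
    # the task partially; the same task may be processed many times.
    if task.kind == "judgment":
        return memory.derive_implications(task.statement)
    if task.kind == "question":
        return memory.find_answer(task.statement)
    if task.kind == "goal":
        return memory.realize(task.statement)

print(achieve(Task("goal", "<door --> open>"), Memory()))
```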

Except in trivial situations, in an intelligent system there are always multiple goals coexisting at any moment, and it is their "resultant force" that drives the system to move in a certain direction in the space of possibilities. Since old goals are never fully achieved, and new goals appear constantly, the total number of goals tends to grow. On the other hand, given the restriction on resources, low-priority goals are constantly removed from the system, even though some of them may appear again in the future.
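The following sketch illustrates, under very crude assumptions, a bounded goal pool in which low-priority goals are forgotten when the capacity is exceeded; the capacity and priority values are made up for the example, and this is not the actual NARS memory structure.

```python
# A sketch of a bounded goal pool: new goals keep arriving, old goals are
# never marked "fully achieved", and when capacity is exceeded the
# lowest-priority goal is dropped, even though it may be re-derived later.

import heapq

class GoalPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []          # min-heap on priority: cheapest to evict on top

    def add(self, priority, goal):
        heapq.heappush(self.heap, (priority, goal))
        while len(self.heap) > self.capacity:
            heapq.heappop(self.heap)   # forget the lowest-priority goal

    def goals(self):
        return sorted(self.heap, reverse=True)

pool = GoalPool(capacity=3)
for p, g in [(0.9, "stay-charged"), (0.2, "tidy-desk"),
             (0.6, "answer-user"), (0.4, "explore-room")]:
    pool.add(p, g)
print(pool.goals())   # "tidy-desk" has been forgotten for now
```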

Relations among goals

The most direct relation between two goals is the derivation relation. As mentioned above, every derivative task is produced from another task, its "parent", which may in turn be the "child" of yet another task. This "family tree" can be traced back all the way to the original tasks, which are not derived by the system but directly given to it.
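Such a family tree can be pictured as parent links from each derivative task back to an original task, as in the sketch below; the representation is illustrative only, and, as discussed later in this section, NARS does not permanently keep these links.

```python
# A sketch of the "family tree" of tasks: each derivative task keeps a
# reference to its parent, so its ancestry can be traced back to an
# original (input) task. Names are illustrative, not the NARS internals,
# and in NARS such links are not maintained permanently.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    statement: str
    parent: Optional["Task"] = None   # None for original (input) tasks

def ancestry(task: Task):
    """Trace a task back to the original task it ultimately came from."""
    chain = [task.statement]
    while task.parent is not None:
        task = task.parent
        chain.append(task.statement)
    return chain   # the last element is an original task

root  = Task("serve the user")                 # original goal
child = Task("answer current question", root)  # derived as a means
grand = Task("look up a belief", child)
print(ancestry(grand))
```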

Task derivation is necessary because a goal, unless it is very simple, cannot be directly achieved by a single action. Usually a goal is achieved by a sequence of steps, each of which is conceptually a derivative goal, a means that serves an end, namely the goal from which it is derived. This is especially true of the original goals, which tend to be complex — just think about how "to survive", "to reproduce", or "to serve people" could be carried out, not to mention a "law" like "A robot may not injure a human being or, through inaction, allow a human being to come to harm." [Asimov] — none of these goals can be achieved or satisfied in a single step.

In an instinctive system, the derivation relations are predetermined. In an animal, a biological need is often satisfied in the same way; in a conventional computer system, an information-processing task is often fulfilled by the same program. In an intelligent system, on the contrary, the system has the flexibility to achieve the same goal in different ways when it appears in different contexts. Consequently, each time a goal is being achieved, derivative goals are produced according to the current situation, which may never have occurred before. As described in Section 3.5, in NARS the tasks derived from a given task at a given moment depend on the belief that happens to be selected at that moment. If the parent task and the child task are A and B, respectively, the derivation is justified by the selected belief, which states that a solution of B will lead, to a certain extent, to a solution of A.
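Schematically, this kind of goal derivation can be pictured as follows; the belief representation is a deliberate simplification of NAL, and the statements and strengths are invented for the example.

```python
# A schematic sketch of goal derivation: given a (parent) goal A and a
# belief stating roughly "achieving B leads to achieving A", the system
# may derive the (child) goal B. Which beliefs are used depends on what
# happens to be selected at the moment, so the derivation is
# context-dependent.

beliefs = [
    ("press-switch", "light-on", 0.9),   # (B, A, strength): B leads to A
    ("open-curtain", "light-on", 0.6),
]

def derive_subgoals(parent_goal, selected_beliefs):
    """Return candidate child goals justified by the selected beliefs."""
    return [(b, strength) for (b, a, strength) in selected_beliefs
            if a == parent_goal]

print(derive_subgoals("light-on", beliefs))
# A different selection on another occasion would produce different subgoals.
```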

Two goals can also be related indirectly: the achieving of one goal can make the achieving of another goal easier or harder. This happens when an earlier goal establishes or destroys a precondition of a later goal, or when the former changes the memory structure in which the latter is processed. When the system becomes complex, there is no guarantee that the goals will be consistent with each other. It is quite common for one goal to suggest that the system do something, while another goal suggests exactly the opposite.

Even when two goals have no relation in content, they may still influence the processing of each other. Since the system usually has insufficient resources, spending more time and space on one goal means spending less on the others. Therefore, in principle the processing of any goal has some impact on the processing of the others, and vice versa. In such a system, the processing of a goal can seldom be studied in isolation.

Goal and desire

In NARS, task derivation follows two different paths. If the derived task is a judgment or a question, it is simply added into the corresponding concepts to be processed. However, if the task is a goal, an additional step is required.

As mentioned before, the technical term goal in NARS indicates a statement to be realized by executing some operations of the system. Therefore, unlike the other two types of task, which are purely inferential, this type of task may lead to conflicting commands. It would be terrible if the system immediately started to do something to reach one goal, only to find afterwards that the same operation had destroyed another, more important, goal. To avoid this situation, the system does not commit itself to every derived goal immediately. Instead, the newly derived goal is used to adjust the desire-value of the corresponding statement, as described in Section 3.4.

In this way, the desire-value of each event in the system measures the overall desirability of the event to the system, according to all the goals that have been considered. Only events with significantly high desire-values are turned into goals that the system actually pursues. Of course, the system may still find unexpected consequences of its operations, but the situation is already very different from indiscriminately pursuing every derived task.
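The sketch below illustrates this two-step treatment under simplifying assumptions: a derived goal only adjusts the desire-value of its statement, and a statement is promoted to an actively pursued goal only when its accumulated desire-value is high enough. The threshold and the averaging rule are placeholders, not the actual NARS formulas; a truth-value-based combination is sketched after the next paragraph.

```python
# A minimal sketch of the two-step treatment of derived goals.
# The combination rule and the threshold are crude placeholders.

DESIRE_THRESHOLD = 0.75   # hypothetical value for "significantly high"

desire = {}               # statement -> current desire-value in [0, 1]
active_goals = set()      # statements the system actually pursues

def receive_derived_goal(statement, strength):
    # placeholder combination: move the old value toward the new evidence
    old = desire.get(statement, 0.5)
    desire[statement] = (old + strength) / 2
    if desire[statement] >= DESIRE_THRESHOLD:
        active_goals.add(statement)       # commit only at this point

receive_derived_goal("<door --> open>", 0.9)   # one goal wants the door open
receive_derived_goal("<door --> open>", 0.95)  # another derivation agrees
receive_derived_goal("<lamp --> on>", 0.4)     # weakly desired, not pursued
print(active_goals)   # only the strongly desired statement is pursued
```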

In NARS, the desire-value of a statement S is the truth-value of the statement "S is desired" (not the truth-value of S itself!). In this way, all calculations on desire-values are reduced to calculations on truth-values. When there are conflicting commands, the winner of the conflict is the goal with the higher desire-value. Still, the two measure different properties of a statement: while the truth-value indicates the evidential support for the statement, the desire-value indicates the system's feeling, or attitude, toward it.
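Since a desire-value is the truth-value of "S is desired", the desires contributed by different parent goals can be merged by truth-value revision. The function below follows the NAL revision rule as it is commonly presented; because this section does not spell out the formula, treat it as an assumption rather than a quotation.

```python
# Combining the desires for the same statement S coming from two different
# parent goals, treated as two truth-values of "S is desired".

def revise(f1, c1, f2, c2):
    """Combine two (frequency, confidence) pairs about the same statement."""
    w1 = c1 * (1 - c2)
    w2 = c2 * (1 - c1)
    f = (f1 * w1 + f2 * w2) / (w1 + w2)
    c = (w1 + w2) / (w1 + w2 + (1 - c1) * (1 - c2))
    return f, c

# "S is desired" according to two different parent goals:
print(revise(0.9, 0.8, 0.3, 0.5))   # about (0.78, 0.83)
# The result is more confident than either source, and its frequency leans
# toward the more confident one; when two goals issue conflicting commands,
# the statement with the higher resulting desire-value wins.
```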

A desire-value can similarly be attached to every term (or concept) in the system, according to its associated statements. This allows the inference control to take the system's preferences into consideration, and provides the foundation for the emotional mechanism of the system. For example, everything else being equal, preference should be given to concepts about which the system has strong feelings.

There is an intellectual tradition of contrasting intelligence/rationality with emotion/feeling, and treating them as separate processes. However, according to the theory presented in this book, emotion is a necessary aspect of an intelligent system, as long as the system is not too simple. Among other things, emotion, feeling, and desire give the system a way to generalize its relations with concepts. A system usually has many reasons to like (or dislike) various objects or events; however, whatever the reasons are, the simple desire-value suggests how the object or event should be treated, which facilitates quick processing of the related tasks. For a system with insufficient knowledge and resources, this kind of feeling-based quick processing is absolutely needed.

Goal alienation

Given the complicated relations among goals in an intelligent system, it is normal for the logical relation involved in goal derivation to become merely historical. That is, at a certain moment a child goal is derived from a parent goal, as a means to achieve that end. However, at a later moment the child goal becomes an end in itself, and its achieving has little to do with the achieving of the parent goal, and may even prevent the parent from being achieved. This is the phenomenon of "goal alienation".

This process starts as what Allport called "functional autonomy" [Allport, 1937]: after a child goal is derived from a parent goal, their derivation relation is not permanently maintained, and the child goal is processed independently, just like the parent goal.

What Allport described can be generalized to all intelligent systems. In NARS, derived goals are not bound to their parent goals; instead, they are processed independently. This treatment is very different from how goal derivation is handled in instinctive systems. For example, in a conventional computer system, a subroutine runs solely as part of the calling routine, and does not exist longer than the latter.
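The contrast can be made concrete with a toy example: in the conventional case the subgoal lives only inside the call that created it, while in a NARS-like pool a derived goal survives the removal of its parent. All names below are made up for illustration.

```python
# A schematic contrast between the two treatments described above.

def conventional(parent_goal):
    # the "subroutine": the subgoal exists only for the duration of this call
    result = f"subgoal of {parent_goal} handled"
    return result            # the subgoal disappears with the stack frame

print(conventional("serve the user"))

goal_pool = {"serve the user"}               # NARS-like: a flat pool of goals
goal_pool.add("learn the user's language")   # derived from the parent goal
goal_pool.discard("serve the user")          # parent achieved or forgotten
print(goal_pool)   # the derived goal remains and is processed on its own
```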

For a system running with insufficient resources, it is very inefficient, if not impossible, to always remember the derivation relations among goals and to consider them whenever a goal is processed. Furthermore, very often there are adaptive advantages in pursuing a child goal after its parent goal is no longer active (it may have been achieved, abandoned, or just be temporarily dormant), because the child goal may still be desired by other goals, or may be desired again in the future. With insufficient knowledge, the system has no sure way to decide whether a derivative goal is still worth pursuing merely from the status of the original goal(s) that brought it up in the first place.

Consequently, the goals a system has at a moment depend not only on the original goals of the system, but also on the current derivative goals, which are derived from the original goals and the relevant beliefs. Since the beliefs are just summarized experience, not "logical truth", they cannot guarantee the logical consistency and relevance between a parent goal and a child goal. This situation poses a great challenge to the control of intelligent systems, that is, to ensuring that an AI system works according to the expectations of its designer.

In discussions of the ethics and morality of AI, many people explicitly or implicitly assume that the key is to give an AI system proper "super goals", like Asimov's Three Laws of Robotics. The problem is that even if all other goals are indeed derived from some benign original goals, their consequences may turn out to be evil. Even if a goal A is desired, and the system believes that B is a proper step on the path to A, actually achieving B may make A impossible. Unless we assume sufficient knowledge, we cannot completely rule out this possibility.

How about deriving only goals that can be proved to be consistent with the given original goals? Though this can be done, it will only produce systems without any intelligence, like conventional computers, which only do what their human users literally ask them to do. Even though it does cause many undesired consequences, "goal alienation" is also the source of many desired and admired properties of intelligent systems. It can be argued that all the "human motivations" that distinguish us from other animals are ones that are highly alienated from their biological and evolutionary roots. Goal alienation also explains why people are motivated to participate in activities that have little "practical" value, such as enjoying art and music, as well as playing recreational games. No matter what their original purposes were (they were usually means to serve other ends), people now engage in them just "for fun", meaning that the activity has become a goal in its own right. Furthermore, many high-achieving people in various fields (sciences, arts, sports, business, etc.) are people who take pleasure in the activity for its own sake, rather than taking it as a means to another end (making money, becoming famous, saving the world, etc.).

Consequently, like it or not, goal alienation is an inevitable process in any truly intelligent system. To ban it means to ban intelligence altogether.

However, this conclusion should not be interpreted as saying that AI will eventually be out of control. An alienated goal still comes from a parent goal and a parent belief. To control the behaviors of an intelligent system, limiting the original goals is not enough, because the experience of the system is no less important. We will revisit this topic in Chapter 5.

Development of goal complex

In summary, the goals in an intelligent system, collectively called the "goal complex" of the system, go through a self-organization process while the system is running.

Initially, there are only original goals in the system, which are imposed on the system, and the system has no general way to restrict their content. In this sense, intelligence is morally neutral: it is neither decent nor evil by nature. Intelligence is also value neutral, because the "value" of an object or an event is evaluated with respect to the system's goals, and is therefore independent of the system's intelligence, being determined instead by its original goals and experience.

As soon as the system begins to interact with its environment, it forms beliefs according to its experience, and new goals are derived from existing goals and beliefs. As a result, the system soon has many coexisting goals. These goals have different weights in deciding what the system will do: at any moment, one or a few goals may be dominant, while the others have only minor impacts. However, even in these situations, the dominant goals are not necessarily the original goals, nor will their dominance continue forever.

Besides the difference in dominance (measured as priority in NARS), different goals also differ in durability, in that some goals are long-term while others are short-term, relatively speaking. Some goals appear periodically, such as the biological drives in the human mind.
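One way to picture these two dimensions is sketched below, loosely modeled on the budget values used in NARS implementations: priority measures how dominant a goal currently is, while durability controls how slowly that priority decays, so high-durability goals act as long-term goals. The exact decay rule here is an assumption made for illustration.

```python
# A sketch separating dominance (priority) from durability (decay rate).
# The multiplicative decay rule is an illustrative assumption.

goals = {                        # goal: [priority, durability]
    "stay-charged":    [0.9, 0.99],   # long-term, fades very slowly
    "answer-question": [0.8, 0.60],   # short-term, fades quickly
}

for cycle in range(10):
    for g, (p, d) in goals.items():
        goals[g][0] = p * d           # each priority decays at its own rate

print({g: round(p, 2) for g, (p, d) in goals.items()})
# After ten cycles the short-term goal has faded to nearly zero, while the
# long-term goal remains relatively dominant.
```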

Though conflict and competition among goals are inevitable, the system will do its best to achieve a balance among the coexisting goals, and try to satisfy as many of them as possible, to the extent allowed by the available knowledge and resources. Promising and coherent goals are rewarded, while exhausting and deviating goals are penalized. In this process, it is inevitable that some goals will fail to be achieved, and some will be achieved only to a very low extent. There is no sharp threshold for the system to reach or maintain in satisfying its goals.

In the long run, the system may successfully establish a relatively stable long-term goal complex, which serves as an important part of the system's "personality" and defines its enduring characteristics of behavior. In the complex we may find something like Maslow's "hierarchy of human needs", in terms of originality, priority, and durability. However, none of these factors by itself will always make a goal dominant, nor does the system achieve its goals "level by level", no matter how the levels are defined. At any moment, the dominant goal, if there is one, gets into that position as the overall result of many events.