A General Theory of Intelligence     Chapter 3. Inference System

Section 3.5. NARS: memory and control

Function of memory and control

In theory, the overall capability of a system like NARS is fully specified by its representation language and inference rules. A problem the system can accept must be represented as a question or goal in Narsese, and the best solution the system can find is determined by its beliefs, operations, and inference rules. If resources were not an issue, all the memory and control would need to support is to systematically and exhaustively go through all possible solutions to find the best one.

However, as in all realistic systems, resources are an issue, and NARS is designed according to the assumption of insufficient knowledge and resources. Consequently, in NARS the memory and control mechanism must be responsible for resource management, which means selectively turning some possibilities allowed by the language and rules into reality, by actually applying the corresponding rules on the premises, while at the same time ignoring some other possibilities, even though they are logically relevant.

Under the assumption of insufficient knowledge, the solutions NARS provides to problems cannot be guaranteed to be absolutely optimal, because the system does not have all the knowledge; under the assumption of insufficient resources, they are not even optimal relative to the available knowledge of the system. Instead, what can be expected in this situation is for them to be optimal relative to the available knowledge and resources.

As described in Section 3.4, in NARS "to process a task" means to let it, and its derived tasks, interact with the system's relevant beliefs, one per inference step. Since at each step, the task-belief combination completely decides which inference rules can use them as premises, and what kind of conclusions can be derived from them, the processing of a task is determined by the beliefs selected by the control mechanism to interact with it.

For a given task, how many beliefs should be taken into consideration? There are several conventional answers to this question, but none of them is proper in this case.

To work in such a situation, NARS dynamically allocates its resources among the tasks, which are processed in parallel, at different speeds. What the control strategy aims at is the overall quality and efficiency of the system on all current tasks, rather than on a single task.

Controlled concurrency

As a system working in real time, at any moment NARS processes many concurrent tasks, which compete for time and space resources. The control mechanism of the system allocates processor time and storage space among the tasks in such a way that, according to the system's experience, will use the system's resources most efficiently.

To indicate the importance of a task, it is given a numerical priority value, which is roughly proportional to the amount of processor time the task will be given in the near future. This priority value is not an absolute deadline, nor a number of beliefs to interact with, but is relative to the priority values of the other concurrent tasks. Consequently, the same task with the same priority value may get very different amounts of resources in different situations, in some of which the system is busy, while in others, idle.

In this way, the priority of a task indicates its processing speed, relative to other tasks. By default, the longer a task exists in the system, the less valuable it becomes, so the priority decays over time, at a rate specified by the durability value of the task. Together, priority and durability specify the current time budget of a task, that is, the amount of processing time the task is going to get in the near future.
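As an illustrative sketch only (the actual decay function and value ranges in NARS implementations may differ), priority decay at a rate set by durability could look like this:

```python
# Hypothetical sketch: a task's priority decays toward zero over time,
# at a rate controlled by its durability. Values and the multiplicative
# decay rule are illustrative assumptions, not the exact NARS formulas.

class Task:
    def __init__(self, priority, durability):
        self.priority = priority      # in [0, 1]: relative processing speed
        self.durability = durability  # in (0, 1): fraction of priority kept per cycle

    def decay(self):
        # Higher durability means slower decay; priority stays non-negative.
        self.priority *= self.durability

task = Task(priority=0.8, durability=0.9)
for _ in range(10):
    task.decay()
# After 10 cycles the priority is 0.8 * 0.9**10, roughly 0.28.
```

Under this rule a high-durability task keeps most of its processing speed across many cycles, while a low-durability one fades quickly, which matches the intuition that priority and durability together specify a time budget.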

The budget of a task is adjusted from time to time. Besides the gradual priority decay process, a priority can be dropped significantly when a best-so-far solution is found, since the task then becomes less demanding, compared to the others. However, since there is no such thing as a "perfect solution" in NARS, the budget of a "solved" problem usually remains non-zero, so as to allow the system to look for better solutions, though with fewer resources spent on it. On the other hand, the budget of a task can be increased if it is repeatedly derived, that is, if there are constant drives for its solution.
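A minimal sketch of these two adjustments, with function names and factors that are pure assumptions for illustration:

```python
# Illustrative budget adjustments. The names, the drop factor, the boost,
# and the floor value are assumptions, not the exact NARS mechanisms.

def on_solution_found(priority, drop_factor=0.5, floor=0.01):
    # A best-so-far solution makes the task less demanding, but its
    # priority stays non-zero so better solutions can still be sought.
    return max(priority * drop_factor, floor)

def on_rederived(priority, boost=0.2):
    # Repeated derivation signals a constant drive toward the task's
    # solution, so the task deserves more resources.
    return min(priority + boost, 1.0)

p = 0.6
p = on_solution_found(p)   # drops to 0.3, not to zero
p = on_rederived(p)        # rises back to 0.5
```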

The priority values of tasks are also used in space management. Under the assumption of insufficient resources, the storage space for tasks is finite. With the constant addition of new tasks (coming from the environment and derived by the system itself), there will eventually be a shortage of space. Whenever such a situation happens, the tasks with the lowest priority values are removed to make space for new ones.
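One simple way to realize such a fixed-capacity store is a min-heap keyed by priority, so the lowest-priority task is always the one evicted. This is a sketch under that assumption, not the data structure NARS actually uses:

```python
import heapq

# Sketch of a fixed-capacity task store: when the store is full, the
# lowest-priority task is removed to make room. The capacity and the
# heap-based design are illustrative assumptions.

class TaskStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap of (priority, name); the root is evicted first

    def add(self, priority, name):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (priority, name))
        elif priority > self.heap[0][0]:
            # Evict the current lowest-priority task in one operation.
            heapq.heapreplace(self.heap, (priority, name))
        # Otherwise the new task itself is the lowest-priority one: drop it.

store = TaskStore(capacity=3)
for p, n in [(0.9, "a"), (0.2, "b"), (0.5, "c"), (0.7, "d")]:
    store.add(p, n)
# Task "b" (priority 0.2) has been evicted; "a", "c", "d" remain.
```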

Now we can see that there are two types of forgetting happening to tasks: relative forgetting, in which a task's priority gradually decays, so that it gets less and less processing time; and absolute forgetting, in which a low-priority task is removed altogether to free storage space.

In traditional systems, a problem-solving process usually stops when a solution is found, or when all possibilities have been explored. In NARS, the process stops when it loses the competition for resources, no matter what kind of solution it has found. In this way, the system will never be trapped by a task that turns out to be too expensive to finish --- the task will simply be forgotten, unless there is enough reason to pursue it. Similarly, the system can allow open-ended processes for goals that can never be fully achieved --- the system will just work on them as far as its resources permit.

Memory structure

The above "controlled concurrency" strategy can also be applied to the selection of beliefs. In each inference step, every relevant belief has a chance to be selected, because the system has no sure knowledge of which will work and which will not. On the other hand, the system does not treat them as equal, because according to its experience, some of them have been more useful than the others, so they should be given a higher chance to be selected. The priority of a belief should also be determined by its quality (i.e., confidence or expectation) and relevance (i.e., whether it is related to the active tasks).

To let all tasks and beliefs interact and compete within the whole system is neither efficient nor necessary. Since in NARS each inference step typically takes a task and a belief as premises, and the two must contain a common term, it is natural to introduce an intermediate unit for processing and storage, identified by a term.

In NARS, each term can name a concept, which holds all the existing tasks and beliefs containing that term. Consequently, in each inference step, the two premises must come from the same concept. For example, all sentences containing the statement "robin → bird" or the statement "bird → animal" are collected in the concept "bird". If in an inference step one is the content of the selected task, and the other the selected belief, then the derived task will have the statement "robin → animal" as content, and it will be sent to concept "robin", concept "animal", and concept "robin → animal" (a compound term can also name a concept).
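As a toy sketch of this routing (using "-->" as an ASCII stand-in for the Narsese inheritance copula, and treating statements as plain strings rather than parsed terms), a derived statement can be sent to the concept of each of its terms, plus the concept named by the statement itself:

```python
# Hypothetical sketch of concept routing: a statement is stored in the
# concept of each component term, and also in the concept named by the
# whole statement (a compound term can name a concept too).

def concepts_for(statement):
    subject, predicate = statement.split("-->")
    return [subject, predicate, statement]

# The derived task "robin-->animal" goes to three concepts:
routes = concepts_for("robin-->animal")
# ['robin', 'animal', 'robin-->animal']
```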

At the level of concepts, the same resource allocation mechanism is used. Each concept has a priority value and a durability value attached to indicate its current budget. Intuitively, the budget of a concept reflects the total budgets of the tasks in it, and preference is also given to the concepts that are relatively "well-defined", that is, whose beliefs are more certain.

Now we can envision the memory of NARS as a two-layer structure: first, the memory can be seen as a collection of concepts, each named by a term; then, within each concept, there is a collection of tasks and a collection of beliefs. In all three types of "collections", the items (concepts, tasks, or beliefs) have associated priority and durability values, as well as a quality measurement. Alternatively, we can envision the memory of NARS as a network, with terms as nodes, and tasks and beliefs as links. In this image, a concept is a node together with all its directly associated links. To each node and link, a priority-durability-quality triple is attached.
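The two-layer structure can be made concrete with a few small data classes. This is a minimal sketch under stated assumptions (the field names, the dictionary keyed by term, and the default budget values are all illustrative, not the NARS implementation):

```python
from dataclasses import dataclass, field

# Every item --- concept, task, or belief --- carries a
# priority-durability-quality triple.
@dataclass
class Budget:
    priority: float
    durability: float
    quality: float

@dataclass
class Item:
    content: str   # a Narsese sentence, simplified here to a string
    budget: Budget

@dataclass
class Concept:
    term: str
    budget: Budget
    tasks: list = field(default_factory=list)
    beliefs: list = field(default_factory=list)

# Layer 1: the memory is a collection of concepts, each named by a term.
memory = {}

def get_concept(term):
    if term not in memory:
        memory[term] = Concept(term, Budget(0.5, 0.8, 0.5))
    return memory[term]

# Layer 2: within a concept, collections of tasks and beliefs.
c = get_concept("bird")
c.beliefs.append(Item("robin-->bird", Budget(0.9, 0.9, 0.8)))
```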

Inference process

As an inference system, NARS runs by repeating the following working cycle, each of which carries out one inference step:
  1. Select a concept from the memory, where the probability for each concept to be selected is proportional to its priority value.
  2. Select a task and a belief from the selected concept, also probabilistically.
  3. The combination of statements in the task and the belief decides which inference rules can be applied. Apply them to get the derived tasks.
  4. Process the result: add the derived tasks into the corresponding concepts. New input tasks are also added in this way.
  5. Adjust the priority-durability-quality values of the selected items (concept, task, and belief), according to the immediate feedback obtained from the current result.
The above process is implemented by an algorithm that takes a small constant time to finish. Of course, the actually implemented algorithm contains more details, though conceptually it does not differ much from the above description.
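The probabilistic, priority-proportional selection at the heart of steps 1 and 2 can be sketched with roulette-wheel selection. This is a heavily simplified toy (the selection scheme and all values are illustrative assumptions), but it shows how higher-priority items get proportionally more working cycles:

```python
import random

# Roulette-wheel selection: the probability for each item to be selected
# is proportional to its priority value.

def pick_by_priority(items, rng):
    # items: list of (priority, payload) pairs
    total = sum(p for p, _ in items)
    r = rng.random() * total
    for p, payload in items:
        r -= p
        if r <= 0:
            return payload
    return items[-1][1]  # guard against floating-point rounding

rng = random.Random(42)
concepts = [(0.9, "bird"), (0.1, "rock")]
counts = {"bird": 0, "rock": 0}
for _ in range(1000):
    counts[pick_by_priority(concepts, rng)] += 1
# "bird" is selected roughly nine times as often as "rock".
```

Because selection is probabilistic rather than strict, even low-priority concepts occasionally get a cycle, which matches the claim that every relevant belief has a chance to be selected.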

The complete processing of a given task normally consists of multiple working cycles. As described above, the overall process and result depend on the task itself and the related beliefs, as well as on other factors such as the priority distribution in the system and the existence of other tasks. From the task and the beliefs, the possible solutions can be determined in theory, though which ones will actually be produced, and in which order, is determined by the other factors.

When the environment (a human user or another computer) assigns an input task to NARS, it can either use the default budget or assign a specific priority and durability to influence the system's handling of the task. However, since these initial values will be modified many times by the system, it is the system itself that actually decides how much resource to spend on each task, according to the nature of the task, as well as the resource availability at the time. Consequently, there is no fixed "processing cost" for a task.

In summary, as discussed in Section 2.2, in NARS the processing of a task neither follows a predetermined algorithm nor has a fixed time-space complexity. Instead, tasks are processed in a case-by-case manner, each handled according to the currently available knowledge and resources. In the process, each step still follows an algorithm, but the steps are assembled into a problem-solving process in an experience-dependent and context-sensitive manner.

System parameters

The design of NARS leaves some parameters changeable from implementation to implementation.

For example, the constant k in the definition of confidence in Section 3.3 can take any positive value, though if the value is too large or too small, the system's behavior will look abnormal. Even so, it is hard to argue that there is an optimal value for this parameter in all possible intelligent systems. Instead, it is more like a "personality parameter" of the system — different choices will let confidence increase at different rates as evidence accumulates, and therefore lead to different preferences or biases in the system's behavior. Each choice may have some advantages and some disadvantages, so there is no best value, just like there is no "best character" for a person.
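Assuming the confidence definition from Section 3.3 takes the standard form c = n / (n + k), where n is the amount of accumulated evidence, a quick sketch shows how the choice of k changes the rate at which confidence grows:

```python
# Confidence as a function of evidence, assuming the form c = n / (n + k).
def confidence(n, k):
    return n / (n + k)

# With the same 4 pieces of evidence, a smaller k yields a "bolder"
# system (higher confidence), a larger k a more "cautious" one:
bold = confidence(4, 1)      # 0.8
cautious = confidence(4, 9)  # about 0.31
```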

Similar cases appear in many places in NARS. For example, how much space should a concept occupy? How fast should a belief be forgotten? What should be the default confidence value for input judgments? In each case there is a rough notion of "normal values", though its boundary is fuzzy, and the choice within it is more or less arbitrary. On the other hand, to keep internal consistency and coherence, it is better for each system to keep its parameters fixed, though different systems (which are all NARS by definition) can choose different values.

Consequently, when multiple copies of NARS are implemented, each of them may have a different personality, determined by its "DNA", the values of the system parameters. Even if they are given exactly the same experience, the systems may behave more or less differently, though within a certain range.