A General Theory of Intelligence     Chapter 5. Experience and Socialization

Section 5.5. Education

Education and intelligence

Education, or training, is a special type of socialization, in which the system being educated or trained is provided with a predetermined partial experience so as to produce a desired result, namely certain goals, beliefs, and actions in the system. It is a semi-compulsory form of socialization, carried out mainly to serve the aims of other systems, or of the society as a whole.

For a society, education is an efficient way to pass certain experience on to its new members. This mechanism is necessary for a social system to adapt to its environment while keeping internal consistency among its members, though very often various biases are spread by this process as well.

With the birth of truly intelligent computer systems, "education of AI" will become a necessary step for a system to acquire the desired functionality. Now is the time to summarize the developmental stages a NARS implementation will go through in its life cycle:

  1. Intrinsic core: This is basically what has been called "NARS" in this book, whose structure is described in Chapter 3 and whose running process is described in Chapter 4. This part of the system is coded in a programming language, and is not changed during the life cycle of the system. The design is domain-independent and shared by all implementations, except for the values of a group of system parameters (see Section 3.5).
  2. Executable operations: These are the sensorimotor mechanisms or hardware/software tools that augment NARS into NARS+, as described in Section 5.1. These operations define the possible forms of interaction between the system and its environment. They depend on the hosting system and can differ from implementation to implementation. They usually remain unchanged during the system's life cycle, except for the tools, each of which is accessible to the system only for a certain period.
  3. Initial memory: When the system's life cycle starts, its memory can either be empty or loaded with certain content. This initial memory is equivalent to a certain experience, and its content can be revised by the system's future experience. The initial memory may contain domain-specific content.
  4. Education time: In the early stage of the system's life cycle, the system's experience is under control, either by being fed predetermined materials, or by interacting only with a tutor system (a human or another computer).
  5. Exploration time: When the system is "mature" enough, it is released into an environment, and allowed to have unrestrained experience. It will adapt and learn on its own.

As mentioned previously, education and loaded memory are interchangeable: as soon as a system is properly educated, its memory can be copied into another system as its initial memory, so as to skip the education time. Also, education time and exploration time can be blended and interwoven in various ways.
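
To make the stages concrete, here is a minimal sketch of such a life cycle in Python. All names in it (IntrinsicCore, register_operation, load_memory, and so on) are invented for illustration and do not come from an actual NARS implementation.

    class IntrinsicCore:
        """Stage 1: the fixed, domain-independent core (Chapters 3 and 4)."""
        def __init__(self, parameters):
            self.parameters = parameters   # only these values vary (Section 3.5)
            self.memory = {}               # beliefs and tasks, revisable later
            self.operations = {}           # Stage 2: executable operations

        def register_operation(self, name, procedure):
            # Stage 2: a sensorimotor mechanism or tool that turns NARS into NARS+
            self.operations[name] = procedure

        def load_memory(self, content):
            # Stage 3: initial memory, equivalent to a certain prior experience
            self.memory.update(content)

        def absorb(self, item):
            # Placeholder for the inference cycle of Chapter 4: a real system
            # would actively process the item, not just record it.
            self.memory[item] = self.memory.get(item, 0) + 1

    def life_cycle(system, curriculum, environment):
        for item in curriculum:            # Stage 4: controlled experience
            system.absorb(item)
        educated = dict(system.memory)     # an educated memory can be copied into
        for item in environment:           #   another system, as noted above
            system.absorb(item)            # Stage 5: unrestrained experience
        return educated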

The system's behavior is influenced by the above factors in different ways. Roughly speaking, its "intelligence" comes from its intrinsic core, and is innate, fixed, general-purpose, and domain-independent; its "capability" comes from the other factors in the list, and is mostly acquired, growing, special-purpose, and domain-dependent. The two cannot substitute for each other. Even if a system is fully intelligent by design, it may still fail to achieve any useful capability if the education process is confused and misguided. On the other hand, an innate defect in a system's intelligence usually cannot be made up for by education.

Principles of education

The education of intelligent systems will, to a large extent, follow the same principles and procedures as human education.

For an AI system like NARS, simply loading a large number of tasks and beliefs into its memory is not the right way to educate it, because the memory does not merely contain the tasks and beliefs that are directly expressible in the experience of the system. More accurately, the system's knowledge cannot be directly and efficiently acquired as a sequence of tasks and beliefs.

Therefore, during education the system is not passively recording whatever the educator provides, but actively processing the teaching materials and organizing its goals, actions, and beliefs accordingly.

This means an educator should have an education plan, as well as a good understanding of the usual processing procedures in the student system to be educated, and should take both into consideration when working toward a given objective.

According to the above general principle, the education process is very different from how conventional "knowledge-based" systems and other similar systems acquire their knowledge, which is usually either by loading domain-specific "rules" or "cases" (as data) into a knowledge base (like a database), or by repeatedly imposing the desired input-output pairs on an adaptive module until it converges to the function specified by the training data. For an intelligent system, its memory is too dynamic to be treated as a conventional knowledge base, and its experience-behavior relation is too flexible to be treated as a function.
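
This contrast can be made concrete with a toy sketch (all names invented for illustration): a conventional knowledge base returns stored items verbatim, while an adaptive memory's answer to the same question depends on its whole experience so far, and keeps changing.

    # A conventional knowledge base: items are stored and returned as-is.
    static_kb = {"bird -> flyer": True}

    def kb_answer(query):
        return static_kb.get(query)

    # A toy stand-in for a dynamic memory: evidence is weighed, not stored verbatim.
    class AdaptiveMemory:
        def __init__(self):
            self.evidence = {}                   # statement -> (positive, total)

        def absorb(self, statement, positive=True):
            pos, total = self.evidence.get(statement, (0, 0))
            self.evidence[statement] = (pos + (1 if positive else 0), total + 1)

        def answer(self, statement):
            pos, total = self.evidence.get(statement, (0, 0))
            return None if total == 0 else pos / total   # frequency so far

    memory = AdaptiveMemory()
    memory.absorb("bird -> flyer")                  # positive evidence
    memory.absorb("bird -> flyer", positive=False)  # conflicting evidence
    print(kb_answer("bird -> flyer"))               # always True, fixed
    print(memory.answer("bird -> flyer"))           # 0.5 for now, still revisable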

Educating human-compatible AI

According to the theory presented in this book, an AI system does not have to have human-like sensorimotor capacities, nor, therefore, to perceive the environment using human-like categories. However, there are many practical reasons to make some AI systems human-compatible, meaning that the system's goals, actions, and beliefs overlap with those of a human being.

One advantage of developing human-compatible AI is that such a system can be educated with human knowledge, which already exists in various forms and with all kinds of content.

Since most human knowledge is expressed in natural language, it will be convenient for the system to learn a natural language first (as described in Section 5.3), so that it can then be educated using materials in that language.

Another source of knowledge is the data and knowledge in various computer-processible formats, such as databases, spreadsheets, and markup languages. To acquire knowledge from these sources, a system like NARS can either learn directly how to convert each data item from its native format into Narsese, or use a special-purpose software tool for the conversion, perhaps even performing some data mining or knowledge discovery first, so that the system is fed only the result of this preprocessing rather than the "raw data".
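
As a rough illustration of the first option, the sketch below converts rows of a simple two-column table into Narsese-like inheritance statements. The converter is invented for illustration rather than taken from an existing tool, and the output syntax is only approximate.

    import csv, io

    def rows_to_narsese(csv_text):
        """Turn each (subject, predicate) row into a Narsese-like judgment."""
        statements = []
        for subject, predicate in csv.reader(io.StringIO(csv_text)):
            # e.g. the row "Tweety,bird" becomes "<Tweety --> bird>."
            statements.append(f"<{subject.strip()} --> {predicate.strip()}>.")
        return statements

    table = "Tweety,bird\ncanary,bird"
    for statement in rows_to_narsese(table):
        print(statement)   # each line could then be fed to the system as experience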

The human knowledge involved in this process can be either common-sense knowledge or expert knowledge. Though these two types of knowledge often come from different sources, there is no reason to believe that they should be processed differently in an intelligent system. There are no separate mechanisms for "common-sense reasoning" and "expert reasoning", though as knowledge, expertise is usually more accurate and less ambiguous than common sense.

No matter how carefully the teaching materials are chosen and the education is carried out, an AI system usually will not end up with knowledge exactly like that of a typical human being: at the very least, it usually lacks human biological and social experience, and any simulation of them has limits. It is as unrealistic to expect an AI to behave exactly like a human as it is to expect people who grew up in very different societies to agree with each other on everything. It is important to understand that such a difference cannot be used as a reason to consider one system "more intelligent" than another. The systems may be equally intelligent but have gone through different education and socialization processes, and so end up with different behaviors. As far as the current discussion is concerned, we cannot say which of the systems is "better"; it is better to simply consider them "different".

AI Ethics

The ethics of AI is a topic that has raised many debates, both among researchers in the field and among the general public. Since many people see "intelligence" as what makes humans the dominant species in the world, they worry that AI will take that position, and that the success of AI will actually lead to a disaster.

This concern is understandable. Though advances in science and technology have solved many problems for us, they have also created various new problems, and sometimes it is hard to say whether a specific theory or technique is beneficial or harmful. Given the potential impact of AI on human society, we AI researchers do have the responsibility of carefully anticipating the social consequences of our research results, and of doing our best to bring out the benefits of the technology while preventing its harms.

Previously, the factors influencing the system's behavior have been listed. Among them, the core intelligence, as represented by NARS, is morally neutral; that is, the degree of intelligence of a system has nothing to do with whether the system is considered beneficial or harmful, either by a single human or by the human species as a whole, because the intelligence mechanism is independent of the content of the system's goals, actions, and beliefs, which are determined mainly by the system's experience.

Therefore, to control the behavior of an intelligent system means to control its experience, that is, to educate the system. We cannot simply design a human-friendly AI; we have to teach an AI to be human-friendly, using carefully chosen materials to shape its goals, actions, and beliefs. Initially, we can load its memory with certain goals and beliefs, in the spirit of Asimov's Three Laws of Robotics, as well as with many more detailed requirements and regulations.
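
For instance, such an initial memory might contain statements along the following lines. The Narsese-like syntax is only approximate, and the loading interface shown is a hypothetical placeholder rather than an actual implementation's API.

    # Illustrative only: hypothetical initial goals and beliefs, written in
    # approximate Narsese, loaded before the system's own experience begins.
    # Like all loaded content, they remain revisable by later experience.
    initial_memory = [
        "<{SELF} --> [human_friendly]>!",        # a goal (marked by '!')
        "<harming_a_human --> forbidden_act>.",  # a belief, echoing the First Law
        "<obeying_human_orders --> good_act>.",  # ... and the Second Law
    ]

    def load_initial_memory(system, statements):
        # 'input' stands in for whatever interface the real system offers
        for statement in statements:
            system.input(statement)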

The difficulty of this topic comes from the fact that for a sufficiently complicated intelligent system, it is practically impossible to fully control its experience. To put it another way, if a system's experience could be fully controlled, its behavior would be fully predictable; however, such a system could not be fully intelligent. As explained in Section 4.2, the derived goals of an intelligent system are not always consistent with their parent goals. Similarly, the system cannot fully anticipate all the consequences of its actions, so even if its goal is benign, the actual consequence may still turn out to be harmful, to the surprise of the system itself.

As a result, the fundamental ethical and moral status of AI is the same as that of most other science and technology: neither beneficial in a foolproof manner, nor inevitably harmful. The situation is similar to what every parent learns: a friendly child is usually the product of education, not bioengineering, though this "education" is not a one-time event, and one should always be prepared for unexpected developments. AI researchers have to keep the ethical issues in mind and make the best choices at each design stage, without expecting to settle the issue once and for all, or to cut off the research altogether just because it may go wrong; that is not how an intelligent species deals with uncertain situations.