How "intelligent" can Artificial Intelligence get?

Tuesday, December 13, 2016

How "intelligent" can Artificial Intelligence get?



By Dr Richard Benjamins, VP for External Positioning and Big Data for Social Good at LUCA.


This post is the second in a series of three posts, each of which discusses the fundamental concepts of Artificial Intelligence. In our first post we discussed AI definitions, helping our readers understand the basic concepts behind AI and giving them the tools required to sift through the many AI articles out there and form their own opinion. In this second post, we will discuss several notions that are important for understanding the limits of AI.



Figure 1: How intelligent can Artificial Intelligence get?

Strong and weak AI

When we speak about how far AI can go, there are two “philosophies”: strong AI and weak AI. The most commonly followed philosophy is that of weak AI, which holds that machines can manifest certain intelligent behavior to solve specific (hard) tasks, but that they will never equal the human mind. Strong AI, by contrast, holds that this is indeed possible. The difference hinges on the distinction between simulating a mind and actually having a mind. In the words of John Searle, "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."

The Turing Test

Figure 2. The setup of the original Turing Test.

The Turing Test was proposed by Alan Turing in the 1950s and was designed to evaluate the intelligence of a computer holding a conversation with a human. The human cannot see the computer and interacts with it through an interface (at that time, by typing on a keyboard with a screen). In the test, a person asks questions and either another person or a computer program responds. There are no limitations as to what the conversation can be about. The computer passes the test if the questioner cannot distinguish whether the answers come from the computer or from the person. ELIZA was the first program that challenged the Turing Test, even though it unquestionably failed. A modern version of the Turing Test was featured in the 2015 movie Ex Machina, which you can see in the video below. So far, no computer or machine has passed the test.



The Chinese Room Argument

A very interesting thought experiment in the context of the Turing Test is the so-called "Chinese Room Experiment", devised by John Searle in 1980. Searle argues that a program can never give a computer the ability to really "understand", regardless of how human-like or intelligent its behavior is. It goes as follows: Imagine you are inside a closed room with a door. Outside the room there is a Chinese speaker who slips a note with Chinese characters under the door. You pick up the note and consult a large book of instructions that tells you exactly, for the symbols on the note, which symbols to write down on a blank piece of paper. You process each symbol from the note following the book's instructions, produce a new note, and slip it back under the door. The note is picked up by the Chinese speaker, who perfectly understands what is written, writes back, and the whole process starts again, meaning that a real conversation is taking place.
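As a loose illustration (not part of Searle's original argument), the "rule book" can be thought of as a lookup table that maps input symbols to output symbols without any notion of meaning. The entries below are invented placeholders, a minimal sketch of pure symbol manipulation:

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" maps input strings to output strings; the operator
# applies it mechanically without understanding either side.
# All entries are invented placeholders, not a real conversation script.

RULE_BOOK = {
    "你好吗": "我很好",        # hypothetical entry: a greeting and its scripted reply
    "你叫什么名字": "我叫小明",  # hypothetical entry: a question and its scripted reply
}

def operator(note: str) -> str:
    """Follow the book mechanically; no understanding is involved."""
    return RULE_BOOK.get(note, "对不起")  # default placeholder reply

print(operator("你好吗"))  # prints the scripted reply from the book
```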

Figure 3. The Chinese Room thought experiment. Does the person in the room understand Chinese?

The key question here is whether you understand the Chinese language. What you have done is receive an input note and follow instructions to produce an output, without understanding anything about Chinese. The argument is that a computer can never understand what it does, because - like you - it just executes the instructions of a software program. The point Searle wanted to make is that even if the behavior of a machine seems intelligent, it will never be truly intelligent. On that basis, Searle claimed that the Turing Test is invalid.

The Intentional Stance

Related to the Turing Test and the Chinese Room argument, the Intentional Stance, coined by philosopher Daniel Dennett in the seventies, is also relevant to this discussion. On this view, the "intelligent behavior" of machines is not a consequence of how machines come to manifest that behavior (whether it is you following instructions in the Chinese Room or a computer following program instructions). Rather, it is an effect of people attributing intelligence to a machine because the behavior they observe would require intelligence if a person were to produce it. A very simple example is that we say our personal computer is "thinking" when it takes longer than we expect to perform an action. The fact that ELIZA was able to fool some people illustrates the same phenomenon: because of the reasonable answers that ELIZA sometimes gives, people assume it must have some intelligence. But we know that ELIZA is a simple pattern-matching, rule-based algorithm with no understanding whatsoever of the conversation it is engaging in. The more sophisticated software becomes, the more likely we are to attribute intelligence to it. From the Intentional Stance perspective, people attribute intelligence to machines when they recognize intelligent behavior in them.
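To get a feel for how little machinery is behind this illusion, here is a toy ELIZA-style pattern matcher. It is a sketch in the spirit of the original program, not Weizenbaum's actual rule set; the patterns and canned replies are invented:

```python
import re

# A toy ELIZA-style responder: a few regular-expression patterns paired
# with canned reply templates. Invented rules, for illustration only.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r".*mother.*", re.I), "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am tired of waiting"))
# -> "Why do you say you are tired of waiting?"
```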

To what extent can machines have "general intelligence"?

One of the main aspects of human intelligence is that we have a general intelligence which always works to some extent. Even if we don't have much knowledge about a specific domain, we are still able to make sense of situations and communicate about them. Computers are usually programmed for specific tasks, such as planning a space trip or diagnosing a specific type of cancer. Within the scope of the subject, computers can exhibit a high degree of knowledge and intelligence, but performance degrades rapidly outside that specific scope. In AI, this phenomenon is called brittleness (as opposed to graceful degradation, which is how humans perform). Computer programs perform very well in the areas they are designed for, sometimes outperforming humans, but perform poorly outside that specific domain. This is one of the main reasons why it is so difficult to pass the Turing Test, as passing would require the computer to "fool" the human tester in any conversation, regardless of the subject area.

In the history of AI, several attempts have been made to solve the brittleness problem. The first expert systems were based on the rule-based paradigm, representing associations of the type "if X and Y then Z; if Z then A and B", etc. For example, in the area of car diagnostics: if the car doesn't start, then the battery may be flat or the starter motor may be broken. In this case, the expert system would ask the user (who has the problem) to check the battery or check the starter motor. The computer drives the conversation with the user to confirm observations, and based on the answers, the rule engine leads to the solution of the problem. This type of reasoning was called heuristic or shallow reasoning. However, the program doesn't have any deeper understanding of how a car works; it knows the knowledge that is embedded in the rules, but cannot reflect on this knowledge. Based on the experience of those limitations, researchers started thinking about ways to equip a computer with more profound knowledge so that it could still perform (to some extent) even if the specific knowledge was not fully coded. This capability was coined "deep reasoning" or "model-based reasoning", and a new generation of AI systems emerged, called "Knowledge-Based Systems".
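A minimal sketch of such a shallow, rule-based engine for the car example might look as follows. The rules, fact names, and recommendations are invented for illustration, not taken from any actual expert system:

```python
# A minimal forward-chaining rule engine in the "if X and Y then Z" style
# of early expert systems. Rules and fact names are invented placeholders.
RULES = [
    ({"car_does_not_start", "lights_dim"}, "battery_flat"),
    ({"car_does_not_start", "no_cranking_sound"}, "starter_motor_broken"),
    ({"battery_flat"}, "recommend_recharge_battery"),
    ({"starter_motor_broken"}, "recommend_replace_starter"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly fire rules whose conditions hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"car_does_not_start", "lights_dim"}))
# includes 'battery_flat' and 'recommend_recharge_battery'
```

Note that the engine only chains associations; nowhere does it represent how a car actually works, which is exactly the limitation described above.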

In addition to specific association rules about the domain, such systems have an explicit model about the subject domain. If the domain is a car, then the model would represent a structural model of the parts of a car and their connections, and a functional model of how the different parts work together to represent the behavior of the car. In the case of the medical domain, the model would represent the structure of the part of the body involved and a functional model of how it works. With such models the computer can reason about the domain and come to specific conclusions, or can conclude that it doesn't know the answer.
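As a rough sketch of the idea, a model-based diagnoser describes how each component should behave and flags the components whose expected behaviour contradicts the observations. The component names and behaviours below are invented for illustration:

```python
# A rough sketch of model-based ("deep") reasoning: each component has an
# expected behaviour given the observations; a component whose prediction
# conflicts with what is observed becomes a diagnosis candidate.
# Components and observation names are invented placeholders.

COMPONENT_MODEL = {
    "battery": lambda obs: obs["voltage_ok"],          # battery OK => voltage OK
    "starter": lambda obs: obs["engine_cranks"],       # starter OK => engine cranks
    "fuel_pump": lambda obs: obs["fuel_pressure_ok"],  # pump OK => fuel pressure OK
}

def diagnose(observations: dict) -> list[str]:
    """Return components whose expected behaviour conflicts with the observations."""
    return [name for name, behaves_ok in COMPONENT_MODEL.items()
            if not behaves_ok(observations)]

obs = {"voltage_ok": True, "engine_cranks": False, "fuel_pressure_ok": True}
print(diagnose(obs))  # -> ['starter']
```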

The more profound the model a computer can reason about, the less superficial its reasoning becomes and the closer it approaches the notion of general intelligence.

There are two additional important aspects of general intelligence where humans excel compared to computers: qualitative reasoning and reflective reasoning.

Figure 4: Both qualitative reasoning and reflective reasoning differentiate us from computers.

Qualitative reasoning

Qualitative reasoning refers to the ability to reason about continuous aspects of the physical world, such as space, time, and quantity, for the purpose of problem solving and planning. Computers usually calculate things in a quantitative manner, while humans often use a more qualitative way of reasoning (if X increases, then Y also increases, thus ...). The qualitative reasoning area of AI develops formalisms and procedures that enable a computer to perform such qualitative reasoning steps.
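One common formalism represents quantities only by the sign of their change ("+", "-", "0") and propagates those signs through the model. The sketch below shows the core idea; the tank example and variable names are illustrative, not a full qualitative-physics implementation:

```python
# A tiny piece of qualitative reasoning: quantities carry only the sign of
# their change ('+', '-', '0'), and influences propagate those signs.
# The example variables and influences are illustrative.

def combine(influence_sign: str, change_sign: str) -> str:
    """Propagate a change through a positive ('+') or negative ('-') influence."""
    if change_sign == "0":
        return "0"
    flip = {"+": "-", "-": "+"}
    return change_sign if influence_sign == "+" else flip[change_sign]

# Example: inflow of water into a tank positively influences the water level,
# and the level positively influences the pressure at the bottom.
inflow_change = "+"                          # the inflow increases
level_change = combine("+", inflow_change)   # so the level increases
pressure_change = combine("+", level_change) # so the pressure increases
print(level_change, pressure_change)         # -> + +
```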

Reflective reasoning

Another important aspect of general intelligence is reflective reasoning. During problem solving, people are able to take a step back and reflect on their own problem-solving process, for instance when they reach a dead end and need to backtrack to try another approach. Computers usually just execute the fixed sequence of steps the programmer has coded, with no ability to reflect on the steps they take. To enable a computer to reflect on its own reasoning process, it needs to have knowledge about itself: some kind of meta-knowledge. For my PhD research, I built an AI program for diagnostic reasoning that was able to reflect on its own reasoning process and select the optimal method depending on the context of the situation.
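A very rough sketch of such meta-level control is a program that inspects features of the current problem and chooses among its own problem-solving methods accordingly. The strategies and context features below are invented for illustration; this is not a reconstruction of the PhD system mentioned above:

```python
# A rough sketch of reflective (meta-level) control: the program inspects
# features of the current case and selects one of its own methods.
# Strategies and context features are invented placeholders.

def heuristic_diagnosis(case: dict) -> str:    # fast, shallow rule-based method
    return "heuristic answer"

def model_based_diagnosis(case: dict) -> str:  # slower, deeper reasoning over a model
    return "model-based answer"

def reflective_solver(case: dict) -> str:
    """Meta-level step: pick the method that best fits the situation."""
    if case.get("time_pressure") and case.get("covered_by_rules"):
        method = heuristic_diagnosis
    else:
        method = model_based_diagnosis
    return method(case)

print(reflective_solver({"time_pressure": True, "covered_by_rules": True}))
# -> "heuristic answer"
```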

Conclusion

Having explained the above concepts, it should be somewhat clearer that there is no concrete answer to the question posed in the title of this post; it depends on what one wants to believe and accept. By reading this series, you will have learned some basic concepts which will enable you to feel more comfortable talking about the rapidly growing world of AI. The third and last post will discuss whether machines can think, or whether humans are indeed machines. Stay tuned and visit our blog soon to find out.
