Lecture 8: The Chinese Room Argument and Replies to It

[I, Sandra LaFave, did NOT write this. I copied it off the web a couple of years ago and now I can't find it any more. If you know who wrote this, could you email me so I can give the author proper credit? Thanks!]

 

Key Ideas of this Lecture

Searle argues against what he calls Strong AI. On his view:

  1. Strong AI is the view that to have a mind is to run the right computer program
  2. Weak Artificial Intelligence is the view that minds may be simulated using computers

Searle says he is arguing against Strong AI and can accept Weak AI, which only claims to be able to simulate minds. However, there are many in the AI camp who would argue against Searle that machines can have minds, but who hold that in order to have a mind you need the physical capacity to interact with an environment, and this requires more than merely the right program. You need the right hardware as well. In our case the right hardware (or wetware!) is the brain, which is capable of taking input from our sense organs and controlling our speech, movements, etc. There may be other kinds of hardware that will support sensory interaction and intelligent control of motor movement. Given sophisticated enough hardware, a robot may be capable of having a mind.

In his Chinese Room Argument Searle points out that a human could simulate a computer running a program to (e.g.) pass the Turing Test. Thus someone ignorant of Chinese could follow the rules of a program for communicating in Chinese. Even if the program were capable of passing the Turing Test, the human would not come to understand Chinese as a result of following the program. Following the program would just be a matter of manipulating Chinese symbols.

Against Searle it is argued that the human with the program would not in fact pass the Turing Test, nor would any computer which ran the program as slowly as a human would. It is also argued that the appropriate test is the Super Turing Test (see last lecture). But what is required to pass that test is a robot, not a mere computer, and its electronic brain must be capable of supporting sensory interaction, controlling limb movement, etc. Fodor is an example of someone who argues for the so-called Robot Reply to Searle.

The difference between Searle and his opponents is that he thinks only an organic brain can sustain thought, whereas his opponents think that other physical systems may also support thought. They can think this without supporting Strong AI as he envisages it.

Searle and the Chinese Room Argument

1. The position Searle is arguing against

Thirty years after Turing put forward his Turing Test proposal, the philosopher J. R. Searle published an apparently powerful argument against Artificial Intelligence (Behavioral and Brain Sciences, 1980). Searle distinguishes between:

  • Strong Artificial Intelligence: the view that to have a mind is to run the right computer program (i.e. one or other of a vast set of programs for possible minds)

    and

  • Weak Artificial Intelligence: the view that minds may be simulated using computers, but to have a mind a being has to do more than just run the right program.

Searle says he is arguing against Strong Artificial Intelligence. So you might think that Searle's conclusion directly contradicts Turing's claim that, if a machine could pass the Turing Test, then, for scientific purposes, it would be the equivalent of a thinker. But is Turing's view a form of Strong Artificial Intelligence (as Searle defines it) or not? The answer is no, for two reasons:

  1. In his paper Turing does not argue that his test will prove that a machine has a mind, but only that it will be the equivalent of a thinker for scientific purposes.
  2. The Turing test requires more than that a machine runs the right program.

As far as the first point is concerned, Searle would probably say that what he shows is that Turing's replacement for the question, "Can machines think?", cannot possibly be regarded as a precise scientific version of the original question. On Searle's view all that a machine passing the Turing Test would prove is the Weak AI view that minds may be simulated using computers.

But what Searle does not recognise is that there is a range of Artificial Intelligence positions, each of which differs from Strong AI and from the mere claim that computers can be used to simulate thought. For example, there are the following positions:

  1. AI: Alternative 1: to have a mind is to run the right program on a physical system that is capable of interacting with the environment in the right sort of way.

  2. AI: Alternative 2: to have a mind is to run the right program on a physical system that has the right history and that is capable of interacting with the environment in the right sort of way.

Turing's view is a version of Alternative 1. He requires that the machine be able to interact with its human interlocutors by producing the right conversational interactions and by producing these answers at the right speed. Thus a computer could be "running the right program", but if it ran it too slowly or too quickly then it would not fool its human interlocutors. In order for a given program to work it would have to run on a fast enough processor. In case you think that the speed at which you can answer a question is irrelevant to intelligence, think of what we do in intelligence tests and examinations. We require not only that people can answer the questions, but that they be able to answer them quickly enough. Indeed, the phrase "slow on the uptake" is used to mean "unintelligent".

In the last lecture we considered the idea that a better test would be the Super Turing Test, in which objects could be passed in and out of the Turing Test room, and the two beings in the room would be asked to perform actions on them (ranging from painting to scientific experiments). The Super Turing Test would require that the machine in the room be able to perform all these tasks, and would thus require robotic capacities: using sensors and motor control to interact with the world. We also considered the idea that an intelligent machine would need to be a learning machine. Ideally it would have to be able to do all the learning which a human baby is able to do. If this is correct then the machine would have to be capable of learning to use sensors and motor control. If the machine were to be human-like in these respects then it would have to be able to perform all these tasks at the right speed. If we required that the machine must have had the right sort of history of learning, then we would have the AI position Alternative 2.

Searle's Chinese Room Argument

Consider a man in a room who receives messages consisting of symbols passed in to him, and who follows a book of rules that determines which messages he should pass out of the room. The man has no idea what the messages passed in mean, nor what the messages he passes out mean. Searle's idea is that the man is doing what a digital computer does. Let us suppose that what the man is actually doing is taking orders (the Chinese words for "No. 21", etc.) for a Chinese takeaway and passing on messages in Chinese to the kitchen. But the man does not know this, and thus is not thinking about food or even about the Chinese language.

Instead of being in a Chinese takeaway, the man could be playing the part of a digital computer in a version of Turing's "game". In that case the rules for getting from the messages input (which would be in some language [L] unknown to the man) to the messages output (also in [L]) would doubtless be very complex, and the man would be extremely slow, compared with modern electronic computers, in executing them. But, leaving speed aside, the man could execute any program that a digital computer could execute. If the man were executing, unbeknownst to him, a program for playing the part of A in the "game", he would not be thinking about what B was thinking about. (Suppose the question was, "What is the highest mountain in the world?"; then B would be thinking about mountains in order to answer it, and so would any human being straightforwardly playing A, but the man following the program would only be thinking about shuffling symbols.) Thus even if the man, in his role as a digital computer, mimicked B successfully for the purposes of the "game", this would not show that he and B were thinking about the same things, or that he and a human straightforwardly playing A were thinking about the same things. Therefore (according to Searle) success in the "game" cannot show that a digital computer "understands" what people are thinking about; but if it cannot show this, then surely Turing's question is not a more precise version of the question, "Can machines think?".
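To make vivid the sense in which the man is only shuffling symbols, here is a minimal sketch in Python (an illustration of the general idea, not anything Searle himself gives; the rule book, its entries, and the function name are all hypothetical). The "rule book" is just a lookup table pairing incoming symbol strings with outgoing symbol strings; nothing in the program represents greetings, food, mountains, or the Chinese language. A rule book actually capable of passing the Turing Test would of course have to be vastly larger and more sophisticated, but the point about pure symbol manipulation would be the same.

    # A toy "rule book" as a lookup table: the entries below are hypothetical.
    # The program pairs symbol shapes with symbol shapes; nothing in it
    # represents what the symbols are about.
    RULE_BOOK = {
        "你好吗?": "很好, 谢谢.",
        "珠穆朗玛峰是世界上最高的山吗?": "是的.",
    }

    def chinese_room(message: str) -> str:
        """Return whatever symbols the rule book pairs with the incoming symbols."""
        # The default reply is just another opaque string of symbols.
        return RULE_BOOK.get(message, "对不起, 我不明白.")

    if __name__ == "__main__":
        # A sensible-looking reply is produced without anything in the system
        # understanding what the exchange is about.
        print(chinese_room("你好吗?"))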

This is a good argument against Strong AI as Searle characterises it, but is it a good argument against AI Alternative 1 or 2 above? Someone arguing for AI Alternative 1 could argue that the man has not got the right sort of processor to run the language-understanding program quickly enough. Only a being with the right sort of physical equipment to run the program quickly enough is a candidate for understanding. If the enthusiast for AI Alternative 1 also believes that the Super Turing Test is preferable to the Turing Test, and believes in the importance of learning, then he/she may argue against Searle that the man in the Chinese Room will have no capacity to interact with objects on the basis of the "language understanding" program (the man couldn't do the cooking, for example!), and will have no capacity for learning.

It is worth noting that Searle, unlike Descartes or Plato, is not a substance dualist. He does not believe in souls that are independent of brains. What he believes is that the special organization of organic matter has the power to cause special "higher level states" of matter to come into being. These states have intentionality, that is, aboutness (meaning or semantics): e.g. if I have a belief that Everest is the highest mountain in the world, then it is a belief about Everest, about the world, about mountains. Searle does not think that electronic computers have such higher level states. The enthusiast for AI Alternative 1 argues that the organization of certain "electronic brains" - ones with great enough processing power and the relevant physical control capacities - might also enable such "higher level" states to come into being.

Fodor's Robot Reply

There have been many responses to the Chinese Room Argument from cognitive scientists, A.I. researchers, and philosophers and psychologists who support A.I. One of these is Fodor's "robot reply". The basic idea of the reply is that Searle may be correct about mere computers, but that a robot, which interacts with the world, can come to think about what it interacts with.

Fodor agrees with the following claim, which he attributes to Searle:

"... instantiating [fulfilling] the same program as the brain is not, in and of itself, a sufficient condition for having those propositional attitudes [beliefs, desires etc.] characteristic of the organism that has the brain."

But Fodor argues that a robot does more than merely instantiate a program. A robot interacts with the world. Thus we could check that a robot "knows what a mountain is" by asking it to climb one. If it agrees to do this, but climbs up a tree instead, then we start to question whether it understands the word "mountain" or understands about mountains. If all we had was a computer interacting through a teletype, as in the Turing Test, then we could ask it whether mountains and trees were the same, but this would just get it to generate more sentences. We could never get it to go beyond the symbols and interact with mountains. The robot, by contrast, can have sensors which enable it to obtain visual input, tactile input, etc. from mountains, and motor control units which enable it to navigate around mountains. If the robot is not pre-programmed to interact with mountains but is a learning machine, then it will need to learn to associate its sensory input with sentences about mountains, to learn to move around mountains, etc.

Fodor's core argument

Fodor argues that if a system has:

(a) the right kind of symbol (syntax) manipulation program

(b) the right kind of causal connections with the world (this is what the robotic system is supposed to have)

then

(c) its symbols have a semantics (meaning)

and hence

(d) it has representational states (it represents things to itself)

and hence

(e) it has intentionality (aboutness).

Fodor argues that we have no reason to think that (a) and (b) can only be true of organic systems.
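To see what clauses (a) and (b) add to the bare lookup table sketched earlier, here is a second toy sketch in Python (again an illustration, not Fodor's own example; the sensor, the threshold, and the symbol names are all hypothetical). Here which symbol the system produces depends causally on what it is sensing, and the symbol in turn feeds back into behaviour. Whether this sort of causal connection is really enough to give the symbols a semantics is precisely what Searle goes on to dispute.

    import random

    def altitude_sensor() -> float:
        """Hypothetical sensor: stands in for the robot's perceptual contact with the terrain (metres)."""
        return random.uniform(0.0, 9000.0)

    def classify(reading: float) -> str:
        # Which symbol gets produced depends causally on what the sensor detected.
        return "MOUNTAIN" if reading > 2500.0 else "NOT_MOUNTAIN"

    def act(symbol: str) -> str:
        # Motor side of the causal loop: the symbol feeds back into behaviour.
        return "start climbing" if symbol == "MOUNTAIN" else "keep walking"

    if __name__ == "__main__":
        reading = altitude_sensor()
        symbol = classify(reading)
        print(f"{reading:.0f} m -> {symbol} -> {act(symbol)}")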

Searle's Response to the Robot Reply

Searle argues that the robot reply does not demonstrate that robots can have intentional states (e.g. beliefs, desires, etc.). He considers the computer controlling the robot, and argues that a man in a room could follow the program of that computer. Thus his "man in the Chinese Room" could be inside the head of a robot: his symbol input would come from the robot's perceptual organs, and his symbol output would control the robot's movements.

The response to Searle is that the program that would be required to run the robot would be massive. There is no way in which a human could follow the rules of such a program quickly enough to control the robot's limbs. The human couldn't perform the role of the robot's electronic brain.

Searle's Conclusions

Searle would like to think that his argument concludes with a resounding "No" to the three questions I posed in the last lecture. These were:

  1. Is it possible to build a machine that can generate its own meanings, meanings other than those of its programmer or designer?

  2. Is it possible to build a machine with its own perspective on the world, which we need to try to understand from the inside?

  3. Is it possible to build a machine that can have its own feelings?

The Chinese room argument concentrates on the first issue, but on Searle's view one has consciousness if and only if one has the power to generate one's own meanings. Only a being with consciousness can have its own perspective on the world and have feelings, sensations etc. So in answering the first question Searle is answering the other two as well.

A supporter of AI Alternative 1 or 2 need not think that in answering "yes" to question 1 he/she is thereby committed to answering "yes" to questions 2 and 3, although a supporter of the Robot Reply may think that a sufficiently sophisticated robot could have its own perspective on the world.

Empathy and the Chinese Room Argument

In this lecture series the concept of empathy and Collingwood's concept of reenactment have been discussed. Perhaps Searle's argument could be restated in terms of these concepts. Then the point would be that whereas we can have a degree of understanding of what it might be like to be another human being, an understanding of another human's consciousness, we can have no degree of understanding of what it might be like to be an electronic robot. But we cannot rule out the idea that another being is conscious just on the grounds that we cannot imagine its consciousness: can you imagine the consciousness of a cat, a whale or a bat? If you cannot, does that rule out the attribution of consciousness to these other species?

But even if we add Fodorian "interaction with the world" and construct robots rather than computers, and even if the robots are able to communicate with us when we set them tasks of moving around the world, moving and changing objects, and so on, this is not enough to establish that these robots have any empathetic ability, even the ability to empathise with each other. A possible line of analysis is that a being with consciousness (like human consciousness) is a being with empathetic ability. But then other animals, such as cats and whales, will only count as conscious if they have empathetic ability. Can cats empathise with each other?

N.B. This lecture relates to chapters two and three of the set book for this course: Vernon Pratt, The Philosophy of the Social Sciences. It also relates to Searle's Reith Lectures, which are available under the title Minds, Brains and Science. The Turing Test and Searle's Chinese Room Argument are still causing great controversy.

Reading: Pratt, Chs. 2-3; John Searle, Minds, Brains and Science.


