The argument proceeds by the following thought experiment. Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.
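The purely syntactic character of the rulebook can be illustrated with a toy sketch (this is an illustration of the idea, not anything from Searle's text): the program is just a table pairing input symbol strings with output symbol strings, and nothing in it attaches meaning to either.

```python
# A toy "rulebook": a purely formal lookup table. The hypothetical rules
# pair symbol shapes with symbol shapes; whoever follows them need not
# know what any of the symbols mean.
RULEBOOK = {
    "你好吗": "我很好",        # an input shape paired with an output shape
    "你叫什么名字": "我叫王",   # another shape-to-shape rule
}

def chinese_room(input_symbols: str) -> str:
    """Follow the rulebook: match the input shape, emit the paired shape."""
    return RULEBOOK.get(input_symbols, "")

print(chinese_room("你好吗"))  # emits 我很好
```

The program manipulates the symbols entirely by their shapes; this is the sense in which it is "purely formal or syntactical."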
Why does the man in the Chinese Room not understand Chinese even though he can pass the Turing test for understanding Chinese? The answer is that he has only the formal syntax of the program and not the actual mental content or semantic content that is associated with the words of a language when a speaker understands that language. You can see this by contrasting the man in the Chinese Room with the same man answering questions put to him in his native English. In both cases he passes the Turing test, but from his point of view there is a big difference: he understands the English and not the Chinese. In the Chinese case he is acting as a digital computer; in the English case he is acting as a normal, competent speaker of English. This shows that the Turing test fails to distinguish real mental capacities from simulations of those capacities. Simulation is not duplication, but the Turing test cannot detect the difference.
The larger structure of the argument can be stated as a derivation from three premises.
1. Implemented programs are by definition purely formal or syntactical. (An implemented program, as carried out by the man in the Chinese Room, for example, is defined purely in terms of formal or syntactical symbol manipulations. The notion "same implemented program" specifies an equivalence class defined purely in terms of syntactical manipulations, independent of the physics of their implementation.)
2. Minds have mental or semantic contents. (For example, in order to think or understand a language you have to have more than just the syntax, you have to associate some meaning, some thought content, with the words or signs.)
3. Syntax is not by itself sufficient for, nor constitutive of, semantics. (The purely formal, syntactically defined symbol manipulations don't by themselves guarantee the presence of any thought content going along with them. This was shown by the Chinese Room example.)
Conclusion: Implemented programs are not constitutive of minds. Strong AI is false.
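The logical form of this derivation can be made explicit. The following is a minimal sketch (with hypothetical predicate names, not Searle's own notation) in which premises 1 and 3 are read through the Chinese Room itself: it is an entity that implements the program yet lacks semantics.

```lean
-- Hypothetical predicates: Program x = "x implements the program",
-- Semantic x = "x has semantic content", Mind x = "x has a mind".
example (Entity : Type)
    (Program Semantic Mind : Entity → Prop)
    -- Premise 2: minds have semantic contents.
    (p2 : ∀ x, Mind x → Semantic x)
    -- Premises 1 and 3, witnessed by the man in the room:
    -- something implements the program yet lacks semantics.
    (p13 : ∃ x, Program x ∧ ¬ Semantic x) :
    -- Conclusion: implementing the program does not constitute a mind.
    ∃ x, Program x ∧ ¬ Mind x :=
  match p13 with
  | ⟨x, hp, hns⟩ => ⟨x, hp, fun hm => hns (p2 x hm)⟩
```

The derivation is valid by contraposition on premise 2: since the entity lacks semantics and every mind has semantics, that entity lacks a mind, even though it implements the program.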
There have been a number of attempts to answer this argument, all of them, in the view of its author, unsuccessful. Perhaps the most common is this:
The Systems Reply. "While the man in the Chinese Room does not understand Chinese he is not the whole system. He is but the central processing unit, a simple cog in the large mechanism that includes room, books, etc. It is the whole room, the whole system, which understands Chinese, not the man."
Answer to the systems reply: The man has no way to get from the SYNTAX to the SEMANTICS but neither does the whole room. The whole room also has no way of attaching any thought content or mental content to the formal symbols. You can see this by imagining that the man internalizes the whole room. He memorizes the rulebook and the data base, he does all the calculations in his head, and he works outdoors. All the same, neither the man nor any subsystem in him has any way of attaching any meaning to the formal symbols.
The Chinese Room has been widely misunderstood as attempting to show a lot of things it does not show.
1. The Chinese Room does not show that "machines can't think." On the contrary, the brain is a machine and brains can think.
2. The Chinese Room does not show that "computers can't think." On the contrary, something can be both a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese Room shows that COMPUTATION, as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking.
3. The Chinese Room does not show that only brains can think. We know that thinking is caused by neurobiological processes in the brain, but there is no logical obstacle to building a machine that could duplicate the causal powers of the brain to produce thought processes. The point, however, is that any such machine would have to be able to duplicate the specific causal powers of the brain to produce the biological process of thinking. The mere shuffling of formal symbols by themselves is not sufficient to guarantee these causal powers, as the Chinese Room shows.
COMPUTATIONAL THEORY OF MIND
-- John Searle
Searle, J. R. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3 (together with 27 peer commentaries and author's reply).