Chinese Room Argument

JohnSearle's ChineseRoomArgument.

It denies StrongAi by positing a hypothetical room in which an English-speaking person sits with a book that lists a reasonable Chinese response to any reasonable Chinese question. The person accepts questions submitted through a slot, finds the matching response in the book, and writes out the answer, which is returned through the slot.
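
For concreteness, the setup reduces to a table lookup. A minimal sketch (the entries below are invented stand-ins; Searle's book is hypothetical):

  # The "book" as a lookup table; the "person" is the lookup-and-copy step.
  # Entries are invented stand-ins, not Searle's hypothetical book.
  book = {
      "你好吗？": "我很好，谢谢。",  # "How are you?" -> "Fine, thanks."
  }

  def room(question):
      # Find the listed response and copy it back out through the slot;
      # nothing in this step understands Chinese.
      return book[question]

  print(room("你好吗？"))  # prints the canned reply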

Does the person understand Chinese? No, Searle says. Replace the person with a computer, and the book with an algorithm, and you see what he's saying about AI.

The ChineseRoomArgument is impressive in that it is quite convincing to most people (the paraphrase above may not be; you have to read the original). There are those, however, who disagree with Searle's interpretation.

Bottom line: the debate between StrongAi and WeakAi hasn't been resolved, even though both sides are convinced they are right. The ChineseRoomArgument doesn't so much settle anything as polarize the two sides more strongly.

More accurately, the ChineseRoomArgument is quite convincing to many people who don't understand AI, strong, weak, or otherwise. Experts who deny StrongAi also realise that Searle's argument rests on a flawed premise: yes, the person can be replaced with a machine, but the system in question is not the person, or the machine. This doesn't mean they are convinced StrongAi is possible.


Well, I'd agree the person doesn't understand Chinese. But I would say that the system composed of "person+book+input slot+paper output" *does* understand Chinese. I think that the persuasive force of the ChineseRoomArgument stems from the fact that it puts a person in the works who doesn't understand the system. This makes the whole thing seem very counter-intuitive. Where is the mind? There's a mind in the system already (the person) and it doesn't understand what's going on!

But what about that "book"? The book, after all, must either have some system of rules and a scratchpad (for parsing Chinese phrases), or be very large - far larger than the word "book" suggests. So maybe we should think of a "program" rather than a "book".

And what about that "person"? The person would have to be extremely fast at looking things up. For an acceptable response time, too fast to be a person at all, really. Maybe the person uses a computer. Hmmm. Maybe we should think of a "CPU" rather than a "person"?
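
To make the renaming concrete, here is a hedged sketch in which the "book" is a small rule program with a scratchpad and the "person" is an interpreter loop that follows the rules blindly (the rule format and replies are invented for illustration):

  import re

  # The "book" as rules plus a scratchpad; the "person" as a blind interpreter.
  RULES = [
      (re.compile("尼罗河"), "尼罗河是世界上最长的河流。"),  # anything mentioning the Nile
  ]
  scratchpad = []  # working memory the book instructs the interpreter to keep

  def interpret(symbols):
      # Match the shape of the incoming symbols against each rule and emit
      # the listed reply; the loop attaches no meaning to what passes through.
      for pattern, reply in RULES:
          if pattern.search(symbols):
              scratchpad.append(symbols)  # record state, as the book directs
              return reply
      return "对不起，我不明白。"  # fallback: "Sorry, I don't understand."

The interpreter loop is exactly as ignorant of Chinese as the person in the room; whatever competence there is lives in the rules.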

I found the ChineseRoomArgument to be very convincing the first time I came across it. But now I feel that the force actually comes from the choice of words Searle uses to describe the components of the system. In my opinion they point one's intuition in a particular direction that is not in fact justified.

If you ignore the "man in the middle" (which confuses things more than anything else, as far as I can see) the ChineseRoomArgument doesn't really add anything to either side. You are left with an algorithmically driven system that appears to display intelligence. Which is where we started.


If I have a habit of responding to similar questions with similar responses, does that mean that I don't understand English? (It does; see below.)

The above question follows a pattern or form that I often use. Maybe the rote application of patterns is fundamental to the operation of the human mind, and hence as valid an "understanding" as the "understanding" that real humans experience. -- JeffGrigg

There was a boy in an English school who was trained by rote. "The Nile is the longest river in the world." was the answer to "What is the Nile?". When asked "Which river is the longest?" he was utterly dumbfounded. The same failure could be reproduced with his other rote-learned answers. It's safe to say that he did not understand (large sections of) English.
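
His situation is easy to caricature in code: rote knowledge is keyed to exact phrasing, so a mere paraphrase of the same fact draws a blank (a toy sketch, not a model of the boy):

  # Rote "knowledge" keyed to exact phrasing.
  rote = {"What is the Nile?": "The Nile is the longest river in the world."}

  print(rote.get("What is the Nile?"))            # the trained answer
  print(rote.get("Which river is the longest?"))  # None: dumbfounded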

The point is that the appearance of competence in the face of a finite collection of questions is an inadequate indication of actual competence. Actual competence with a subject requires the ability to analyze and synthesize in the face of novel circumstances in a problem domain. For the record, the problem domain called "Being Human" is extremely large and varied.

What this means is that a table mapping "reasonable question" to "reasonable answer" cannot exist as such. It cannot be precomputed. It can only be a process (and not a function): a reasonable answer depends on the conversation so far, not on the question alone.
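
A back-of-the-envelope count suggests why it cannot be precomputed (the figures are assumptions: roughly 3,000 common Chinese characters and questions of at most 20 characters; the real numbers differ, but the conclusion doesn't):

  # Count the candidate question strings such a table would have to index.
  common_chars = 3000   # assumed number of common Chinese characters
  max_length = 20       # assumed maximum question length
  candidates = sum(common_chars ** n for n in range(1, max_length + 1))
  print(f"{candidates:.3e}")  # roughly 3.5e+69 strings

Even if only a vanishing fraction of those strings were reasonable questions, no such table could be stored, let alone precomputed.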

Conclusion? The ChineseRoomArgument is moot because it depends on an unreasonable shift in supposition (a precomputed book of answers that, as argued above, cannot exist). The StrongAi question is probably also moot. AI is good enough when it serves to analyze and synthesize in a problem domain. The real trick is to extend that problem domain around some useful goal. In the case of animal life (like us) the goal is reproduction and/or bliss. (Some would say that reproduction is blissful....)


The ChineseRoomArgument appears to me to be based on an all-mechanical design in which a person is used to replace one of the components. One then observes that the person doesn't understand Chinese. But the job of looking the question up in the book and transcribing the answer seems to me to be a very mechanical position. One might as well have replaced the slot with a person (have a delivery boy carry slips of paper back and forth). In that case, the delivery boy clearly doesn't need to understand Chinese; he just carries around slips of paper. In fact, all of the "hard part", the "understanding" of Chinese, is encoded in that magical "book" which has an answer for any reasonable question. Rather than shedding light on the StrongAi versus WeakAi question, this argument simply encapsulates the AI in a "book", and then blabbers about whether humans or mechanicals are to be part of the user interface.

