John Searle's Chinese Room Argument
by David Leech Anderson (Illinois State University)

PART TWO: The Robot Reply

One popular response to Searle's Chinese Room Argument goes like this: the computer on the desk may not understand the symbols it manipulates, but put that same computer inside a robot -- give it video cameras for eyes, a voice box, and limbs with which to move about and causally interact with the world -- and the robot genuinely would understand the words that it uses.
This response presupposes a particular semantic theory. Semantics deals with the meanings that words, sentences, and other linguistic expressions have, and with how it is that they come to have those meanings. The semantic theory presupposed by the "Robot Reply" is often called semantic externalism. This is the view that words, especially words that refer to objects in the world, come to have the meanings that they do, and come to refer to the objects that they refer to, by virtue of the causal connections that obtain between the words spoken and the objects that the words name. On this view, a robot (like Commander Data from Star Trek: The Next Generation) that is able to go to a farm, tell the difference between the pigs, the horses, and the chickens, and call the pigs by name demonstrates by its actions that it genuinely understands what a pig is. This is possible because the robot has connected the word 'pig' with real pigs. Light is reflected off of the real pig, sent through the robot's video camera eye, and ultimately produces the utterance, "I see a pig." There is a causal chain from the pig, through the camera, to the computer processing center, and finally to the speech center that produces the utterance, "I see a pig." With the robot we have a "word-world relation" established that confers on the word 'pig' its meaning -- which, in this case, is nothing other than the physical pig.

Thus, it is argued that the Chinese Room (like a computer sitting on a desk) lacks "word-world" relations, and so Searle was justified in drawing the conclusion that he did. But a robot would be different. The same conclusion would NOT be justified if the machine in question were a complex robot.

What follows, then, from this line of argument? First, one who offers the "Robot Reply" agrees with Searle that the Turing Test is not a reliable test for understanding a language nor, presumably, for intelligence either. The kind of behavior exhibited in the Turing Test is not sufficient to demonstrate linguistic comprehension. Where the Robot Reply parts company with Searle is in rejecting his claim that the Chinese Room Argument works against ALL digital computers. Those who offer the Robot Reply believe that the right kind of digital computer, controlling a robot that is sufficiently complex, would indeed be intelligent and understand a language.

So, what does Searle say to this argument? Does he agree that his argument is not effective against the right kind of robot? No way!! He argues that placing the computer inside a robot will make no difference whatsoever. His argument is an interesting one.

SEARLE REJECTS THE ROBOT REPLY

Searle is not convinced by the robot reply. To see that the addition of a robotic body fails to make a difference, Searle says that one simply needs to extend the thought experiment by placing the Chinese Room inside a robotic body. (Okay, it has to be a pretty big robot, but what difference does that make?) Now, all the computational processing that goes on inside the robot will be accomplished by Searle in the now-modified "Chinese Room". In addition to symbols coming into the room in the form of questions, there will also be symbols coming into the room from the video cameras that are receiving visual information about pigs in the barnyard.
Searle believes that this new thought experiment defeats those who think that a causal connection between a physical pig and 'pig'-utterances is sufficient for "understanding a language." But why does he think so? To see why, let us consider what is at issue in the current debate.

Remember, we started with a computer on a desk, taking as input strings of symbols of a language and giving as output strings of symbols of that same language. If we want a computer to pass the Turing Test in English, it must be able to take as input questions in English and give as output answers in English. However, digital computers do not "recognize" (i.e., do not perform computations directly on) the symbols that make up English words and sentences (e.g., p-i-g). They must first convert those symbols into symbols of the only language that computers directly "understand" (i.e., the only language on which they can perform any operations): the binary language that we represent as strings of 0's and 1's (e.g., 0011010, 11101011, . . .).

Let's say that there is a computer on a desk that is currently passing the Turing Test. I, the interrogator, am just now asking the computer the question: "What is a pig?" The computer must take that string of letters and convert it into the strings of 0's and 1's that are associated with those letters of the alphabet. There is an established convention for doing this, and here it is:
[Table: The Alphabet in Binary Code]
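For readers who want to see the convention in action, here is a minimal sketch in Python, assuming the standard ASCII encoding (one 8-bit binary string per character), which is the convention the binary strings in this article follow:

```python
# Convert the interrogator's question into the binary strings that the
# computer actually operates on: one 8-bit ASCII code per character.
question = "What is a pig?"

codes = [format(ord(ch), "08b") for ch in question]

print(" ".join(codes))
# 'W' -> 01010111, 'h' -> 01101000, 'a' -> 01100001, 't' -> 01110100, ...
```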
We now must imagine a separate room, call it the "Binary Converter Room", which takes as input strings of symbols from the Roman alphabet and gives as output strings of binary symbols. So the symbols W-h-a-t i-s a p-i-g ? will be converted to the symbols 0-1-0-1-0-1-1-1 . . . Now, we imagine that this string of 0's and 1's is sent into Searle's room, where he has books that tell him how to manipulate these binary symbols. In the room, Searle will have a second set of books that instruct him what to do not with Chinese symbols, but with binary symbols -- 0's and 1's. In one of the books, there will be a sentence written in English that says: "If you receive this string of shapes: 01010111011010000110000101110100, then send such-and-such other string of shapes out of the room."

So now we have a second version of the "Chinese Room". But now we will call it the "Binary Room". Searle's situation is much the same. Before, shapes came into the room that he didn't recognize. They were just shapes. He didn't necessarily know that they were symbols. He only knew that when certain shapes came into the room, he was required to send certain other shapes out of the room. Before, the shapes were Chinese symbols. Now they are binary symbols. In both cases, the symbols that came into the room asked questions which you would understand if you spoke that language -- Chinese or binary, respectively. But Searle speaks neither language. He merely knows how to consult the books and "manipulate" the symbols based on their shape and position.

THE ROBOT REPLY & THE BINARY ROOM

Notice that with the Binary Room we have simply duplicated the original Chinese Room. The only thing that is different is the language. In both cases, they are languages that Searle does not speak, so the effect is the same. So what would a person who offers the Robot Reply have to say about the Binary Room? The same thing that they said about the Chinese Room.
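To see just how little "understanding" the rule books require, one can picture them as a bare lookup table: shapes in, shapes out. The Python sketch below is purely illustrative (the reply string is invented for the example); the incoming string is the binary for "What" quoted above:

```python
# The Binary Room as pure shape manipulation: Searle matches an incoming
# string against his books and sends out whatever string they dictate.
# He never decodes anything; the table merely pairs shapes with shapes.
RULE_BOOK = {
    # Incoming shapes (the ASCII binary for "What") -> outgoing shapes
    # (an invented reply: the ASCII binary for "A pi...").
    "01010111011010000110000101110100": "01000001001000000111000001101001",
}

def binary_room(incoming: str) -> str:
    """Return the outgoing string that the books dictate for the incoming string."""
    return RULE_BOOK[incoming]

print(binary_room("01010111011010000110000101110100"))
```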
So far so good. Those who offer the Robot Reply say that what will make the difference -- what will imbue those symbols with genuine meaning and thereby invest the speaker with genuine understanding -- is the ability to causally interact with real barnyard animals. We are now in a better position to understand why Searle thinks that a new thought experiment, putting the Chinese Room inside the head of the robot -- or, for our new example, putting the Binary Room inside the robot's head -- might offer a refutation of the Robot Reply.

We now have a robot that is taking in information about the world in two ways. The robot, like the PC on the desk, continues to be able to process what we call "linguistic" information. Let's consider a question written on a piece of paper. Just as a PC that passes the Turing Test could answer the question "What is a pig?", so too can our robot. Think of Searle in the Binary Room, processing the question "What is a pig?" However, because the sentence has been converted into binary, Searle doesn't know that it is a question. It is just a meaningless string of 0's and 1's. Searle sends a string of 0's and 1's out of the room, not knowing their significance. That string is sent through a binary converter, which produces the English words "A pig is a barnyard animal" that the voice box is instructed to utter. Searle would argue that while it appears that the robot genuinely understands English and knows about pigs, that is not the case. It is Searle who is processing the question and giving the answer. If he doesn't understand what he's doing (and he doesn't), then the robot can't be said to understand what it is doing either.

"Stop one minute!" the critic will insist. The defender of the Robot Reply will object that we have left out the crucial ingredient. We still don't have the causal interaction with real pigs. Searle says: Okay, add in the pig, and a vision system that "seems" to recognize pigs. But remember, says Searle, the visual information that comes from the video camera is just digital information too. It is just 0's and 1's. And who is doing the processing of that information? Searle, inside the Binary Room, of course.

Does the robot "see" a pig? Does it understand what it is seeing? Does it know that it is a pig? To answer those questions, Searle wants us to reflect on his performance inside the room. Inside the room, Searle does not see a picture which he recognizes to be a pig. The video camera has taken visual data from the pig, consisting primarily of black pixels that draw the outline of the pig. But Searle doesn't even "see" that. Those pixels are converted to a binary code of 0's and 1's. The background is composed of 0's and the outline of the pig is captured by 1's -- as in this picture:

[Fig. 4: a grid of 0's and 1's in which the 1's trace the outline of a pig]
This data -- the 0's and 1's -- is sent into the room for Searle to process. It would be a mistake to assume that Searle ever sees all of the 0's and 1's displayed as we have displayed them here. He would just get one string after another. The first three rows are all 0's. So he would get a string something like this:

000000000000000000000000
followed by two more the same,
then followed by a string with 0's and 1's in it, something like this (notice row #4 in Fig. 4 above -- we've reduced the total number of 0's and 1's to make the string a more manageable length):

000000111111000000
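A short sketch may help make that flattening step concrete. The tiny "camera frame" below is invented for illustration, and it is far smaller than any real image would be:

```python
# A toy camera frame: 0 = a background pixel, 1 = a pixel on the pig's outline.
# Searle never receives the grid as a picture; each row reaches him only as a
# flat string of shapes.
frame = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],  # row 4: the top edge of the pig's outline
]

for row in frame:
    print("".join(str(bit) for bit in row))
```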
Searle only sees strings of 0's and 1's. He never sees anything that he could recognize as a pig. To him, the 0's and 1's that come from the video camera are indistinguishable from (and thus just as meaningless as) the 0's and 1's coming from the linguistic input. So we might expect Searle to ask: How does adding a string of 0's and 1's from a video camera bring the robot any closer to understanding what a pig is? Searle in the room is the one who "processed" the information. But the kind of processing that Searle did is closer to a Cuisinart processing mixed vegetables than it is to genuinely understanding what a pig is.

Searle's argument is this: So long as the only information processing that is going on consists entirely of the manipulation of symbols based solely on their formal properties, then there is no "understanding" present, and being causally hooked up to a pig gives us nothing more than uninterpreted 0's and 1's -- which gives us nothing at all. With the behavior of the robot, the lights seem to be on, but when we peek inside we discover that "nobody's home" -- at least when it comes to understanding what it is doing and saying.

So. What do YOU think of the Robot Reply? Does Searle win this round? Or can Searle's argument be defeated? We will turn next to objections to Searle's argument.

Copyright © 2005-2006 The Mind Project