Searle's "Chinese room" and the enigma of understanding


In this comment to "'Neutrino Evidence Revisited (AI Debates)' | Is Mozart's K297b authentic?" (11/13/24), I questioned whether John Searle's "Chinese room" argument was intelligently designed and encouraged those who encounter it to reflect on what it did — and did not — demonstrate.

In the same comment, I also queried the meaning of "understand" and its synonyms ("comprehend", and so forth).

Both the "Chinese room" and "understanding" had been raised by skeptics of AI, so here I'm treating them together.

I will say flat out that I don't think the Chinese room argument proved anything useful or conclusive with regard to AI.  I could talk at much greater length about the weaknesses of the Chinese room, but — in the interest of efficiency — I will simply point out one fatal flaw (or rather a complex of weaknesses that amount to a fatal flaw) in its construction.

Here's the Chinese room argument in a nutshell:

The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.

The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Although its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general. While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.

(Wikipedia)
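
To make the bare mechanics concrete, here is a minimal sketch, in Python, of the kind of purely formal rule-following the thought experiment imagines.  The sketch is my illustration, not Searle's: the "rule book" shrinks to a toy lookup table, and its entries are invented placeholders; a rule book adequate to open-ended conversation would be astronomically larger.

    # A toy "rule book": incoming strings of symbols are matched purely by
    # their shapes and mapped to canned replies.  The entries are invented
    # placeholders; nothing here represents what any symbol means.
    RULE_BOOK = {
        "你好嗎？": "我很好，謝謝。",    # "How are you?" -> "Fine, thanks."
        "你會說中文嗎？": "當然會。",    # "Do you speak Chinese?" -> "Of course."
    }

    def operator(slip: str) -> str:
        """Match the shapes on an incoming slip and copy out the prescribed
        reply.  No step in the procedure consults what any symbol means."""
        return RULE_BOOK.get(slip, "請再說一遍。")  # fallback: "Say that again."

    print(operator("你好嗎？"))  # prints 我很好，謝謝。

Whether executing such a table, or the vastly larger real-world rule book it stands in for, could ever amount to understanding is exactly what is in dispute.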

Fans / devotees of Searle will not be happy with what I say in the following paragraphs.  He makes far too many assumptions for comfort, and I do not believe that we — the recipients of his argument — should be obliged to smooth over his unexplained assumptions.  A surfeit of unexplained assumptions amounts to an abandonment, rejection, or negation of the argument itself.

I will not point out every defect in the Chinese room argument, but will merely signal a series of unexplained / inexplicable terms / expressions that do not contribute to the goal of Searle's contention.  Whether or not Searle used all of these exact terms, I trust Wikipedia enough to believe that they convey the gist of Searle's intent.

For a much more exhaustive, philosophically professional account of Searle's experiment with extensive documentation and quotations, see Larry Hauser, "Chinese Room Argument", Internet Encyclopedia of Philosophy, also David Cole, "The Chinese Room Argument", Stanford Encyclopedia of Philosophy (First published Fri Mar 19, 2004; substantive revision Wed Oct 23, 2024).

converse — This generally signifies exchanging ideas, information, etc. through talking.  Who / What taught the computer to talk?  How does it talk?  By writing / typing out its statements and answers?  By listening to the human's questions and answers?  But don't forget that the door is there to prevent that from happening, except for the writing on the slips of paper passed beneath the door (which defeats the purpose of the door).

perfectly — Not only does the computer allegedly speak Chinese, it does so perfectly.  How did it gain this mastery?

in Chinese; only in English — Somebody or something has to translate between the two, but Searle completely leaves that essential step out; this is where I almost stopped engaging with Searle's defective reasoning and had to force myself to continue.

door — Who put it between the Chinese-speaking computer and the English-speaking human?  That's an arbitrary act without an actor.  Moreover, the door is ostensibly meant to separate the computer and the human, but then Searle cheats by having an opening at the bottom of the door through which it is easy to slip pieces of paper with Chinese characters written on them.

speaking and writing — In Searle's experiment, there is actually no speaking going on, only writing.  Who does the writing?  Who is slipping those pieces of paper under the door?  How did the computer and the human become literate in the written Chinese that is being slipped under the door on pieces of paper?  Remember that full / "perfect" literacy is a difficult task, whether for a computer or a human.  Especially with maddeningly complex sinographs.  Remember 惡惡, for just a tiny taste?

Enough about the fundamental mechanics of the Chinese room experiment.  I could easily point out many more defects in Searle's argument, not least the fact that it is about two rooms, a Chinese room and an English room, not just a Chinese room.  Something else irks me no end, namely: why Chinese?  Why not Russian or German or Maori or Cantonese or Cia-Cia (in which script — Latin or Arabic or Hangul?), Dungan (in which script — Arabic, Cyrillic, Sinographic?), Naxi / Nahsi / Nakhi?  And which "Chinese" (Mandarin or Shanghainese or Minnan, or…)?

I think Searle chose "Chinese" (i.e., Chinese characters), not one of the possible spoken Sinitic languages or topolects or dialects, because of its propensity for mystification, exoticization, and obfuscation.  Does Searle's computer understand so much as a single Chinese character, much less the 10,000 or so that it would need in order to "converse" with the human on the other side of the door?

In my estimation, Searle's so-called "Chinese room argument" is nothing but a complicated and improbable form of the more reasonable and workable Turing Test, for which see this abbreviated, straightforward account:

The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, the founder of the Turing Test and an English computer scientist, cryptanalyst, mathematician and theoretical biologist.

Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans.

During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent was human and which was a computer.

The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have artificial intelligence because the questioner regards it as "just as human" as the human respondent.

— "What is the Turing Test?" by Benjamin St. George and Alexander S. Gillis, TechTarget (updated in August 2024)

As for "understanding", I will note only that Searle's "Chinese room" — with a door separating the human and the computer and an opening beneath it to permit the passage of communication between the two — does not demonstrate an awareness of the meaning or significance of that process.

From Middle English understanden, from Old English understandan (to understand), from Proto-West Germanic *understandan (to stand between, understand), from Proto-Germanic *understandaną (to stand between, understand), equivalent to Old English under- (between, inter-) + standan (to stand) (Modern English under- +‎ stand). Cognate with Old Frisian understonda (to understand, experience, learn), Old High German understantan (to understand), Middle Danish understande (to understand). Compare also Saterland Frisian understunda, unnerstounde (to dare, survey, measure), Dutch onderstaan (to undertake, presume), German unterstehen (to be subordinate).

(Wiktionary)

Old English understandan "comprehend, grasp the idea of, achieve comprehension; receive from a word or words or from a sign or symbol the idea it is intended to convey;" also "view in a certain way," probably literally "stand in the midst of," from under + standan "to stand" (see stand (v.)).

If this is the meaning, the under is not the usual word meaning "beneath," but from Old English under, from PIE *nter- "between, among" (source also of Sanskrit antar "among, between," Latin inter "between, among," Greek entera "intestines;" see inter-). Related: Understood; understanding.

That is the suggestion in Barnhart, but other sources regard the "among, between, before, in the presence of" sense of Old English prefix and preposition under as other meanings of the same word. "Among" seems to be the sense in many Old English compounds that resemble understand, such as underfinden "be aware, perceive" (c. 1200); undersecan "examine, investigate, scrutinize" (literally "underseek"); underðencan "consider, change one's mind;" underginnan "to begin;" underniman "receive." Also compare undertake, which in Middle English also meant "accept, understand."

It also seems to be the sense still in expressions such as under such circumstances. Perhaps the ultimate sense is "be close to;" compare Greek epistamai "I know how, I know," literally "I stand upon."

Similar formations are found in Old Frisian (understonda), Middle Danish (understande), while other Germanic languages use compounds meaning "stand before" (German verstehen, represented in Old English by forstanden "understand," also "oppose, withstand"). For this concept, most Indo-European languages use figurative extensions of compounds that literally mean "put together," or "separate," or "take, grasp" (see comprehend).

The range of spellings of understand in Middle English (Middle English Compendium lists 70, including understont, understounde, unþurstonde, onderstonde, hunderstonde, oundyrston, wonderstande, urdenstonden) perhaps reflects early confusion over the elements of the compound. Old English oferstandan, Middle English overstonden, literally "over-stand" seem to have been used only in literal senses.

By mid-14c. as "to take as meant or implied (though not expressed); imply; infer; assume; take for granted." The intransitive sense of "have the use of the intellectual faculties; be an intelligent and conscious being" also is in late Old English.

In Middle English also "reflect, muse, be thoughtful; imagine; be suspicious of; pay attention, take note; strive for; plan, intend; conceive (a child)." In the Trinity Homilies (c. 1200), a description of Christ becoming human was that he understood mannish.

Also sometimes literal, "to occupy space at a lower level" (late 14c.) and, figuratively, "to submit." For "stand under" in a physical sense, Old English had undergestandan.

(Etymonline)

In conclusion, I quote the Stanford cognitive scientist and computer scientist John McCarthy (from an article on his website):

The Chinese Room Argument can be refuted in one sentence:

Searle confuses the mental qualities of one computational process, himself for example, with those of another process that the first process might be interpreting, a process that understands Chinese, for example.

'Nuff said, but McCarthy, being a sort of philosopher himself, also presents "the argument in more detail" and "the refutation in still more detail".  He also explains what is required for a Chinese room that passes the Turing test, and, having "developed time-sharing, invented LISP, and founded the field of Artificial Intelligence", sensibly emphasizes the role of translation between computer and human, briefly taking on the formidable Willard Van Orman Quine with regard to "the indeterminacy of radical translation".
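
McCarthy's point, that properties of an interpreted process need not be properties of the process interpreting it, can be put in a few lines of Python.  The sketch is my gloss, not McCarthy's, and chinese_program is a hypothetical stand-in for whatever program Searle is hand-executing.

    def interpreter(program, question: str) -> str:
        """Blindly execute someone else's program, step by step.  The
        interpreter (Searle in the room) needs no access to whatever the
        program itself might be said to 'know'."""
        return program(question)

    def chinese_program(question: str) -> str:
        # Hypothetical stand-in for the rule book / program being executed.
        replies = {"你好嗎？": "我很好。"}
        return replies.get(question, "請再說一遍。")

    print(interpreter(chinese_program, "你好嗎？"))  # prints 我很好。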

Searle's case against the notion of artificial consciousness may have been correct, but his Chinese room experiment did not serve to advance his cause.

 

21 Comments

  1. Peter Taylor said,

    November 28, 2024 @ 9:35 am

    Whether or not Searle used all of these exact terms, I trust Wikipedia enough to believe that they convey the gist of Searle's intent.

    I fear that trust may be misplaced. The very first sentence of the description:

    The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them.

    has almost nothing to do with the situation described in Searle's paper Minds, Brains, and Programs. His setup is that he is locked in a room (presumably to restrict his access to external information) with a listing of an AI computer program which is capable of receiving questions in Chinese and responding to them in Chinese. He then receives questions in English and Chinese: he responds to the English ones using his own intelligence, and to the Chinese ones by manually executing the computer program. The point is that externally it is not apparent that his approach to responding to the two classes of questions is different, but internally he understands the English ones but not the Chinese ones: or at the least (and this appears to be the critical point that he's trying to make) he doesn't understand them in the same way. The experiment is presented as a falsification of the claim that AI cognition is fundamentally the same as human cognition, and that studying AI can explain human cognition.

  2. Allan from Iowa said,

    November 28, 2024 @ 11:30 am

    I don't see the Chinese Room as an overly complicated version of the Turing Test. The Turing test asks if the computer is thinking at as high a level as a human does. The Chinese Room asks if the human is following instructions at as low a level as a computer does.

    I don't necessarily find the Chinese Room persuasive, but at least it is asking a pertinent question. Over many years of reading arguments about artificial intelligence, I've always thought that the disagreements are not about what machines can do but about what humans do.

  3. mkvf said,

    November 28, 2024 @ 1:36 pm

    Peter Taylor is right, I think. The summary you're working from confuses the point a lot. The way I remember it being explained was:

    I have a penpal in China. I don't speak Chinese. When they write, I go to the Chinese Translation Bureau, and post their letter through the door. Out comes an English translation. It is so fluent that I think there is a Chinese speaker working in the bureau. But how do I know it's not another English-speaker, with a well written set of instructions for turning characters into English?

    The point being, being able to engage in conversation tells us nothing about the being (?) we think we're conversing with. That a machine can generate speech doesn't mean it has intelligence.

  4. Xtifr said,

    November 28, 2024 @ 3:06 pm

    Mmm. What if a "sufficiently advanced" alien built a giant model of the brain of a human that spoke Chinese, and then had another human that didn't speak Chinese operate that model?

    I'm not convinced that the Chinese room tells us anything useful about understanding (or language), but it does help show the difference between a computer and a program. And it may help highlight some of the ways a deliberately created intelligence might differ from a naturally evolved one.

  5. AntC said,

    November 28, 2024 @ 4:33 pm

    Thank you Peter T. Indeed. Prof Mair doesn't quote any of Searle's actual words. Furthermore, I believe Searle rephrased or reformulated the experiment several times — which perhaps would defend it against Prof Mair's criticisms.

    The Stanford Encyclopedia quotes several of Searle's formulations. "door separating them" (that is, two actors) doesn't appear. The wikip article refs Stanford — in the sense of plagiarises and mis-represents, but seems not to quote Searle's actual words specifying the experiment. Its third paragraph in section 'Searle's thought experiment' seems close to the Stanford quote from Searle 1999, and quite different to what Prof Mair quotes from wikip (its second para), as @PT points out.

    Wikip enshittification, I suspect.

  6. Jonathan Smith said,

    November 28, 2024 @ 6:14 pm

    This version seems to reflect Searle's original Behavioral and Brain Sciences article, but should be cited as 1980.3 or 3.3 (pp. 417-424, plus much discussion thru p. 454 by the author and others that goes through all kinds of permutations of the premise, including those imagined above and in the other thread). Indeed none of the terms listed in the OP appear. Well OK, the word "door" appears. McCarthy's (note d. 2011) linked remarks on e.g. "What is required for a Chinese room that passes the Turing test?" seem quaint post-ChatGPT… whereas Searle's often apply eerily well to the new-age Elizas and to those strangely infatuated with them (e.g., p. 423: "Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states" but "we should be able to overcome this impulse", cough).

    On this and other fronts, Searle's argument is looking better than ever. Sadly it must be noted that Searle the person wound up disgraced for sexual harassment, and worse, of female research assistants. Sigh…

  7. AntC said,

    November 28, 2024 @ 8:38 pm

    In my estimation, Searle's so-called "Chinese room argument" is nothing but a complicated and improbable form of the more reasonable and workable Turing Test,

    Well, yes: Searle's own writings and the Stanford article say the experiment is developed from the Turing Test. Furthermore, that explains the set-up: a closed door/communication by marks on paper, so that the observers can't see into the 'machine room' to observe flesh-and-blood (or not) [**]; and can't hear the chatter of machinery (or not); and not voiced output because artificial speech wasn't a thing in Turing's day.

    [**] Or rather whether the flesh-and-blood is producing the output by laboriously going through a set of printed rules and hunting through the 'database' of prepared Chinese characters; or just writing (the English answers to the English questions) by pure ratiocination.

    Something else that irks me no end, namely, why Chinese?

    I [Searle] know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.
    [from Jonathan's link, thank you]

    Presumably if the squiggles looked like an alphabet and were arranged in what looked like words/sentences/paragraphs, there'd be a danger of the unseen operator dimly 'understanding' something of what they were engaged in.

  8. John Baker said,

    November 28, 2024 @ 10:12 pm

    There are a number of ways in which I find the Chinese room thought experiment unconvincing, but I start with the assumption that the English-speaking human in the room can produce perfect Chinese. All the evidence, I believe, is to the contrary. Since the thought experiment is comparing a computer to a human performance that has never existed and probably could never exist, that does not seem to me to be a useful comparison.

    For reference, here is how Searle first described it:

    “One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a "script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call the "program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my "answers"—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.”

  9. ~flow said,

    November 29, 2024 @ 2:33 am

    Thanks to John Baker for salvaging this discussion with an actual quote. There are several mutually incompatible versions of the Chinese Room Argument given in the OP and the comments, maybe because Searle himself published more than one version. The quote given by JB is quite understandable, much more so than most of the renderings I've read in the past 40 or so years. Finally I believe I can comment on the CRA: It is, quite simply, comparing written conversations between a human who will be the judge on the nature of his hidden partner and a partner who is either a human native speaker, or a human proxy following a rule book (a program). The latter could've been replaced by a room of human calculators or an electronic computer, but presumably Searle chose a human and a rule book to dispel any misconceptions about "electronic brains" and the like. As such, the argument is in no way confined to electronic computers; anyone or anything that follows fixed rules will do.

    Now in Searle's day it was an open question whether a human with a rule book could or could not possibly produce meaningful 'lifelike' answers, but today we can say with confidence that the answer is yes, even though an actual rule book is more like a big library and the computing times will be on the order of centuries or worse, per answer. That does not take away from the validity of the argument.

    So after several decades of grappling with this argument, I think I can say: to the degree that humans today interact with LLMs like they would do with humans, and also praise and denounce their interaction partners with anthropomorphizing terms like "it/he/she is lying to me", to that degree the mechanical computer+program or human+rule book combo has successfully emulated part of what makes humans appear intelligent. Which, it should be added, is of course not at all proof that the computer+program is a good model of how human language processing works.

  10. Philip Taylor said,

    November 29, 2024 @ 5:41 am

    The fact that « humans today interact with LLMs [as] they would […] with humans, and also praise and denounce their interaction partners with anthropomorphizing terms [such as] "it/he/she is lying to me" » tells us, I think, far more about the intelligence (or otherwise) of the humans who behave in this way than it does about the understanding (or otherwise) of the LLMs.

  11. Victor Mair said,

    November 29, 2024 @ 8:58 am

    @John Baker

    Bless your soul for engaging meaningfully with the argument of the original post.

  12. Doug said,

    November 29, 2024 @ 9:30 am

    Daniel C. Dennett's book, "Intuition Pumps and Other Tools for Thinking" has a discussion of the Chinese Room that may be of interest to people here.

  13. Peter Taylor said,

    November 29, 2024 @ 9:33 am

    John Baker wrote:

    There are a number of ways in which I find the Chinese room thought experiment unconvincing, but I start with the assumption that the English-speaking human in the room can produce perfect Chinese. All the evidence, I believe, is to the contrary. Since the thought experiment is comparing a computer to a human performance that has never existed and probably could never exist, that does not seem to me to be a useful comparison.

    But that assumption is nowhere in the original experiment, which you quoted above. The comparison is between an English-speaking human replying in English and a finite-state-machine-based process, executed either by a computer or by the English-speaking human, which replies in Chinese. Perfection is nowhere mentioned. The assumptions are that an English-speaking human can produce English good enough to pass for a native speaker's response and that an AI can produce Chinese good enough to pass for a native speaker's response.

  14. John Baker said,

    November 29, 2024 @ 12:51 pm

    Peter, Searle wrote: “Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.”

    My point is that there is no evidence that a human could do this. In fact, it sounds most implausible. And if the argument is that we know a human could do this, given sufficient time, because a computer can do it, then that makes the whole thing circular.

  15. Peter Taylor said,

    November 29, 2024 @ 5:11 pm

    If your objection is that people are not as good as silicon at executing algorithms without making mistakes, then e.g. Matt Parker's massive collaborations to manually compute pi (e.g. https://www.youtube.com/watch?v=LIg-6glbLkU ) would tend to corroborate that. In fairness to Searle, back when he wrote this Dijkstra was still teaching computer programming courses in which the students had to manually execute their programs for the first two years of study and didn't get to feed them to a computer until their third year. But focusing on the human error rate misses the point, which is that explicitly executing a finite state machine is not the same cognitive process as the one I am currently engaged in: thinking about what you've written and selecting words to form an adequate response.

    I should perhaps state that I don't have a horse in this race. It seems to me that the position which Searle was trying to refute, that if FSMs are capable of emulating human cognition, this tells us something about what human cognition is or how it works, is an extraordinary claim. I have a smartphone which is capable of emulating the sounds of hundreds of bird species, but examining the smartphone isn't going to teach me anything worthwhile about how avian vocal systems work. On the other hand, contra Searle, the fact that I am not conscious of executing an FSM as I type this provides no evidence as to whether or not my neurons are in fact an FSM.

  16. AG said,

    November 29, 2024 @ 7:31 pm

    I find it amusing that our understanding of the Chinese Room fell victim to Chinese Whispers.

  17. Vampyricon said,

    November 29, 2024 @ 11:21 pm

    There are a number of ways in which I find the Chinese room thought experiment unconvincing, but I start with the assumption that the English-speaking human in the room can produce perfect Chinese. All the evidence, I believe, is to the contrary.

    Hear, hear! The version I've heard also specifies that the conversation is taking place in real time (as part of "indistinguishable from a native speaker"), which introduces many more complications, but the intuition pump is that a person remembering these rules can converse with others without "understanding" "Chinese". The issue is that someone who can converse with native speakers at a native level without outside help obviously understands "Chinese" (whatever that is).

    In fact, what I would say is that I find such a person indistinguishable from an interpreter, and interpreters understand both languages, therefore this person understands "Chinese". By extension then, this ruleset+person combination understands "Chinese", even though its components don't. If this sounds absurd, ask yourself if any one of your neurons understand any single thing that you understand.

  18. AntC said,

    December 1, 2024 @ 4:45 am

    assumption that the English-speaking human in the room can produce perfect Chinese.

    specifies that the conversation is taking place in real time

    Let's quote from the original Turing 1950 'The Imitation Game', in response to a Q to add two 5-digit numbers:

    A : (Pause about 30 seconds and then give as answer) 105621.
    [Which is not correct — forgotten to add the carry-up from the tens]

    So Turing already had built in to the Gedankenexperiment that the machine (or is it?) might mimic a human's poor performance at arithmetic — both in terms of thinking-time and accuracy. The machine's programming anticipates that since electronic brains are well-known for being faster and more reliable than humans at math, the interrogator would try to unmask it as a calculating engine. This is a counter-ruse to appear as human as humans.

    So both @JB's and @Vamp's points are wide of the mark. Searle is presuming/presupposing for the sake of argument that the machine has passed 'The Imitation Game'/The Turing Test. He's probing whether that's sufficient evidence for … anything / "is a mind" [Searle].

    Whoever programmed the machine / wrote the instructions that Searle's dumb human is following clearly has a mind. Writing the instructions is the "outside help".

    In the case of LLMs we don't even have programming of the machine to do math calculations or respond to specific inputs (questions) in Chinese: LLMs merely try to match a similar 'conversation' in their repository, and mimic that. Famously they're terrible at arithmetic, because their repository is unlikely to have an example of adding exactly those two 5-digit numbers.

  19. Milan said,

    December 2, 2024 @ 3:48 pm

    Why did Searle choose Chinese for his example?

    Searle was almost certainly influenced by racist tropes about the Chinese language(s) in his choice. However, there are some other reasons. With many languages written in the Latin script, an English speaker would be able to recognise a fair number of Greek, Latinate or English loanwords. Even with the Greek and Cyrillic script, a learned Anglophone may recognise some loans. She may even start to recognise certain word classes, e.g. pronouns or articles. Thus, an objector might have said: 'Ha, she understands the language, just not as well as her native language'. Searle is preempting this by choosing a writing system that the operator does not understand.

    Furthermore, with alphabets and abjads there is a more straightforward correspondence between writing and sound. An objector might think that in the English task, the speaker is first decoding the writing into representations of sounds. Then, she is processing these representations of sounds as language. In reading an unknown language written in the Latin script, the person in the room may similarly first decode the writing into representations of sounds, and then manipulate the representations of sounds. Thus, at least some of the processing in both cases would be the same. Even with an unknown alphabet the person in the room may attempt some such decoding. However, with a Sinographic script, they wouldn't know where to start.

    I'd offer the 'Finnish room', as a slightly redescribed version of the 'Chinese room'. In the 'Finnish room', a monolingual English speaker is placed in a room. In the first task, they are given a numerical representation of an English-language textfile. They mechanically translate the numerical representation into readable English text, write an answer, encode it and return the numerical representation. The 'reader' decodes the representation. In the second task, they are given a similar representation of Finnish text. They do NOT decode that representation. Rather, they perform a series of manipulations on the numerical representation, as prescribed by the instructions of the computer programme. They return the result. According to Searle, the person in the room knows English, but not Finnish.

  20. Speedwell said,

    December 2, 2024 @ 9:40 pm

    @Milan: I'd offer the 'English room', as a slightly redescribed version of the 'Finnish room'. In the 'English room', a monolingual English speaker is placed in a room. In the first task, they are given a numerical representation of an English-language textfile. They mechanically translate the numerical representation into readable English text, write an answer, encode it and return the numerical representation. The 'reader' decodes the representation. In the second task, they are given another representation of the same English text. They do NOT decode that representation and presumably could not have memorised the first complex encoding. Rather, they perform a series of manipulations on the numerical representation, as prescribed by the instructions of the computer programme. They return the result. According to Searle, the person in the room knows English with respect to the first task, but not the second.

    Have I got that right?

  21. Milan Ney said,

    December 3, 2024 @ 1:17 pm

    @Speedwell,

    Well, they do not draw on their knowledge of English in the second task. To be more precise, the person in the Finnish room might know Finnish (and the person in the Chinese room might know Mandarin, and just lack knowledge of Hanzi). However, their performance provides no evidence that they know Finnish (or Chinese). The contrast is clearer with two different languages.
