Searle's "Chinese room" and the enigma of understanding

In this comment to "'Neutrino Evidence Revisited (AI Debates)' | Is Mozart's K297b authentic?" (11/13/24), I questioned whether John Searle's "Chinese room" argument was intelligently designed and encouraged those who encounter it to reflect on what it did — and did not — demonstrate.

In the same comment, I also queried the meaning of "understand" and its synonyms ("comprehend", and so forth).

Both the "Chinese room" and "understanding" had been raised by skeptics of AI, so here I'm treating them together.

I will say flat out that I don't think the Chinese room argument proved anything useful or conclusive with regard to AI.  I could talk at much greater length about the weaknesses of the Chinese room, but — in the interest of efficiency — I will simply point out one fatal flaw (or rather a complex of weaknesses that amount to a fatal flaw) in its construction.

Here's the Chinese room argument in a nutshell:

The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.

The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Although its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general. While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.

(Wikipedia)
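Before turning to the flaws, it may help to see how little machinery the thought experiment actually requires. Here is a minimal sketch in Python (my own illustration, not Searle's, with a hypothetical two-entry rule book) of the kind of purely formal symbol shuffling the room is imagined to perform:

    # A toy "rule book": purely formal mappings from input strings of
    # Chinese characters to output strings. Both entries are hypothetical
    # illustrations; nothing here comes from Searle's paper.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
    }

    def reply_to_note(note: str) -> str:
        """Return the scripted reply for a note slipped under the door.

        The lookup matches character shapes against the rule book and
        attaches no meaning to them, which is exactly Searle's point.
        """
        return RULE_BOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

    print(reply_to_note("你好吗？"))  # prints 我很好，谢谢。

Whether any finite rule book of this sort could ever "converse perfectly", rather than collapse at the first unscripted question, is one of the unexplained assumptions examined below.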

Fans / devotees of Searle will not be happy with what I say in the following paragraphs.  He makes far too many assumptions for comfort, and I do not believe that we — the recipients of his argument — should be obliged to smooth over his unexplained assumptions.  A surfeit of unexplained assumptions amounts to an abandonment, rejection, or negation of the argument itself.

I will not point out every defect in the Chinese room argument, but will merely signal a series of unexplained / inexplicable terms / expressions that do not contribute to the goal of Searle's contention.  Whether or not Searle used all of these exact terms, I trust Wikipedia enough to believe that they convey the gist of Searle's intent.

For a much more exhaustive, philosophically professional account of Searle's experiment with extensive documentation and quotations, see Larry Hauser, "Chinese Room Argument", Internet Encyclopedia of Philosophy; also David Cole, "The Chinese Room Argument", Stanford Encyclopedia of Philosophy (First published Fri Mar 19, 2004; substantive revision Wed Oct 23, 2024).

converse — This generally signifies exchanging ideas, information, etc. through talking.  Who / What taught the computer to talk?  How does it talk?  By writing / typing out its statements and answers?  By listening to the human's questions and answers?  But don't forget that the door is there to prevent that from happening, except for the writing on the slips of paper passed beneath the door (which defeats the purpose of the door).

perfectly — Not only does the computer allegedly speak Chinese, it does so perfectly.  How did it gain this mastery?

in Chinese; only in English — Somebody or something has to translate between the two, but Searle completely leaves that essential step out; this is where I almost stopped engaging with Searle's defective reasoning and had to force myself to continue.

door — Who put it between the Chinese speaking computer and the English speaking human?  That's an arbitrary act without an actor.  Moreover, the door is ostensibly meant to separate the computer and the human, but then Searle cheats by having an opening at the bottom of the door through which it is easy to slip pieces of paper with Chinese characters written on them.

speaking and writing — In Searle's experiment, there is actually no speaking going on, only writing.  Who does the writing?  Who is slipping those pieces of paper under the door?  How did the computer and the human become literate in the written Chinese that is being slipped under the door on pieces of paper?  Remember that full / "perfect" literacy is a difficult task, whether for a computer or a human.  Especially with maddeningly complex sinographs.  Remember 惡惡, for just a tiny taste?

Enough about the fundamental mechanics of the Chinese room experiment.  I could easily point out many more defects in Searle's argument, not least the fact that it is about two rooms, a Chinese room and an English room, not just a Chinese room.  Something else irks me no end, namely: why Chinese?  Why not Russian or German or Maori or Cantonese or Cia-Cia (in which script — Latin or Arabic or Hangul?), Dungan (in which script — Arabic, Cyrillic, Sinographic?), Naxi / Nakhi / Nashi?

"Chinese" (Mandarin or Shanghainese or Minnan, or…).  I think Searle chose "Chinese" (i.e., Chinese characters), not one of the possible spoken Sinitic languages or topolects or dialects, because of its propensity for mystification, exoticization, and obfuscation.  Does Searle's computer understand so much as a single Chinese character, much less the 10,000 that it would need to know how to "converse" with the human on the other side of the door?

In my estimation, Searle's so-called "Chinese room argument" is nothing but a complicated and improbable form of the more reasonable and workable Turing Test, for which see this abbreviated, straightforward account:

The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, an English computer scientist, cryptanalyst, mathematician and theoretical biologist.

Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans.

During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent was human and which was a computer.

The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have artificial intelligence because the questioner regards it as "just as human" as the human respondent.

—–

"What is the Turing Test?" by Benjamin St. George and Alexander S. Gillis, Tech Target (updated in August 2024)

As for "understanding", I will note only that Searle's "Chinese room" — with a door separating the human and the computer and an opening beneath it to permit the passage of communication between the two — does demonstrate an awareness of the meaning of the significance of that process.

From Middle English understanden, from Old English understandan (to understand), from Proto-West Germanic *understandan (to stand between, understand), from Proto-Germanic *understandaną (to stand between, understand), equivalent to Old English under- (between, inter-) + standan (to stand) (Modern English under- +‎ stand). Cognate with Old Frisian understonda (to understand, experience, learn), Old High German understantan (to understand), Middle Danish understande (to understand). Compare also Saterland Frisian understunda, unnerstounde (to dare, survey, measure), Dutch onderstaan (to undertake, presume), German unterstehen (to be subordinate).

(Wiktionary)

Old English understandan "comprehend, grasp the idea of, achieve comprehension; receive from a word or words or from a sign or symbol the idea it is intended to convey;" also "view in a certain way," probably literally "stand in the midst of," from under + standan "to stand" (see stand (v.)).

If this is the meaning, the under is not the usual word meaning "beneath," but from Old English under, from PIE *nter- "between, among" (source also of Sanskrit antar "among, between," Latin inter "between, among," Greek entera "intestines;" see inter-). Related: Understood; understanding.

That is the suggestion in Barnhart, but other sources regard the "among, between, before, in the presence of" sense of Old English prefix and preposition under as other meanings of the same word. "Among" seems to be the sense in many Old English compounds that resemble understand, such as underfinden "be aware, perceive" (c. 1200); undersecan "examine, investigate, scrutinize" (literally "underseek"); underðencan "consider, change one's mind;" underginnan "to begin;" underniman "receive." Also compare undertake, which in Middle English also meant "accept, understand."

It also seems to be the sense still in expressions such as under such circumstances. Perhaps the ultimate sense is "be close to;" compare Greek epistamai "I know how, I know," literally "I stand upon."

Similar formations are found in Old Frisian (understonda), Middle Danish (understande), while other Germanic languages use compounds meaning "stand before" (German verstehen, represented in Old English by forstanden "understand," also "oppose, withstand"). For this concept, most Indo-European languages use figurative extensions of compounds that literally mean "put together," or "separate," or "take, grasp" (see comprehend).

The range of spellings of understand in Middle English (Middle English Compendium lists 70, including understont, understounde, unþurstonde, onderstonde, hunderstonde, oundyrston, wonderstande, urdenstonden) perhaps reflects early confusion over the elements of the compound. Old English oferstandan, Middle English overstonden, literally "over-stand" seem to have been used only in literal senses.

By mid-14c. as "to take as meant or implied (though not expressed); imply; infer; assume; take for granted." The intransitive sense of "have the use of the intellectual faculties; be an intelligent and conscious being" also is in late Old English.

In Middle English also "reflect, muse, be thoughtful; imagine; be suspicious of; pay attention, take note; strive for; plan, intend; conceive (a child)." In the Trinity Homilies (c. 1200), a description of Christ becoming human was that he understood mannish.

Also sometimes literal, "to occupy space at a lower level" (late 14c.) and, figuratively, "to submit." For "stand under" in a physical sense, Old English had undergestandan.

(Etymonline)

In conclusion, I quote the Stanford cognitive scientist and computer scientist, John McCarthy (from an article on his website):

The Chinese Room Argument can be refuted in one sentence:

Searle confuses the mental qualities of one computational process, himself for example, with those of another process that the first process might be interpreting, a process that understands Chinese, for example.

'Nuff said, but McCarthy, being a sort of philosopher himself, also presents "the argument in more detail" and "the refutation in still more detail".  He also explains what is required for a Chinese room that passes the Turing test, and, having "developed time-sharing, invented LISP, and founded the field of Artificial Intelligence", sensibly emphasizes the role of translation between computer and human, briefly taking on the formidable Willard Van Orman Quine with regard to "the indeterminacy of radical translation".
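McCarthy's distinction between an interpreting process and the process it interprets is second nature to programmers. A minimal sketch (my own illustration, using a hypothetical two-instruction mini-language, not anything from McCarthy's article):

    # The host loop below executes a "guest" program written in a toy
    # stack language. Whatever competence the guest program embodies,
    # the host loop knows nothing of it; it merely dispatches opcodes.
    # That, in code, is McCarthy's point about Searle-in-the-room.
    def run(program, stack):
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack

    # The guest program computes 2 + 3; the dispatch loop does not
    # "know" that it is doing arithmetic at all.
    print(run([("PUSH", 2), ("PUSH", 3), ("ADD",)], []))  # [5]

On this view, asking whether the dispatch loop understands arithmetic is a category mistake; whatever competence there is belongs to the program being run, not to the process running it.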

Searle's case against the notion of artificial consciousness may have been correct, but his Chinese room experiment did not serve to advance it.

 

1 Comment

  1. Peter Taylor said,

    November 28, 2024 @ 9:35 am

    Whether or not Searle used all of these exact terms, I trust Wikipedia enough to believe that they convey the gist of Searle's intent.

    I fear that trust may be misplaced. The very first sentence of the description:

    The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them.

    has almost nothing to do with the situation described in Searle's paper Minds, Brains, and Programs. His setup is that he is locked in a room (presumably to restrict his access to external information) with a listing of an AI computer program which is capable of receiving questions in Chinese and responding to them in Chinese. He then receives questions in English and Chinese: he responds to the English ones using his own intelligence, and to the Chinese ones by manually executing the computer program. The point is that externally it is not apparent that his approach to responding to the two classes of questions is different, but internally he understands the English ones but not the Chinese ones: or at the least (and this appears to be the critical point that he's trying to make) he doesn't understand them in the same way. The experiment is presented as a falsification of the claim that AI cognition is fundamentally the same as human cognition, and that studying AI can explain human cognition.
