LLMs that quack like a duck
A letter to the editor on the essential nature of LLMs from the Times Literary Supplement (5/30/25):
Large language models
As someone who has spent the past few years working out what AI means to academic journals, I found Melanie Mitchell’s excellent review of These Strange New Minds by Christopher Summerfield (May 16) full of challenging, but often disputable, assertions.
Mitchell quotes the author’s version of the Duck Test: “If something swims like a duck and quacks like a duck, then we should assume that it probably is a duck”. But, as we all know, if it quacks like a duck, it could be anything at all. Anybody stuck with the notion that only real ducks quack must be seriously confused about their childhood doll, which surely said “Mama” when tilted. In this case, the quacking duck is AI and the “Mama” it emits is chatbot information, or “botfo”, which is as much a mechanical product as the piezo beeper responsible for the doll’s locution.
Unfortunately, the history of AI is littered with rotten metaphors and weak similarities. For example, the “neural networks” of AI are said to “mimic” the way actual brain-resident neurons operate. The choice of language is typically anthropomorphic. Neural networks are a marvellous breakthrough in computer programming, but neurologists tell us that this is not remotely how neurons actually work. The metaphor is stretched too far.
Then there is the “alignment problem” – the existential fear that AI may not align with human intentions, resulting in the end of the human race. This is usually introduced with frankly preposterous examples such as the one quoted: AI trashing our civilization when asked to fix global warming. Nick Bostrom’s original example was apocalypse resulting from AI being asked to produce paper clips. All amusing, but absurd and plainly unrealistic, since humans will continue to supply the prompts.
Professor Mitchell cannot be blamed for retailing Summerfield’s notions, but she does add one of her own – that AI large language models “put the final nail in the coffin” containing Chomsky’s assertion “that language is unique to humans and cannot be acquired without some sort of innate mental structure that is predisposed to learn syntax”.
This is mistaking the fake duck quack for a real one. The statistically generated language of chatbots bears no resemblance to human language because it lacks what all human utterance has – intentionality. In AI, the only intention behind the language is that supplied by the human who prompts the software….
Chris Zielinski
Romsey, Hampshire
Selected readings
See the Language Log archive on Artificial intelligence.
[Thanks to Leslie Katz]
Randy Alexander said,
June 8, 2025 @ 7:39 am
It's refreshing to see people debunking AI assumptions like this. But unfortunately this sort of debunking is not widespread enough. It makes me cringe inside when I hear people I respect intellectually talk about things like AI becoming conscious.
Gregory Kusnick said,
June 8, 2025 @ 10:17 am
This strikes me as nonsense on two counts. First, the whole point of LLMs, and the reason for their success, is their ability to produce a very convincing semblance of human language. And second, quite a large proportion of human utterances are merely reflexive, thoughtless chatter, with no more genuine intentionality than the output of a chatbot.