LLMs that quack like a duck


A letter to the editor on the essential nature of LLMs from the Times Literary Supplement (5/30/25):

 Large language models

As someone who has spent the past few years working out what AI means to academic journals, I found Melanie Mitchell’s excellent review of These Strange New Minds by Christopher Summerfield (May 16) full of challenging, but often disputable, assertions.

Mitchell quotes the author’s version of the Duck Test: “If something swims like a duck and quacks like a duck, then we should assume that it probably is a duck”. But, as we all know, if it quacks like a duck, it could be anything at all. Anybody stuck with the notion that only real ducks quack must be seriously confused about their childhood doll, which surely said “Mama” when tilted. In this case, the quacking duck is AI and the “Mama” it emits is chatbot information, or “botfo”, which is as much a mechanical product as the piezo beeper responsible for the doll’s locution.

Unfortunately, the history of AI is littered with rotten metaphors and weak similarities. For example, the “neural networks” of AI are said to “mimic” the way actual brain-resident neurons operate. The choice of language is typically anthropomorphic. Neural networks are a marvellous breakthrough in computer programming, but neurologists tell us that this is not remotely how neurons actually work. The metaphor is stretched too far.

Then there is the “alignment problem” – the existential fear that AI may not align with human intentions, resulting in the end of the human race. This is usually introduced with frankly preposterous examples such as the one quoted: AI trashing our civilization when asked to fix global warming. Nick Bostrom’s original example was apocalypse resulting from AI being asked to produce paper clips. All amusing, but absurd and plainly unrealistic, since humans will continue to supply the prompts.

Professor Mitchell cannot be blamed for retailing Summerfield’s notions, but she does add one of her own – that AI large language models “put the final nail in the coffin” containing Chomsky’s assertion “that language is unique to humans and cannot be acquired without some sort of innate mental structure that is predisposed to learn syntax”.

This is mistaking the fake duck quack for a real one. The statistically generated language of chatbots bears no resemblance to human language because it lacks what all human utterance has – intentionality. In AI, the only intention behind the language is that supplied by the human who prompts the software….

Chris Zielinski
Romsey, Hampshire

Selected readings

See the Language Log archive on Artificial intelligence

[Thanks to Leslie Katz]



6 Comments »

  1. Randy Alexander said,

    June 8, 2025 @ 7:39 am

    It's refreshing to see people debunking AI assumptions like this. But unfortunately this sort of debunking is not widespread enough. It makes me cringe inside when I hear people I respect intellectually talk about things like AI becoming conscious.

  2. Gregory Kusnick said,

    June 8, 2025 @ 10:17 am

    The statistically generated language of chatbots bears no resemblance to human language because it lacks what all human utterance has – intentionality.

    This strikes me as nonsense on two counts. First, the whole point of LLMs, and the reason for their success, is their ability to produce a very convincing semblance of human language. And second, quite a large proportion of human utterances are merely reflexive, thoughtless chatter, with no more genuine intentionality than the output of a chatbot.

  3. Jonathan Smith said,

    June 8, 2025 @ 11:56 am

    Continuing the struggle to find a metaphor that will get through to LLMphiles, let us turn to Plato's dialogue in which the timeless question is posed "wouldya rather eat chocolate [Form] appearing to the senses to be shit, or shit [Form] appearing to the senses to be chocolate?" LLMs produce shit which is chocolate-like enough for enthusiasts to lap it up on tap straight from the um semicolon. Just Like the Real Thing they say! Not healthy, turns out…

    So reference to "intentionality" is arguably not exactly hitting the nail on the head but is certainly not nonsense.

  4. Chris Button said,

    June 8, 2025 @ 12:13 pm

    "Neural network" is great branding though (technically it's called an "artificial neural network" in this context though). To take another emerging tech, compare the name "blockchain" to what it really is: a kind of "distributed ledger technology". Doesn't quite have the same ring to it.

  5. Tim Leonard said,

    June 8, 2025 @ 1:13 pm

    "In AI, the only intention behind the language is that supplied by the human who prompts the software…."

    Vastly more is implicitly extracted from the training data. But the training data was, again, generated by humans.

  6. Gregory Kusnick said,

    June 8, 2025 @ 2:35 pm

    Whether you call it "intentionality" or something else, Zielinski's argument smacks of an appeal to some sort of "secret sauce" that renders human speech Meaningful with a capital M in a way that mechanically produced speech never can be even in principle. One needn't have drunk the LLM Kool-aid to reject that sort of quasi-dualism in favor of a naturalistic view that considers human brains as (collections of) sophisticated mechanisms for producing language.
