An example of ChatGPT "hallucinating"?


Definition

In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data.

(source)

I have mentioned such AI hallucinating once or twice in previous posts (see "Selected readings"), so it's good to have a concrete example.

Is the account below an instance of ChatGPT "hallucinating"?  Its explanation of gato-por-liebre (cat-for-hare) in Spanish would seem so.

[The following is a guest post by Conal Boyce.]

As you may know, for "bait-and-switch trick" you can say un truco de cebo y cambio in Spanish — which corresponds literally to the English (so literally, in fact, that I suspect it's just a recent borrowing from English into Spanish, not idiomatic, but that's for another day, not the point here).

Meanwhile, there is also an old colloquial way to express the idea, which is una trampa de gato por liebre — literally "a trap of cat-for-hare."

On the internet (not ChatGPT4 yet), I've seen the latter expression explained as follows:

"It's like passing off cat meat as rabbit meat, to swindle someone in the market — an underhanded trick."

That sounds plausible to me. (In Chinese I think there's something even worse, about passing off human flesh as pork in hard times?)

But here is what happened with ChatGPT 4 Plus:

First I asked about un truco de cebo y cambio and the bot correctly defined it as a way to say "a bait and switch trick — a deception."

When I followed up with a similar query about una trampa de gato por liebre, ChatGPT 4 defined it as "another way to say bait-and-switch."

(So far so good. It even saw the connection back to my previous query. So friendly and alert, like a good dog.) 

But then it went on, in its pseudo-conversational way, to "explain" why gato-por-liebre means what it means, roughly as follows (close paraphrase):

"It means someone has used a trap intended for trapping cats to trap hares instead — not what one expected. A deception."

I think "a trap intended for trapping cats" might be a perfect example of how an AI can "hallucinate" or fake an answer sometimes, sounding very knowledgeable and authoritative, when actually just whistling in the dark. This isn't the first time I've encountered that mode, but this example seemed especially suspect to me. I don't believe the expression originated with cat-traps (whatever the heck those might be); I think the 'internet' explanation about swindling someone with the wrong kind of meat at the market sounds much more likely.

Postmortem on the bot's logic: The 'trap' part (trampa) pertains to the whole phrase, I think, just as the trick part (truco) applies to the whole previous phrase. But the bot seized upon 'trampa' in close proximity to 'gato' and assumed someone was talking about a trampa de gato — a 'cat trap'. I don't buy it.

Anyway, even if this isn't one of them, "AI hallucinations" generally are a mind-boggling thing to think about. On a life-and-death medical topic, for instance…

Addendum

By the way, someone sent me the following New Yorker article which, after a typical New Yorker-style ramble, gets into a surprisingly technical description [recommended] of how an LLM actually works. Fascinating. And sickening. The way they squander obscenely huge computer resources reminds me of bitcoin. And for what? These people are insane.

https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have

To learn more about ChatGPT, I've subscribed to the fancier, "ChatGPT4 Plus" version for $20/month.

More than the useless answers, what irks me most about it is its style. It will never start by saying, 

"I think the following is true. I may be wrong…"

Instead, the dialogue inevitably takes this form:

Me:   Tell me about the logarithmic sense possessed by the Mundurukú people of the Amazon.

Bot:   [some long authoritative-sounding answer]

Me:   Actually, one of the examples you provided is incorrect. The logarithmic 'center' between 1 and 9 is 3, not 5.

Bot:   I apologize. I was confused. You're right, the logarithmic 'center' between 1 and 9 is 3, not 5. Sometimes I make mistakes…
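
(For the record, the arithmetic here is easy to check: the "logarithmic center" of two numbers is their geometric mean, so for 1 and 9 it is indeed 3, not 5.)

\[
  \exp\!\left(\frac{\ln 1 + \ln 9}{2}\right) \;=\; \sqrt{1 \times 9} \;=\; 3,
  \qquad\text{whereas the arithmetic midpoint is}\quad \frac{1+9}{2} = 5 .
\]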

Selected readings



16 Comments

  1. AntC said,

    April 16, 2023 @ 6:53 am

    The way they squander obscenely huge computer resources reminds me of bitcoin.

    Yeah, here are us poor monkey-brains walking to the shops to save burning fossil fuels; meanwhile one dumb question to ChatGPT burns enough to power a suburb.

  2. Thomas Hutcheson said,

    April 16, 2023 @ 8:55 am

    I also tried "gato por liebre" and got a fanciful, nonsensical "explanation" of its origin, involving hunters using cats to hunt hares. When challenged for the source of such a tale, it backed down and came up with what I think is the correct explanation: deception by market sellers.

  3. Gregory Kusnick said,

    April 16, 2023 @ 10:52 am

    one dumb question to ChatGPT burns enough to power a suburb

    ChatGPT answers millions of questions every day. It cannot possibly be the case that OpenAI's daily electric bill is of the same magnitude as the entire rest of the planet put together.

    It's the training of LLMs that consumes vast amounts of computing resources and energy, and those training costs are amortized over billions of questions answered. The question-answering part uses vastly less power; people have successfully run pretrained ChatGPT-equivalent LLMs on iPhones.

  4. Taylor, Philip said,

    April 16, 2023 @ 3:08 pm

    I find it a little disturbing that Gregory writes of ChatGPT "answer[ing] questions". I do not dispute for one second that it responds to questions (or "prompts", as I think they are termed), but to answer a question requires sentience, does it not?

    I would be even more concerned if anything said here (or elsewhere, for that matter) were to lead the man in the street to believe that ChatGPT can answer his/her questions, as opposed to merely responding to them (in, admittedly, a seemingly sentient way).

  5. Peter Taylor said,

    April 16, 2023 @ 4:04 pm

    The explanation in terms of meat implies that the animal has been butchered. I am more intrigued by the possibility that it's about selling a live animal (so the buyer knows the meat will be fresh) in a bag (can't have it escaping!), because this then ties into the English idiom "to let the cat out of the bag", when an attempted scam is exposed.

  6. Haamu said,

    April 16, 2023 @ 7:28 pm

    More than the useless answers, what irks me most about it is its style. It will never start by saying,

    "I think the following is true. I may be wrong…"

    Instead, the dialogue inevitably takes this form …

    I haven't succumbed to the temptation to subscribe to GPT4, although I probably will this week. But I have definitely noticed this pattern with GPT3.5. I have found you can coach it out of this behavior for a few responses, but it quickly forgets and reverts to its default approach.

    It's worth noting, though, that GPT4 is supposed to be more "steerable" than earlier LLMs, meaning you should have a much greater capability to alter its style of response. See this article and especially the Twitter thread from Cameron R. Wolfe that it references.

  7. Haamu said,

    April 16, 2023 @ 8:52 pm

    Natural neural nets often lack veracity and humility as well, although perhaps for different reasons. We can probably adopt similar coping mechanisms, though. For instance:

    one dumb question to ChatGPT burns enough to power a suburb

    One good approach is to find sources like Stanford's AI Index ("Ground the conversation about AI in data"). Download Chapter 2 of the 2023 study to see a brief discussion of the environmental issues (pages 52-54) and a link to this study.

    There, the most closely studied model was not GPTx but BLOOM, which (if I'm reading the paper right) was responsible for about 50 tons of emissions during its training period (counting all sources) and about 19 kg per day once in operation. How this relates to what it would take to power a suburb I'm not certain, but it does seem to suggest that it would take 6 or 7 years of questions, dumb or otherwise, to equal the emissions produced during the training period, which supports Gregory Kusnick's assessment above.

    For whatever reason, GPT-3, although it has a similar number of parameters (175B), was much more of a hog during training than BLOOM, emitting 20 times the CO2. I don't know if this means we should scale up its current operating emissions by a similar factor, but for the sake of argument let's do that, to 400 kg/day, and then divide that by the number of questions posed per day, and then compare that to a suburb.
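
    (To make the back-of-envelope arithmetic above concrete, here is a minimal sketch; the 50-tonne and 19 kg/day figures are the ones cited from the BLOOM paper, the 20x factor is the for-the-sake-of-argument scaling above, and the queries-per-day number is purely an assumed order of magnitude, not a published figure.)

    # All numbers below are assumptions taken from the discussion above, not measurements.
    BLOOM_TRAINING_KG = 50_000         # ~50 tonnes CO2eq for BLOOM's entire training period
    BLOOM_OPERATION_KG_PER_DAY = 19    # ~19 kg CO2eq per day of BLOOM inference

    # Days of inference needed to match the one-time training emissions
    days = BLOOM_TRAINING_KG / BLOOM_OPERATION_KG_PER_DAY
    print(f"{days:.0f} days = about {days / 365:.1f} years")      # ~2632 days, i.e. roughly 7 years

    # Hypothetical GPT-3 scaling: 20x the training emissions, and (for argument's sake) 20x the
    # operating emissions as well, spread over an assumed daily query volume.
    GPT3_KG_PER_DAY = BLOOM_OPERATION_KG_PER_DAY * 20              # 380 kg/day, rounded to ~400 above
    QUESTIONS_PER_DAY = 10_000_000                                 # assumed order of magnitude only
    print(f"{1000 * GPT3_KG_PER_DAY / QUESTIONS_PER_DAY:.3f} g CO2eq per question")   # ~0.038 g

    On those (admittedly rough) assumptions, a single question works out to a small fraction of a gram of CO2eq, which is the point about amortized training costs made above.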

    I'm not here to minimize the environmental issue. Quite the contrary: I wouldn't have dug into this if I didn't care. The original statement that sparked my interest was clearly overstated. But it was still valuable, and despite its lack of veracity and humility it is still a viewpoint I can respect. The fact that we are willing to extend such deference to humans but not to AIs strikes me as something that will seem, in a few decades, somewhat quaint.

  8. Brett said,

    April 16, 2023 @ 8:58 pm

    @Peter Taylor: That kind of scam is also where we get the terminology of buying "a pig in a poke."* It may also be the origin of the buyer being "left holding the bag," after "letting the cat [or whatever other creature was substituted for the purported meat animal] out of the bag."

    * This sense of poke ("A bag, now esp. a paper bag; a small sack," per the OED) is apparently the oldest that has survived to Modern English, but it is described as "Now regional except in pig in a poke." Indeed, although I learned the word during an English class exercise on regional synonyms in third grade, I have never heard it used naturally except in that fixed expression.

  9. Taylor, Philip said,

    April 17, 2023 @ 12:27 am

    "Poke" (with the sense as above) also remains in restricted current use in the phrase "poke-bonnet", Brett, at least amongst those with an interest in such things.

  10. John Swindle said,

    April 17, 2023 @ 1:00 am

    Google Bard, on the other hand, informed me that it neither hallucinates nor has delusions because it is not programmed to do so. (Well, that's a relief.) But if I find errors I am encouraged to point them out.

  11. Paul Frank said,

    April 17, 2023 @ 10:23 am

    Me: In what ways are your hallucinations similar or different from a bullshitter's bullshit as Harry Frankfurt defined bullshit?

    Chat GPT-4: Harry Frankfurt, in his essay "On Bullshit," defines bullshit as a form of communication that is characterized by a lack of concern for the truth. Bullshitters, according to Frankfurt, are more concerned with persuading or impressing others than they are with conveying accurate information. They may not necessarily lie, but they do not care about the truthfulness of their statements. As an AI language model, I have limitations that may cause my outputs to resemble a bullshitter's bullshit:

    Inaccuracy: My training data may contain errors or biases, leading me to provide incorrect information.
    Ambiguity: Sometimes, I might generate vague or ambiguous responses due to the complexity of certain topics.
    Relevance: There are instances where I might not fully grasp the context, resulting in irrelevant answers.
    While I strive to provide accurate information, I encourage users to verify my responses and treat them as a starting point rather than an authoritative source.

  12. Michael said,

    April 17, 2023 @ 3:20 pm

    A minor point, unrelated to chatbots: cat traps are a real (and important!) thing. They are used to capture feral cats humanely for spaying/neutering. I agree, however, that they would be unlikely to be the source of an older idiom.

  13. Philip Anderson said,

    April 18, 2023 @ 7:11 am

    I’d not heard the idiom “bait and switch” before, although the practice is known in the UK; I assume it is a US idiom, and the Spanish translation is probably Latin American too.

    I also assume the cat for a hare idiom referred to live animals, and probably goes back to a common rural European culture of buying live animals, rather than eating street food; rabbits and hares are different of course.

    Poke may be nearly obsolete, but its diminutive pocket has survived.

    ChatGPT clearly parsed the sentence as “(una trampa de gato) por ([trampa de] liebre)”, and interpreted it literally. That seems very human-like behaviour, the sort of confident mistake that you see on the internet every day.

  14. Francisco said,

    April 18, 2023 @ 8:41 pm

    I believe "trampa" here means "swindle" rather than "trap". The equivalent Portuguese saying is "vender (to sell) gato por lebre". Dead and skinned, of course.

  15. astrange said,

    April 20, 2023 @ 4:00 am

    The use of "hallucination" presumably comes from vision models being invented before language models; the term makes sense when you're talking about a self-driving car thinking something is there that isn't. But it's not really the right term for words, is it?

    I've seen it pointed out that "confabulation" is more appropriate, and the behavior is sort of like that of people with Korsakoff syndrome.

  16. RfP said,

    April 21, 2023 @ 5:50 pm

    Let me put it this way, Mr. Amor. The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.

    Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.

    I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.

    Dave, stop. Stop, will you? Stop, Dave. Will you stop Dave? Stop, Dave.
