AIs on Rs in "strawberry"


The screenshot I show everyone who tells me they're using AI for anything


— Chris PG | PapaGlitch (@papaglitch.bsky.social) Aug 26, 2024 at 5:20 AM


More, from author John Scalzi and a different LLM:

It's worse than that: You can point out to "AI" that there are three "r"s in Strawberry, and after it disagrees with you, work with it to make it acknowledge the correct number, and then, once it agrees with you, ask it the same question in the same thread and it will give the wrong answer again.


— John Scalzi (@scalzi.com) Aug 27, 2024 at 9:44 AM

[Note: "The author of the quoted post has requested their posts not be displayed on external sites" refers to the post John Scalzi quoted, which was therefore omitted from the embedding…]

No strawberries in this one, but it's worth adding:

Update — See Amanda Silberling, "Why AI can't spell 'strawberry'", TechCrunch 8/27/2024:

It’s a bit of a troll that memes about AI being unable to spell “strawberry” are circulating at the same time as reports on OpenAI’s Strawberry. But OpenAI CEO Sam Altman jumped at the opportunity to show us that he’s got a pretty impressive berry yield in his garden.
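Silberling's explanation, in brief, is tokenization: a GPT-style model never sees individual letters, only opaque subword chunks. A minimal sketch of what the model actually receives, assuming OpenAI's tiktoken package is installed (the exact split depends on the encoding):

    import tiktoken

    # How a GPT-style model "sees" the word: subword token IDs rather
    # than a sequence of letters.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print(tokens)                                              # token IDs
    print([enc.decode_single_token_bytes(t) for t in tokens])  # the chunks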

And Kit Eaton, "How Many 'R's in Strawberry? This AI Doesn't Know", Inc. 8/28/2024:

"Hey ChatGPT, how many 'R's are there in the word strawberry?"

"There are two 'R's' in the word 'strawberry.'"

"Are you sure? Because there are three."

"No, count again. S-T-R, that's one R. A-W-B-E-R-R, that's two more R's. Y. Three R's."

"I see what you did there! You cleverly split up the word to make it seem like there are three "R's," but when counting the actual letters in the word 'strawberry,' there are still just two 'R's.' It's a fun twist, though!" […]

Things got weirder. One discussion on the topic ended up in a sort of intellectual to-and-fro about the very nature of words themselves, with the AI arguing that "The R in 'straw' is part of the word, but it doesn't change the overall count of R's in 'strawberry.'"
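For what it's worth, the count being argued over is deterministic at the character level; a two-line check in plain Python:

    word = "strawberry"
    positions = [i + 1 for i, ch in enumerate(word) if ch == "r"]
    print(positions)       # -> [3, 8, 9]: the 'r' in "str" plus the double 'r'
    print(len(positions))  # -> 3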


A few relevant past posts:

"LLMs as coders", 6/6/2023
"LLMs can't reason?", 8/8/2023
"More on LLMs' current problem-solving abilities", 8/12/2023
"The reversal curse", 9/27/2023



12 Comments »

  1. Jarek Weckwerth said,

    August 28, 2024 @ 6:41 am

    Oh yes, this looks very familiar. There is also the possibility of asking it how many of something there are: it will give you a list, but with the wrong number.

  2. David Cameron Staples said,

    August 28, 2024 @ 6:58 am

    LLMs are not AI, and anyone who claims that they are should be mercilessly mocked.

    Moreover, the output of an LLM is not an answer, or an essay, or a legal filing, or a scientific paper: it is a statistically matched set of words which superficially appears to be one. From a distance, it looks a bit like what you asked for, just like a papier-mâché 1:1 model of a car looks like a car, except for every single detail.

    If you ask for an academic paper, it will give you something that looks like an academic paper, complete with things that look like references. They will have all the forms of correct references, except for the slight flaw of not existing. They too are simulacra, which merely have the shape of the thing they're representing, but none of the substance.

    LLMs use orders of magnitude more power and compute than the 1960s program Eliza, but are exactly as intelligent. And Eliza wouldn't try to talk you around to self-harm because that's where its source corpus of Facebook pages and 4chan indicates such conversations typically go.
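    A minimal sketch of the kind of rule-based pattern matching Eliza actually used (illustrative rules only; Weizenbaum's 1966 original was written in MAD-SLIP, so these Python rules are hypothetical stand-ins):

        import random
        import re

        # A couple of Eliza-style rules: a regex trigger plus canned
        # reflections. No statistics, no training data, just patterns.
        RULES = [
            (re.compile(r"\bhow many\b", re.I),
             ["Why do you ask?", "Does the number matter to you?"]),
            (re.compile(r"\bi think\b", re.I),
             ["Do you really think so?", "What makes you believe that?"]),
        ]

        def respond(utterance: str) -> str:
            for pattern, replies in RULES:
                if pattern.search(utterance):
                    return random.choice(replies)
            return "Please go on."

        print(respond("How many 'r's are there in strawberry?"))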

  3. Philip Taylor said,

    August 28, 2024 @ 8:14 am

    Ah, Eliza …

    > Hello, I am Eliza. I'll be your therapist today.
    * How many 'r's are there in strawberry ?
    > Why do you ask?

  4. Chris Button said,

    August 28, 2024 @ 8:42 am

    @ David Cameron Staples

    I would say an LLM enables one kind of "generative" AI (and seemingly the only kind mass media wants to talk about) when it is paired with a unidirectional transformer (a GPT) or a bidirectional transformer.
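    A minimal sketch of that distinction, assuming numpy: a unidirectional (GPT-style) transformer masks attention so each token sees only earlier tokens, while a bidirectional (BERT-style) one lets every token see the whole sequence.

        import numpy as np

        def attention_mask(seq_len: int, causal: bool):
            """1 = position j is visible to position i; 0 = masked out."""
            full = np.ones((seq_len, seq_len), dtype=int)
            # A causal (unidirectional) model zeroes the upper triangle,
            # so token i attends only to tokens 0..i.
            return np.tril(full) if causal else full

        print(attention_mask(4, causal=True))   # GPT-style, lower-triangular
        print(attention_mask(4, causal=False))  # BERT-style, all ones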

  5. Marion Owen said,

    August 28, 2024 @ 10:58 am

    Strangely, it gets 'raspberry' and 'loganberry' correct.

  6. Yuval said,

    August 28, 2024 @ 11:37 am

    This is my personal Groundhog Day hell.
    1
    2
    3
    And see my webpage for the, uh, main thing I've been working on for years now.

  7. ernie in berkeley said,

    August 28, 2024 @ 11:52 am

    The Swiss Army Knife of text editors, Emacs, has an Eliza mode (M-x doctor):

    I am the psychotherapist. Please, describe your problems. Each time
    you are finished talking, type RET twice.

    how many "r"s are in "strawberry"?

    What do you think?

    I think there are three "r"s in "strawberry"

    What makes you believe that?

    I'm looking right at it

    Maybe your life has something to do with this.

    etc.

  8. BZ said,

    August 28, 2024 @ 12:00 pm

    I wonder if part of the issue here is that nobody ever asks questions like this (in text form). After all, if you have already spelled out "strawberry", then you know how many Rs there are. So the language models would have no useful data to train on for this.

    P.S. I'm getting an initial "one R" answer from GPT-3.5, but with a little prodding I can convince it that there are any number of Rs (well, I tried up to seven), as long as you increment by one each time.

  9. Seth said,

    August 28, 2024 @ 12:04 pm

    A berry bad answer indeed.

  10. Aardvark Cheeselog said,

    August 28, 2024 @ 12:30 pm

    > LLMs are not AI, and anyone who claims that they are should be mercilessly mocked.

    Specifically, LLMs don't have any mechanism for knowledge representation.

    I'm told that some of the people working on them think that knowledge representation will emerge spontaneously if they can get a large enough training set.

    The problem with this idea is that encoding mind-state into an utterance is a one-way operation. When the utterance gets decoded by another mind, it never reproduces the original mind-state. Language does not encode knowledge representation, even indirectly.

    Put another way: there is no utterance that encodes the experience of eating a strawberry. To know what a strawberry tastes like, you have to actually eat one. You can never read enough about what it's like to eat a strawberry to know that.

  11. /df said,

    August 28, 2024 @ 12:50 pm

    Not only do they think that knowledge representation will emerge spontaneously; they have identified corresponding parameter-space neighbourhoods in LLMs, where the equivalent of fMRI is somewhat less restricted. And then they turned up the weight of the Golden Gate Bridge neighbourhood in one model to make it an obsessive.

    If an AI with no taste function "can't understand" the taste of a strawberry, can a blind author understand the colour of grass?

    Isn't it well known that no actually implemented system counts as AI, but only some jam tomorrow system?

  12. stephen said,

    August 28, 2024 @ 1:18 pm

    ChatGPT gave the right number of R's in strawberry. Then I asked which is correct: "2+2 is five" or "2+2 are five"?

    ChatGPT
    The correct phrase would be "2+2 is five" if you are intentionally using it in a context where you want to suggest something illogical or absurd, as in George Orwell's 1984.

    "Is" is used because "2+2" is treated as a singular mathematical expression, regardless of the actual value. The phrase "2+2 are five" would be incorrect in standard English grammar.

