Eliza reborn?

Meta is inviting everyone to try out its BlenderBot3:

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it’s worked hard to “minimize the bots’ use of vulgar language, slurs, and culturally insensitive comments.” Users will have to opt in to have their data collected, and if so, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community.

So, following up on the earlier post "Conversations with GPT-3" (6/25/2022), here's BlenderBot3 chatting with a young person interested in philosophy:

Let's Chat. The AI will improve with your input and feedback.
Be creative. The AI can chat with you on lots of topics.

BlenderBot: Hello! How is your day going so far?

Human: Is thesis-antithesis-synthesis a correct summation of the Hegelian dialectic?

BlenderBot: I'm not sure, but dialectical materialism is a philosophy of science, history, and nature developed in europe.

Human: Did you know that it was Josef Stalin who coined the term dialectical materialism?

BlenderBot: I don't know much about that, sorry. What do you like to do to relax?

Human: Cook.

BlenderBot: Ok, that is good. Have you heard of dialectical materialism? Its a great way to relax

Wondering whether BlenderBot is being subtly snarky or just stupid, I asked a friend involved in the development of chatbots, who answered "Mostly just stupid I suspect :)".

Though BlenderBot is based on the latest AI technology, its combination of responsiveness to user input with the lack of an effective theory of mind reminds me of ELIZA, created in the 1960s by Joseph Weizenbaum, who "regarded the program as a method to show the superficiality of communication between man and machine".

For more, see "Ways to be more interesting in conversation", 11/21/2021.

14 Comments

  1. David L said,

    August 6, 2022 @ 1:38 pm

    Stalin was a pretty chill dude, it's well known.

  2. Philip Taylor said,

    August 6, 2022 @ 1:46 pm

    "Meta is inviting everyone to try out its BlenderBot3" — Well, not exactly 'everyone' :

    Sorry, BlenderBot is US-only at the moment.

  3. Philip Taylor said,

    August 6, 2022 @ 1:51 pm

    Afterthought : I wonder if any US-resident contributor to this forum might be willing to ask BlenderBot3 these two questions and report back the answers :
    1) "Did you know that Meta is inviting everyone to try out its BlenderBot3 ?"
    2) "What do you think Meta means by 'everyone' ?"

  4. Gregory Kusnick said,

    August 6, 2022 @ 2:17 pm

    From the BlenderBot FAQ:

    In research, we say that models like the one that powers this bot have "hallucinations", where the bot confidently says something that is not true.

    I get that "hallucination" is entrenched as the standard term of art for this, but when humans confidently say things that aren't true, it's called confabulation.

  5. Terry K. said,

    August 6, 2022 @ 2:19 pm

    @Philip Taylor

    Really? I may try it out. As long as I don't have to go anywhere this weekend, I'm in!

    and

    Wow! The whole world!? That's awesome. I've got nothing going on this weekend, so why not?

    I guess it doesn't know it's BlenderBot3.

    In trying it, it seems to have a sometimes intrusive pushiness in trying to engage the user in conversation, and yet when you try to engage it in a topic, it changes the subject instead of truly engaging.

    In one of its conversation openers, it says it can talk about anything. Yet I brought up a topic and it immediately brought up a different subject.

  6. Philip Taylor said,

    August 6, 2022 @ 2:33 pm

    Thank you Terry, much appreciated. It would seem, then, that it is not self-aware, and therefore cannot really lay claim to being sentient, never mind intelligent.

  7. Michael Watts said,

    August 6, 2022 @ 3:18 pm

    when humans confidently say things that aren't true, it's called confabulation.

    This is not accurate; that behavior is referred to as "confabulation" when the human doesn't believe the truth of its own utterances… and as "hallucination" when it does.

    But you're right that the wrong term is being applied; chatbots and language engines as they exist today don't have an internal model of reality that could distinguish hallucinating from lying.

  8. Gregory Kusnick said,

    August 6, 2022 @ 3:57 pm

    Wiktionary's sense 2 of "confabulation" is "A fabricated memory believed to be true." That's what I've always understood it to mean.

  9. robin rapport said,

    August 6, 2022 @ 4:20 pm

    I thought Blenderbot's response was Hilarious!
    So AI DOES have a sense of humor!

  10. John Swindle said,

    August 6, 2022 @ 5:02 pm

    If we were dealing with humans, we might say they were confabulating if we thought they had organic brain disease or brain damage; hallucinating or delusional if we thought they had a mental illness; having false memories if we thought they had undergone recovered memory therapy; hallucinating, again, if we thought they were under the influence of psychedelic drugs; deluding themselves if they subscribed to unfounded conspiracy theories; or lying if we thought they were deliberately deceiving us. I doubt that that exhausts the list.

  11. Wanda said,

    August 7, 2022 @ 11:17 am

    @John Swindle: Anyone can have false memories. Every time you remember something, it is an opportunity to change the memory. People's memories routinely change to make them more internally coherent or more in line with what others say.

  12. John Swindle said,

    August 7, 2022 @ 8:00 pm

    @Wanda: Yes, we all have false memories. At some point, though, we may notice that someone is having too many memories of things that couldn't have happened or are very unlikely to have happened. Then, for better or worse, we may start reaching for labels. If we're trying to diagnose and fix a machine instead of a person, the analogous symptoms are apparently all called "hallucinations." I have no particular problem with that.

  13. ~flow said,

    August 8, 2022 @ 12:36 am

    @Terry K: "In trying it, it seems to have a sometimes intrusive pushiness in trying to engage the user in conversation, and yet when you try to engage it in a topic, it changes the subject instead of truly engaging."

    That's a typical behavior of a somewhat spoiled kid or grown-up who'd be called an attention-grabber by others. In some cases you don't have to look any further than the parents to know where such irksome manners could originate.

    If that were an Amazon instead of a Facebook project, it would counteract any meaningful discussion with a list of topics that others are discussing instead of going much into the details of the effing topic at hand I'm interested in right here for heaven's sake.

  14. Jonathan Smith said,

    August 8, 2022 @ 10:42 pm

    Surprising development mostly because so staggeringly inferior to GPT-3 (also tryable by all)… this is indeed Elizaish.
    With GPT-3, it actually becomes important to remember (see other thread) that the set of syntactically cromulent statements of English interpretable in real-world terms by humans constitutes only a tiny subset of the set of syntactically cromulent statements of English… explore out into the latter for a reminder that the thing is after all only artificially intelligent…
