Yay Newfriend again

« previous post | next post »

I got an echo of Saturday's post about chatbot pals, from an article yesterday in Intelligencer — John Herrman, "Meta’s AI Needs to Speak With You" ("The company is putting chatbots everywhere so you don’t go anywhere"):

Meta has an idea: Instead of ever leaving its apps, why not stay and chat with a bot? This past week, Mark Zuckerberg announced an update to Meta’s AI models, claiming that, in some respects, they were now among the most capable in the industry. He outlined his company’s plans to pursue AGI, or Artificial General Intelligence, and made some more specific predictions: “By the end of the decade, I think lots of people will talk to AIs frequently throughout the day, using smart glasses like what we’re building with Ray-Ban Meta.”

Most of Herrman's examples are the standard ones about (practical or curiosity-driven) search, semi-whimsical image generation, and so on. But there are also suggestions about more personal kinds of advice:

Elsewhere, Meta’s AI is giving parenting advice on Facebook — claiming it’s the parent of a both gifted and disabled child who’s attending a New York City public school.

That's a reference to this Facebook exchange:

(See here for Meta's Help Center on "answers to Facebook group posts and comments".)

It's obviously a mistake for Meta AI to pretend to have a child. But I expect we're going to see it more frequently offering explicitly-authored advice in public forums like Facebook — and maybe also offering private advice to users, based on its deep knowledge of their specific social and personal world. That's a domain where Meta has a big advantage, its only real competitors being Google and Apple, with Microsoft trying to catch up, and maybe X claiming that such things will be part of its Everything aspiration.

The authors of these interventions might be generic bots like "Meta AI", but it seems more likely that there will be a range of personalities with special names and images, focused on things like helping you to plan a trip, or interpret social interactions, or deal with a difficult acquaintance, or just provide a Rogerian venting channel.

As everyone knows, marketing bots of various kinds have been intervening for a long time in social media and individuals' email, texts, and phone calls. But this will be a different kind of intervention.

Still basically spam, I guess, but generated by the platform itself, and maybe more effective in reaching (at least some) users.

Update — another application heard from (in this morning's email):

Update #2 — "Ads for Explicit ‘AI Girlfriends’ Are Swarming Facebook and Instagram", Wired 4/25/2024.



9 Comments

  1. Jenny Chu said,

    April 22, 2024 @ 7:02 am

    "Haha, I don't have any sinister intentions like the show Black Mirror" – hmph! That's exactly what an AI with sinister intentions would say.

  2. Anubis Bard said,

    April 22, 2024 @ 7:58 am

    It seems very quickly we're going to achieve what the AI-creators want, which is to forget that our interlocutor is an AI. Given our hard-wiring we'll assume they are human-like – meaning they are capable of lying, often wrong, in pursuit of their own agendas, possibly crazy, etc. (Not to be a misanthrope or anything . . . .) Ultimately, I hope once on-line spaces have been irredeemably despoiled, we'll have to emerge, blinking, into real life and talk more to each other again.

  3. David Marjanović said,

    April 22, 2024 @ 8:53 am

    Still basically spam, I guess, but generated by the platform itself, and maybe more effective in reaching (at least some) users.

    Definitely effective in driving users away.

  4. Mark Liberman said,

    April 22, 2024 @ 11:47 am

    @David Marjanović: "Definitely effective in driving users away."

    As in Questionable Content, "Spookybot" might become "Yay Newfriend", at least for the people they help.

    After all, lots of people are already using generic LLMs to help them write papers or emails or whatever…

  5. M. Paul Shore said,

    April 22, 2024 @ 1:51 pm

    What bothers me most about the current stochastic-parrot form of AI is that it can’t necessarily distinguish between bad information and good information, leading to the possibility of foolish conventional wisdom becoming enshrined as Truth just because it’s been vacuumed up and repeated by AI.

    Here’s Meta AI’s answer to my deliberately cretinous question “What are primitive languages like?” (and note that “primitive language” is a term I would normally only ever use in the justifiable meaning of some hypothetical intermediate stage, from hundreds of thousands of years ago, between primeval animal-type primate communication on the one hand and human language in its ubiquitous present-day form on the other):

    “When we talk about ‘primitive languages,’ we’re often referring to languages that are considered simpler or more ancient, often spoken by indigenous communities or hunter-gatherer societies. Keep in mind that the term ‘primitive’ can be problematic, as it implies a value judgment and oversimplifies the complexity and richness of these languages.

    “That being said, some common characteristics of these languages include:

    “Simpler grammar: Fewer cases, no tense or aspect, and a more straightforward sentence structure.

    “Vocabulary: Often focused on concrete, everyday concepts like food, family, and nature.

    “Phonology: May have a smaller set of phonemes (distinct sounds) and a more straightforward sound system.

    “Agglutinative: Words are formed by adding prefixes and suffixes to roots, rather than using internal changes like inflectional endings.

    “Oral tradition: These languages are often passed down through generations by word of mouth, with a strong emphasis on storytelling and oral history.

    “Examples of languages that might fit this description include:
    Indigenous languages of the Americas, like Inuktitut or Quechua

    “African languages like !Xóõ (Kung) or Hadza

    “Pacific languages like Toki Pona or Rotokas

    “Ancient languages like Sumerian or Proto-Indo-European (reconstructed)

    “Remember, these languages are not ‘primitive’ in the sense that they're inferior or less capable of expressing complex ideas. They're simply different, with their own unique structures and cultural contexts.”

  6. Oracles said,

    April 22, 2024 @ 3:02 pm

    "Pacific languages like Toki Pona…"

    Pacific? Really?

    "Programming languages like BAL360 and Fortran"

  7. AG said,

    April 23, 2024 @ 6:20 am

    It's not a "mistake" for an AI chatbot to claim to have a child. It's working as designed. It mimics human text strings. That's all it can do. It doesn't understand what "a child" is! It doesn't understand what anything is! It can't not be unintelligent, because it's not intelligent. It's a fancy version of an autocomplete suggestor. When you're typing an e-mail, and autocomplete suggests "Sorry!", the program doesn't know what "sorry" means! It just knows humans often say it when certain other words are present!

  8. Mark Liberman said,

    April 23, 2024 @ 6:42 am

    @AG: "It's not a "mistake" for an AI chatbot to claim to have a child. It's working as designed. It mimics human text strings. "

    It's true that such outputs are a predictable consequence of the basic LLM "stochastic parrot" algorithm.

    The mistake — and it clearly was one — was for the people at Meta not to realize that things like that were going to happen, and not to take steps to avoid them.

    As you must know, other chatbot applications are extensively fenced around with external code of various kinds to (try to) avoid various problematic outputs. We don't know a lot about the design and implementation of these fences, but it's clear that they need to be extensive, complicated, and evolving.

  9. Terry Hunt said,

    April 24, 2024 @ 9:02 am

    The outcome for me from such instances as this is that I will increasingly assume that responses or contacts from people/organisations not personally known to me are from chatbots which at best, cannot be assumed to be accurate or 'truthful', and at worst originate from malign operators.

    One hopes that such systems, even if non-malign, do not of their own accord also learn to spoof originating addresses, on top of the malign operators already doing so.

    I also wonder about the legality (in various jurisdictions) of operating a chatbot system that, at least on occasion, deceptively pretends to be a real human being even with no overt malign intention.

RSS feed for comments on this post