AI hallucinations


Tom Simonite, "AI has a hallucination problem that's proving tough to fix", Wired 3/9/2018:

Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.

Simonite's article is all about "adversarial attacks", where inputs are adjusted iteratively to hill-climb towards an impressively (or subversively) wrong result. But anyone who's been following the "Elephant semifics" topic on this blog knows that for Google's machine translation, at least, spectacular hallucinations can be triggered by shockingly simple inputs: random strings of vowels, the Vietnamese alphabet, repetitions of single hiragana characters, random Thai keyboard banging, etc.
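
For concreteness, here's a minimal sketch of that hill-climbing idea, in Python. Everything in it is a hypothetical stand-in: model is any black-box classifier that returns a vector of class probabilities, and the search is plain random hill-climbing, not any particular published attack.

    import numpy as np

    def hill_climb_attack(model, image, target_class, steps=1000, eps=0.01):
        # Black-box attack sketch: propose small random perturbations and
        # keep only those that raise the model's confidence in the wrong
        # (target) class -- i.e., hill-climb on the model's own output.
        x = image.copy()
        best = model(x)[target_class]
        for _ in range(steps):
            candidate = np.clip(x + np.random.uniform(-eps, eps, x.shape), 0.0, 1.0)
            score = model(candidate)[target_class]
            if score > best:
                x, best = candidate, score
        return x, best

Real attacks are cleverer about how they propose perturbations, but the loop above is the essential shape: tiny changes, accepted only when they push the model further toward the wrong answer.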

This particular quirk seems relatively harmless, and in any case would be very easy to defend against if anyone cared enough to bother. But it suggests a more general weakness of the underlying methods, which is that they lack common sense. Nonsensical inputs far outside their training set, rather than being recognized as nonsense, often generate (semi-)sensible hallucinated responses.
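
The "easy defense" could be as simple as a sanity check on the input before it ever reaches the model. A hypothetical sketch, with a made-up vocabulary set and thresholds chosen only for illustration:

    import re
    from collections import Counter

    def looks_like_nonsense(text, vocab, min_known=0.5, max_repeat=0.6):
        # Crude pre-filter: flag inputs that are mostly one repeated token
        # ("ba ba ba ba ...") or mostly out-of-vocabulary strings.
        tokens = re.findall(r"\w+", text.lower())
        if not tokens:
            return True
        if Counter(tokens).most_common(1)[0][1] / len(tokens) > max_repeat:
            return True
        known = sum(t in vocab for t in tokens)
        return known / len(tokens) < min_known

A filter like this would catch the repeated-hiragana and keyboard-banging cases, though not the subtler failures on ordinary text discussed in the comments below.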

In a world where such algorithms were responsible for managing large areas of individual and social life, more serious things could go wrong.

12 Comments

  1. DCBob said,

    March 10, 2018 @ 9:21 am

    "Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there." … Hey! I resemble that remark!

  2. Jason M said,

    March 10, 2018 @ 9:36 am

    Wonder what Alex Trebek could have made Watson do on Jeopardy! if he'd only known to deliver prompts in random strings of Vietnamese vowel sounds and such.

    Watson: "I'll take 'Former Presidents' for $500, Alex."

    Trebek: "ă â ê ô ơ ư".

    Watson: "Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you."

  3. John Roth said,

    March 10, 2018 @ 10:49 am

    Would that creepy laugh that Alexa is reported to do for no apparent reason be an example?

  4. Stephen Hart said,

    March 10, 2018 @ 11:54 am

    "But it suggests a more general weakness of the underlying methods, which is that they lack common sense."
    Paul Allen Wants to Teach Machines Common Sense

  5. Stephen Hart said,

    March 10, 2018 @ 11:55 am

    Oops. The URL failed to appear in my post. Maybe this will work.

    https://www.nytimes.com/2018/02/28/technology/paul-allen-ai-common-sense.html

  6. Eric said,

    March 10, 2018 @ 12:48 pm

    I choose to believe that google translate is purposefully generating band names. I mean, who wouldn't go see "The Surname of the Ninth Surname" or "The Two-Dimensional Bifurcation"?

  7. Alice said,

    March 10, 2018 @ 2:15 pm

    I work on Siri, and this is something that comes up a lot. We'll get reports of odd responses because the poor thing will hear some gibberish background noise and try her best to make SOME sense out of it. And we can't possibly train Siri to recognize every single potential garbage utterance as nonsense.

    Ultimately, the goal is that Siri should be as helpful as possible. Having her be over-responsive, and occasionally act strange when she picks up false positives, is preferable to risking the user's commands being ignored if they said something Siri didn't completely understand.

    I'm interested to hear what people think would be an acceptable, "common sense" solution to this problem.

    [(myl) What we expect from human beings, who also haven't been trained "to recognize every single potential garbage utterance as nonsense", is something along the lines of "Excuse me, what did you say?", "Huh?", "I'm sorry, that doesn't make any sense", or "Did you really just ask me to ___?"]
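
    A minimal sketch of that kind of fallback, with a hypothetical recognizer that returns its best hypothesis along with a confidence score:

        def respond(audio, recognize, act, threshold=0.6):
            # Don't act on a low-confidence guess; ask instead -- the
            # machine equivalent of a human's "Huh?"
            hypothesis, confidence = recognize(audio)
            if confidence < threshold:
                return "I'm sorry, I didn't catch that. Could you say it again?"
            return act(hypothesis)

    The hard part, of course, is getting a confidence score that is reliably low for garbage input.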

  8. AntC said,

    March 10, 2018 @ 3:32 pm

    "we can't possibly train Siri to recognize every single potential garbage utterance as nonsense."

    That's a straw man, um, bot. Our monkey brains can't recognise every garbage utterance. But we can readily recognise many/most: the sort of examples in 'Elephant semifics'. We can also usually recognise when a speaker makes a 'slip of the tongue' at least enough to common-sensically ask if that's what they really meant. Whereas the bots seem to have no concept of "that's strange, I'd better make sure".

    I'm particularly disappointed with Google translate: I was travelling recently and asking it to translate restaurant and hotel reviews (not abstract poetry, not technical material). 'Garbage utterance' was what it produced in English nearly all the time.

  9. Ex Tex said,

    March 10, 2018 @ 10:41 pm

    I think introducing voice recognition (Siri) is off-topic. That adds another layer of complication, plagued by its own insane complexity.

    The simplest answer is for GT simply to spell-check the input, and not attempt to translate anything not in its dictionary. In fact, unrecognized words (slang, specialized terms) are usually just left as is in the output. I don't get why GT would even attempt to translate random strings when it ignores other stuff that's not parseable.
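
    A minimal sketch of what that spell-check gate might look like; the translator, vocabulary, and threshold are all hypothetical stand-ins:

        def guarded_translate(text, translate, src_vocab, max_oov=0.3):
            # Spell-check the input first: if too many tokens fail the
            # dictionary lookup, return the text untouched rather than
            # letting the model hallucinate a translation.
            tokens = text.split()
            oov = [t for t in tokens if t.strip('.,;:!?').lower() not in src_vocab]
            if tokens and len(oov) / len(tokens) > max_oov:
                return text  # leave unparseable input as-is
            return translate(text)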

  10. James Wimberley said,

    March 12, 2018 @ 3:20 pm

    Eliza at least responded "tell me more about your mother".

  11. Anarcissie said,

    March 12, 2018 @ 4:08 pm

    What about semantic networks?

  12. ktschwarz said,

    March 14, 2018 @ 5:19 pm

    Hallucinations are not only from background noise and garbage input. Sometimes you can get weird translations from normal, grammatical input. See the comments to the next post for some bizarre Hungarian-English translations. The country list translation oddity is another example.

    Just a few of many examples:
    Hungarian: "60 dkg marhalábszár" (60 dkg beef shin); Google Translate: "60 dkg beetle bark"
    Hungarian: "A vizslák a kutyavilág arisztokratái." (Vizslas [dog breed] are the aristocrats of the dog world.) Google Translate: "Aquariums are aristocrats of the dog world."
    Finnish: "Viherpeipon koivet ovat vaaleanpunertavat." (Greenfinches' legs are pinkish.) Google Translate: "The birch trees in the green pepper are pinkish."

    It's actually making errors like a brain-damaged human here: it's obeying syntactic category, substituting nouns for nouns!
