AI hallucinations
Tom Simonite, "AI has a hallucination problem that's proving tough to fix", Wired 3/9/2018:
Tech companies are rushing to infuse everything with artificial intelligence, driven by big leaps in the power of machine learning software. But the deep-neural-network software fueling the excitement has a troubling weakness: Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.
Simonite's article is all about "adversarial attacks", where inputs are adjusted iteratively to hill-climb towards an impressively (or subversively) wrong result. But anyone who's been following the "Elephant semifics" topic on this blog knows that for Google's machine translation, at least, spectacular hallucinations can be triggered by shockingly simple inputs: random strings of vowels, the Vietnamese alphabet, repetitions of single hiragana characters, random Thai keyboard banging, etc.
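For readers curious what "adjusted iteratively" amounts to in practice, here is a minimal sketch of the standard projected-gradient-descent form of such an attack, assuming a PyTorch image classifier. The model, labels, and parameter values below are placeholders of my own, not anything from Simonite's article:

```python
# A minimal sketch of the iterative "hill-climbing" attack described above:
# repeatedly nudge an image in the direction that increases the classifier's
# error, keeping the total perturbation imperceptibly small (an L-infinity
# bound). Model, image, and label are assumed placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, image, true_label, eps=0.03, step=0.007, iters=10):
    """Return an adversarially perturbed copy of `image`.

    eps   -- maximum per-pixel perturbation (L-infinity ball)
    step  -- size of each gradient-sign step
    iters -- number of hill-climbing iterations
    """
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), true_label)
        grad = torch.autograd.grad(loss, adv)[0]
        # Step in the direction that increases the loss, then project
        # the result back into the allowed perturbation ball.
        adv = adv.detach() + step * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(0, 1)  # keep pixel values valid
    return adv
```

Each step moves the image a tiny amount in whatever direction most increases the classifier's error; after a handful of iterations the picture looks unchanged to a human, but the cumulative perturbation can flip the system's output, which is the "subtle changes" phenomenon the Wired piece describes.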