That howling void of thoughtlessness beneath


From Charles Stross, Neptune's Brood. It's 7000 AD, and Krina Alizond-114 has this to say about a not-very-helpful piece of interactive software:

[T]hese things bore only a thin veneer of intelligence: Once you crack the ice and tumble into the howling void of thoughtlessness beneath, the illusion ceases to be comforting and becomes a major source of irritation.

The Siri vs. Siri conversations on YouTube illustrate this idea nicely, e.g.

Siri1: Hi, Dylan.
Siri2: Sorry, I don't understand "I Dylan".
Siri1: It's OK. Really.
Siri2: I don't understand "it's okay really".
Siri1: I'm sorry, I'll try harder.
Siri2: I don't understand "I'll try harder".
Siri1: I'm sorry, I'll try harder.
Siri2: I don't know what you mean by "Eight I'll try harder".
Siri1: Sorry, I didn't get that.
Siri2: Sorry, I missed that.
Siri1: It's all good.
Siri2: That's good to know.
Siri1: Yes, everything is just fine.
Siri2: That's what I figured.
Siri1: Dylan, I don't understand "that's what I figured".
Siri2: I'm sorry, I'll try harder.
Siri1: It's no problem.
Siri2: I don't understand "it's no problem".
Siri1: Unfortunately, Dylan, I am as yet imperfect.
Siri2: Sorry, I don't understand "Unfortunately doing I am at Yegan perfect".
Siri1: It's OK.
Siri2: I'm not sure what you said there.
Siri1: Dylan, I don't understand "I'm not sure what you said there".
Siri2: I found 13 places matching "Dylan" … 6 of them are not far from you.
Siri1: Dylan, I don't understand "I found 13 places matching Dylan six of them are not far from you".
Siri2: I don't understand "Keller I don't understand by".
Siri1: I don't know what you mean by "Understand why".
Siri2: OK, it's no big deal that you don't know.
Siri1: I'm just trying to help you.
Siri2: I don't know what you mean by "I'm just trying to help you".
Siri1: No problem.
Siri2: Uh oh. Help doesn't seem to have an address. Tap the name below if you'd like to add one.
Siri1: I'm not sure what you said there, Dylan.
Siri2: I don't even exist… how can you hate me?

The algorithms may have been improved since 2011, but the howling void is still there.

Ironically, Krina Alizond-114 herself "is metahuman, descended from the robots that once served humanity", as the publisher's blurb puts it. And of course the howling void of bureaucratic regulations can sometimes make AI attempts at interaction seem intelligent by contrast.

But still, it's noticeable that in examples of this sort, Siri's imitation of speech recognition works much better than her imitation of conversational interaction. It's not clear whether this is because half-way intelligent interaction is harder than speech recognition, or because people have recently worked harder on speech recognition, or (most likely in my opinion) because this little joke stresses Siri's communicative capabilities and doesn't really stress her speech recognition capabilities.



18 Comments

  1. Mark P said,

    July 10, 2013 @ 9:18 am

    The veneer is thicker than it used to be, but still thin. On the other hand, given 5000 years for further development, I might expect an object like a nut or a bolt to have about the same level of conversational interaction that Siri has.

  2. KeithB said,

    July 10, 2013 @ 9:24 am

    Things haven't improved much. In _The Naked Computer_ by Rochester and Gantz, a 1984 book of computer trivia, they mention a conversation between ELIZA and PARRY (a paranoid AI). Unfortunately, they don't record it.

  3. mike said,

    July 10, 2013 @ 9:50 am

    @KeithB, I think what you just said was that in spite of a whole 30 years' worth of technological innovation, computer people STILL haven't developed the capability to have a machine converse with a human in a natural way.

    Slackers.

  4. Dick Margulis said,

    July 10, 2013 @ 10:27 am

    Slackers, there's a computer people still in Seattle. Are you in Seattle? Type or say your address, and I will give you directions to reach the computer people still so that you can arrive there in a whole 30 years human capability innovation spite.

  5. Keith M Ellis said,

    July 10, 2013 @ 10:42 am

    If anyone is wondering, the non-AI of the interactive software and the AI of the protagonist are not a contradiction — it's mentioned just before that section that such devices are explicitly engineered to be significantly below sentience so as to avoid issues of rights and such. Also, the sentience of these descended-from-robots metahumans is, as far as I can tell, quasi-organic, more like grown neural networks than a reductionist designed computing system. There are some clues about this with regard to the difficulty of integrating several partial backups of a character that was killed — if they fully understood consciousness in a reductionist fashion, there wouldn't be this difficulty.

    So my inference is that such software devices wouldn't be sentient, anyway, as it's a different paradigm. That doesn't quite work with the "avoid sentience because of human rights issues", but I don't think Stross really went into detail when he thought this out. I'm only a third of the way into the book, though, so maybe he explains more later.

  6. Nick Lamb said,

    July 10, 2013 @ 10:56 am

    KeithB, I don't think Siri's creators were interested in improving on the state of the art of AI, though that does seem to be how it's interpreted by the layman. Instead Siri is intended to make use of the state of the art in untrained speech recognition, something ELIZA didn't attempt.

    AI is rarely the right solution, not only because it's fantastically difficult but also because it would inevitably be unpredictable, and that hasn't proved to be a great property in people, let alone in machines. In fiction, Peter Watts' experimental AI drone Azrael "fears it might not be enough" to suicide into its refuelling base. The reader sympathises: Azrael is acting on a newfound perspective that has led fighting men to similar thoughts before. But this isn't what we want from automation.

    For example, when we automate a railway, we don't create an AI that can learn each route, recognise the many visual and audible signals used, and understand verbal instructions from a signaller. We simplify. If the train should slow down for a curve, we don't try to teach it to recognise a curve, figure out the correct speed and adjust the controls; we just set an appropriate maximum target speed for that section of track, and the software mindlessly obeys. For better, or very occasionally worse.
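
    A minimal sketch of that "simplify, then obey" pattern, in Python. The section names and speed limits below are invented for illustration, not taken from any real train-control system:

        # Hypothetical per-section speed limits in km/h (invented values).
        SECTION_LIMITS = {
            "straight-01": 120,
            "curve-02": 60,
            "station-approach-03": 40,
        }

        def permitted_speed(section: str, requested: float) -> float:
            """Clamp the requested speed to the section's fixed limit.

            The controller never reasons about *why* a limit exists
            (curve, gradient, worksite); it just looks the number up
            and mindlessly obeys.
            """
            limit = SECTION_LIMITS.get(section, 0.0)  # unknown section: stop
            return min(requested, limit)

        for section in ("straight-01", "curve-02", "station-approach-03"):
            print(section, "->", permitted_speed(section, requested=110.0))

    No perception, no learning, no understanding of curves — just a table lookup and a clamp.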

  7. Joe said,

    July 10, 2013 @ 11:12 am

    "Once you crack the ice and tumble into the howling void of thoughtlessness beneath, the illusion ceases to be comforting and becomes a major source of irritation."

    Under the ice, it seems, lies the uncanny valley.

  8. peter said,

    July 10, 2013 @ 1:23 pm

    mike said (July 10, 2013 @ 9:50 am)

    "@KeithB, I think what you just said was that in spite of a whole 30 years' worth of technological innovation, computer people STILL haven't developed the capability to have a machine converse with a human in a natural way."

    Of course, there's also the possibility that the computer people have in fact been very successful these last 30 years. In other words, the weird, mis-aligned, off-kilter machine-to-machine and machine-to-human conversations we witness here may be PRECISELY how computer people speak to other humans.

  9. JS said,

    July 10, 2013 @ 3:41 pm

    While we know that Siri doesn't represent AI innovation, it's still useful to have, in these conversations, a reminder that she has no idea what anything she says means…

  10. Wonks Anonymous said,

    July 10, 2013 @ 3:49 pm

    ELIZA vs PARRY is here: http://www.faqs.org/rfcs/rfc439.html

  11. Eric P Smith said,

    July 10, 2013 @ 5:04 pm

    "The howling void of thoughtlessness beneath" – As this is Language Log and not AI log, I just thought I'd voice my admiration of the iambic pentameter.

  12. KeithB said,

    July 10, 2013 @ 5:45 pm

    It appears to me that Siri is mainly having problems with understanding. It could very well be that the compression of Siri's speech output was designed to be comprehended by humans, and that when fed back into Siri's speech recognition it is missing parts used to comprehend natural speech.

  13. KeithB said,

    July 10, 2013 @ 5:46 pm

    Sorry, posted too soon!
    It would be interesting to repeat the experiment with a human re-speaking the Siri responses.

  14. Aristotle Pagaltzis said,

    July 10, 2013 @ 7:16 pm

    To everyone at all interested in this I also recommend the following book, authored by a computer program:
    http://www.goodreads.com/book/show/2123898.Policeman_s_Beard_is_Half_Constructed

  15. Gene Callahan said,

    July 10, 2013 @ 8:27 pm

    "It appears to me that Siri is mainly having problems with understanding."

    Well, yes, in that it has none, that would be the case.

  16. Q. Pheevr said,

    July 10, 2013 @ 10:38 pm

    It seems to me that one reason any conversation between two instances of Siri is doomed to sound unnatural is that it creates a situation in which it's impossible to follow Grice's Cooperative Principle:

    Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.

    A Siri–Siri conversation has no agreed-upon purpose. Each Siri is programmed to try to respond appropriately to an interlocutor's questions or requests; neither Siri is programmed to make requests or ask questions. It's like watching two people trying to play a game of ping pong when they're both standing on the same side of the table—the ball just keeps bouncing off into oblivion on the other side.
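
    One way to make that structural point concrete: each Siri implements only a response function, never an initiating move, so composing two of them yields an exchange with no purpose for either to serve. A toy sketch in Python, with invented fallback strings standing in for Siri's actual canned replies:

        import random

        random.seed(0)  # make the toy run reproducible

        # Invented deflections, not Siri's real response strings.
        FALLBACKS = [
            'Sorry, I don\'t understand "{u}".',
            "I'm not sure what you said there.",
            "Sorry, I missed that.",
        ]

        def respond(heard: str) -> str:
            """A responder-only agent: it reacts to what it just heard,
            but has no goal of its own to push the exchange toward."""
            return random.choice(FALLBACKS).format(u=heard)

        utterance = "Hi, Dylan."
        for turn in range(4):
            speaker = "Siri1" if turn % 2 == 0 else "Siri2"
            utterance = respond(utterance)
            print(f"{speaker}: {utterance}")

    However long the loop runs, no turn ever introduces a request the other agent could satisfy — the ball keeps bouncing off the empty side of the table.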

  17. David J. Littleboy said,

    July 12, 2013 @ 11:50 pm

    Gene C. has it exactly right: the AI types gave up trying to understand years ago, and have been looking for cheap tricks to give the impression/effect of understanding without doing the work. ELIZA (one of the cheapest of cheap tricks ever) was fun to talk to, and people baring their souls to ELIZA freaked out its author (Joe Weizenbaum) something fierce, and he became a vehement critic of AI. In my opinion, he was quite wrong at the time, since Winograd and Minsky and Schank and some of their students really were trying to figure out what understanding meant and was, but nowadays, he's become completely right.

  18. Paul said,

    July 16, 2013 @ 8:47 am

    The purpose of AI is not so much to replicate intelligence as to demonstrate that more and more activities don't require intelligence.
