I can't do that, Dilbert


Dilbert for 12/17/2011 suggests that we may be in more danger from smartphone apps than from autonomous warbots — not Arnold Schwarzenegger as the Terminator, but Rowan Atkinson as the Administrator:

The previous strip merely suggested a threat to the jobs of administrative assistants:

But middle managers are probably easier to replace.

And it's worth pointing out that Siri was spun off from a 2003-2008 Defense Advanced Research Projects Agency effort called CALO, for "Cognitive Assistant that Learns and Organizes".


  1. Gene Callahan said,

    December 20, 2011 @ 2:34 pm

    "But middle managers are probably easier to replace."

    Then who will the administrative assistants be assisting?

    [(myl) The AIs that replace the middle managers, of course.]

  2. Joe said,

    December 20, 2011 @ 5:28 pm

    I get the same feeling with my Garmin GPS. Sometimes, while using the route prescribed by the Garmin, I'll decide to make an alternative turn or route adjustment: I can almost hear an exasperated sigh when it says, "Recalculating…"

  3. suntzuanime said,

    December 20, 2011 @ 6:21 pm

    Understanding spoken language is spooky? Maybe *that's* what I've been doing wrong.

  4. Aaron Toivo said,

    December 20, 2011 @ 6:34 pm

    For a nonhuman device to understand spoken language? I don't know about you, but for me that lands smack into the uncanny valley. I want nothing to do with it. If a device or phone prompt requires me to use my voice I stop using it or hang up.

  5. Stephen Nicholson said,

    December 20, 2011 @ 7:09 pm

    Interesting, I've never heard of "uncanny valley" being used to describe a user interface. Also, I think of the uncanny valley as a response to visual stimuli, not aural ones.

  6. Emily said,

    December 20, 2011 @ 9:42 pm

    "So help me Jobs" was a nice touch.

    @Stephen Nicholson: I find that speech synthesis and autotune can produce an auditory uncanny valley effect.

  7. MikeA said,

    December 21, 2011 @ 12:03 pm

    For a while I've been wondering if we are approaching success at Turing's imitation game from both ends: increased machine capabilities (Siri, Watson) meet lowered human expectations (dealing with scripted customer service representatives).

  8. Keith M Ellis said,

    December 21, 2011 @ 12:44 pm

    "Also, I think of uncanny valley being a response to visual stimuli, not aural."

    Well, yeah. And I think you know what he meant, but it's worth exploring.

    What's happening in the uncanny valley is that our theory of (human) mind is being engaged while, simultaneously, we are still getting cues that this is very much not a (human) mind. IMO, that creates a cognitive dissonance that sits just below conscious processing; we are aware of it through its effects as it filters upward, and so the whole thing is experienced as a kind of gestalt of not-quite-rightness.

    That analysis doesn't depend upon any specific channels of sensory input. Assuming it's broadly correct, then people could experience the same sort of thing in contexts other than computer generated imagery (and robots, presumably).

    Really, this gets to the heart of some issues in linguistics, doesn't it? We're simply not accustomed to interacting with non-living things via natural language. In the past, such interaction has been so unreliable that we have always been aware that the machine isn't really responding to our natural language in any way that we understand language; it's obviously a sort of elaborate trick. At best.

    With contemporary technology, the evidence of trickery has become less and less apparent, to the point of complete invisibility in some cases. Then, especially when we're not being self-aware about the interaction, it can seem like we're having a sort of conversation with a machine. Conversations require a theory of mind, so we're most likely deploying one when we speak to the machine. But it's a machine; it has no mind. (I'm not saying that must always be so, just that it is so at present.) We are aware of this; it's hard to ignore that this is a machine, and so we (or some of us) experience that special kind of cognitive dissonance.

  9. nick said,

    December 21, 2011 @ 3:10 pm

    The page on CALO rather oddly gives the *genitive* calonis of the Latin noun calo.

    There's an interesting origin for this word: according to Lewis & Short's dictionary it is borrowed from the Greek κᾶλον meaning 'timber' or 'log', since originally the calo's function was carrying such logs – if we believe the ancient etymologists, at least.

  10. Simon Spero said,

    December 25, 2011 @ 4:02 pm

    The DARPA project was PAL; the SRI project for PAL was CALO; there was also a CMU project called RADAR, which ended up being managed by SRI after the first year.
