Unfair Turing Test handicaps


Today's PhD Comics:

As in the recently celebrated case of an alleged 13-year-old Ukrainian, there are circumstances in which the humanity of correspondents may be somewhat obscured.

Update — An interactive professorial chatbot, based on "Actual Responses from Real Professors".



15 Comments

  1. David Denison said,

    June 12, 2014 @ 8:15 am

    http://www.theguardian.com/science/brain-flapping/2014/jun/09/13-year-old-boy-passes-the-turing-test-spoof

  2. Bill Benzon said,

    June 12, 2014 @ 9:15 am

    FWIW, Kurzweil doesn't think that this chatbot has passed the test:

    http://www.kurzweilai.net/response-by-ray-kurzweil-to-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test

    Though I'm skeptical about Kurzweil's views on AI, I think he's right about this one. In his blog post he talks about the bet he made with Mitch Kapor that the Turing Test will be passed by 2029. He noted that they had to spend a great deal of time negotiating the specific terms of the test, which is itself rather vague.

    What I'm wondering is whether or not the Turing Test is, in fact, just another specialized domain for AI. Given that the conversation is, in principle, open-ended, one might not think of it as being specialized. But I'm not so sure. After all, no human can talk well on every topic, so we shouldn't expect it of a computer either. And people can get pretty strange in conversation, so strangeness – whatever that is – wouldn't be a surefire indicator either.

    But what would be a surefire indicator? Anything?

    No, I'm beginning to think that Turing Testing is a specialized performance domain. The specialization may be rather different from playing Jeopardy, but it's still a specialization. If so, there will come a time when computer performance is so good that the contest holds little further interest.

    [(myl) Surely this particular episode is preposterous and not worth taking seriously. We don't need Ray Kurzweil to tell us that.

    For those who are interested in the more serious issues involved, it's hard to do better than Stuart Shieber's 2004 edited volume, "The Turing Test: Verbal Behavior as the Hallmark of Intelligence".]

  3. Jim said,

    June 12, 2014 @ 3:51 pm

    See also: https://xkcd.com/329/

  4. Carl Offner said,

    June 12, 2014 @ 4:12 pm

    And also see http://www.scottaaronson.com/blog/?p=1858 — Scott Aaronson's famous and remarkably well-written blog.

  5. Adrian Morgan said,

    June 12, 2014 @ 8:18 pm

    This is what transpired the other day when I tried conversing with the online version of the chatbot. I wasn't impressed.

    ME: Should I tweet about you?

    BOT: Oh, if I tell you about myself, I'll definitely be dishonest:-) Well, I am a son of Rockefeller, I've arrived here in the Limo… If I'm not mistaken – you still didn't tell me where you live. OR it's a secret?:-)

    ME: We've never spoken before.

    BOT: You are so negative just to annoy me, I know! Some another topic?

  6. Adam Funk said,

    June 13, 2014 @ 4:06 am

    Yorick Wilks's piece in the Guardian was interesting. The last paragraph is pretty funny for anyone who's heard his catchphrase at conferences ("We did this in the 60s.").

  7. un malpaso said,

    June 13, 2014 @ 3:06 pm

    I work outside of academia and I can attest to the fact that many of my clients/coworkers would not pass the Turing test. (Especially as deadlines approach and vital information becomes more and more important to elicit from them.)

  8. maidhc said,

    June 13, 2014 @ 4:54 pm

    The comics weigh in on the discussion:
    http://www.gocomics.com/wumo/2014/06/13

  9. Ray Dillinger said,

    June 13, 2014 @ 10:52 pm

    As someone who has written chat bots professionally, I have to state that it is ridiculously easy to fool human judges.

    It's not that the "intelligence" is false, not really. But it has to be understood as what it is; it's the kind of intelligence wasps and bees have that prompts them to particular responses when they receive particular inputs. We (humans) perceive these elicited responses as evidence of human-style intelligence because both the stimulus and the response look like language, and we have never seen wasps or bees that recognize linguistic stimuli and produce linguistic responses.

    But they are not language. The words in the stimulus, like the words in the response, mean about as much to the little organism accepting and producing them as the colors in front of a bee's compound eye and the particular angle at which it holds the joints of its legs.

    As I was explaining to my dear wife last night, we are now making chatbots smarter than bees. That's progress; ten years ago we were making systems smarter than clams, and the famous ELIZA program was barely smarter than grass. IBM's "Watson" and related efforts such as Google's translation program, which acquire and apply knowledge by statistical exposure, are now nearly as smart as mice, and that's a huge leap in capabilities.
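
    By way of illustration, here is a minimal sketch, in Python, of the kind of fixed stimulus-response mapping I mean – roughly ELIZA-class. The patterns and canned replies are invented for illustration; they are not taken from ELIZA or from the Goostman bot.

        import random
        import re

        # Illustrative ELIZA-style rules: each regex is a "stimulus", each list of
        # strings a set of canned "responses". The program attaches no meaning to
        # the words it matches or emits.
        RULES = [
            (re.compile(r"\bmy (\w+)\b", re.I),
             ["Tell me more about your {0}.", "Why do you mention your {0}?"]),
            (re.compile(r"\byou\b", re.I),
             ["We were discussing you, not me."]),
            (re.compile(r"\b(hello|hi)\b", re.I),
             ["Hello. What would you like to talk about?"]),
        ]
        FALLBACKS = ["Please go on.", "I see.", "Some another topic?"]

        def reply(utterance: str) -> str:
            """Return a canned response for the first matching rule, or a fallback."""
            for pattern, responses in RULES:
                match = pattern.search(utterance)
                if match:
                    return random.choice(responses).format(*match.groups())
            return random.choice(FALLBACKS)

        print(reply("My deadline is tomorrow"))    # e.g. "Tell me more about your deadline."
        print(reply("Should I tweet about you?"))  # "We were discussing you, not me."

    Nothing in that loop knows what a deadline is; it only knows that a particular string pattern licenses a particular stock reply.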

    But as yet they are primarily power tools applied to linguistic problems, and not creatures whose consciousness develops concepts and symbols that language is used to communicate, interpret or express. The fact that we have framed their inputs and responses in language-dependent terms leads us to classify them in a strange way, as though they deserve more or different credit for their intelligence than the credit that applies to animals whose perceptions are chemical or visual and whose responses are muscular.

  10. Bill Benzon said,

    June 14, 2014 @ 5:54 am

    @Ray Dillinger: Have you seen this talk by Dave Ferrucci, who headed the Watson team?

    https://www.youtube.com/watch?feature=player_embedded&v=F_0hpnLdNjk

    Toward the end he speculates about the way forward. He suggests that Watson-class technology may be just powerful enough to support a man-machine dialog through which the machine would be able to "learn" by constructing "classical" knowledge representation schemas based on human responses.

  11. Ray Dillinger said,

    June 14, 2014 @ 11:18 am

    Indeed I have. And that's exciting stuff.

    Constructing that "classical" knowledge representation would be a task very much like constructing the set of symbols that animals think with – and, by extension, that humans use language to express and interpret. Of course, it will also reveal the limitations of our classical knowledge representation schemas – or the degree to which they need supplementation by para-symbolic or statistical knowledge.

    Once a system has world knowledge that's expressed symbolically, it becomes possible for generalization to happen; for example, a mouse has a "Food" symbol somewhere in its mental makeup and uses it in reasoning about the world – and thus can generalize and conclude that the same strategies it uses to get access to kibble can be used to get access to crackers. The ability to independently construct such a symbol set amounts to mammalian-style learning.

    That's the kind of thing that current chatbots cannot do; just as bees acquire and communicate the range and size of a pollen source, modern chatbots can acquire specific information like the interlocutor's name, account number, order, desired delivery date, etc., and reliably relay such things back to the "Hive" or database. But each of those symbols has to be defined for it before any use of it can take place, and each instance or generalization of those symbols has to be added by the intervention of a programmer.
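
    To make that concrete, here is a rough Python sketch of programmer-defined slot filling; the slot names and phrasings are hypothetical, not from any deployed system. The point is that every slot, and every phrasing that fills it, is enumerated in advance, and nothing here lets the program invent a new slot on its own.

        import re

        # Programmer-defined "symbols": the bot can fill exactly these slots and no
        # others. Handling a new kind of information means a human adding another
        # entry here; the system cannot generalize to it by itself.
        SLOT_PATTERNS = {
            "customer_name":  re.compile(r"my name is (?P<value>[A-Za-z ]+)", re.I),
            "account_number": re.compile(r"account (?:number )?(?P<value>\d{6,})", re.I),
            "delivery_date":  re.compile(r"deliver(?:y)? (?:on|by) (?P<value>[\w ]+)", re.I),
        }

        def extract_slots(utterance: str) -> dict:
            """Fill whichever predefined slots appear, for relay back to the 'hive'."""
            filled = {}
            for slot, pattern in SLOT_PATTERNS.items():
                match = pattern.search(utterance)
                if match:
                    filled[slot] = match.group("value").strip()
            return filled

        print(extract_slots("My name is Ada Lovelace, account 12345678, delivery by Friday"))
        # {'customer_name': 'Ada Lovelace', 'account_number': '12345678', 'delivery_date': 'Friday'}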

    Oh. Another system almost as smart as a mouse is the Google self-driving car. We don't consider it in the same way because its inputs and responses are not given in linguistic form – but it's dealing with the same kind of real-world fuzzy-set knowledge that's integrated over a lot of statistics, it's doing it in real time, and it's doing a darn good job of it.

  12. Ray Dillinger said,

    June 14, 2014 @ 11:26 am

    Actually, as I think about it, the self-driving car is probably closer to the level of a skink or a gecko than a mouse. "Mouse level" or mammalian-style learning is what the Watson researchers are going for with a system being able to construct its own symbol set. The ability to refine programmed knowledge using statistics and experience is more a reptilian-level achievement.

  13. David J. Littleboy said,

    June 16, 2014 @ 2:22 am

    I wonder how many people who chatter about the Turing test have actually read the paper? My impression is that most people have neither read nor thought about what Turing was trying to say in that paper.

  14. Dan H said,

    June 19, 2014 @ 4:57 pm

    "I wonder how many people who chatter about the Turing test have actually read the paper? My impression is that most people have neither read nor thought about what Turing was trying to say in that paper."

    To be fair, I think you could make a reasonable case that "Turing Test" has a (perhaps rather ill-defined) meaning of its own, independent of what Turing originally posited. Indeed, the *original* Turing test as I understand it (and for what it's worth I haven't read the paper either; I'm working off Wikipedia) seems to be a rather low bar.

  15. Francisco said,

    June 26, 2014 @ 11:22 am

    When I read Turing's paper as an adolescent I was shocked by the vagueness of his proposal. Later I realised that he was only proposing a kind of thought experiment, intended to stress that artificial intelligence was not a matter of building a conscious 'thinking machine' but rather a simple matter of measurable performance.
