Could Watson parse a snowclone?


Today on The Atlantic I break down Watson's big win over the humans in the Jeopardy!/IBM challenge. (See previous Language Log coverage here and here.) I was particularly struck by the snowclone that Ken Jennings left on his Final Jeopardy response card last night: "I, for one, welcome our new computer overlords." I use that offhand comment as a jumping-off point to dismantle some of the hype about Watson's purported ability to "understand" natural language.

An excerpt:

If you are a fan of "The Simpsons," you'll be able to identify [Ken's joke] as a riff on a line from the 1994 episode, "Deep Space Homer," wherein clueless news anchor Kent Brockman is briefly under the mistaken impression that a "master race of giant space ants" is about to take over Earth. "I, for one, welcome our new insect overlords," Brockman says, sucking up to the new bosses. "I'd like to remind them that as a trusted TV personality, I can be helpful in rounding up others to toil in their underground sugar caves."

Even if you're not intimately familiar with that episode (and you really should be), you might have come across the "Overlord Meme," which uses Brockman's line as a template to make a sarcastic statement of submission: "I, for one, welcome our (new) ___ overlord(s)." Over on Language Log, where I'm a contributor, we'd call this kind of phrasal template a "snowclone," and that one's been on our radar since 2004. So it's a repurposed pop-culture reference wrapped in several layers of irony.

But what would Watson make of this smart-alecky remark? The question-answering algorithms that IBM developed to allow Watson to compete on Jeopardy! might lead it to conjecture that it has something to do with "The Simpsons" — since the full text of Wikipedia is among its 15 terabytes of reference data, and the Kent Brockman page explains the Overlord Meme. After all, Watson's mechanical thumb had beaten Ken and Brad's real ones to the buzzer on a "Simpsons" clue earlier in the game (identifying the show as the home of Itchy and Scratchy). But beyond its Simpsonian pedigree, this complex use of language would be entirely opaque to Watson. Humans, on the other hand, have no problem identifying how such a snowclone works, appreciating its humorous resonances, and constructing new variations on the theme.

Read the rest here. For a recap of the tournament, check out my Word Routes column on the Visual Thesaurus here. And listen here for Stephen Baker and me talking about Watson on WNYC's "The Brian Lehrer Show."



32 Comments

  1. Spell Me Jeff said,

    February 17, 2011 @ 12:33 pm

    Certainly overhyped, and nowhere close to winning Alan Turing's imagined game, but still impressive as baby steps go. I can imagine it, say, making a preliminary medical diagnosis. There's value in that.

  2. Hyman Rosen said,

    February 17, 2011 @ 12:48 pm

    Do you know for a fact that Watson could not parse a snowclone? There are presumably many instances of the overlord snowclone in his database, possibly enough that he could "get the joke". It is pointless to use scare quotes around Watson's ability to understand language. We should evaluate his understanding based not on presumptions about how his cognition works but on the results of that cognition; he understands language if he responds to questions with appropriate (even if not correct) answers. You sound like you're falling prey to the fallacy of the Chinese Room.

  3. Stephen Nicholson said,

    February 17, 2011 @ 12:56 pm

    One thing about Watson that is helpful is the ranking of the top 3 answers. Originally a debugging tool, it has the ability to narrow inquiries to a few choices a human being may not have thought of. I noticed that a lot of the time when it didn't buzz in, or got the answer wrong, the second or third choice was the correct answer.

    A large disparity between the top choice and the 2nd-ranked choice suggests that Watson is right. If the choices are ranked more closely, though, that suggests the right answer might merely be somewhere among the three. Since wrong answers have a tendency to be "obviously" wrong, investigating all three shouldn't be a problem for a human.
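    A toy Python sketch of the gap heuristic I have in mind is below; the candidate answers, scores, and thresholds are all invented for illustration and have nothing to do with IBM's actual buzz-in logic:

        def should_buzz(ranked, gap_threshold=0.25, floor=0.5):
            """ranked: list of (answer, confidence) pairs, sorted best-first."""
            best = ranked[0][1]
            runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
            # A wide gap between the top two choices suggests the top answer
            # is trustworthy; a narrow gap suggests the right answer may
            # merely be somewhere in the top three.
            return best >= floor and (best - runner_up) >= gap_threshold

        candidates = [("The Simpsons", 0.92), ("Futurama", 0.31), ("Family Guy", 0.12)]
        if should_buzz(candidates):
            print("Buzz in with:", candidates[0][0])
        else:
            print("Stay silent; plausible answers:", [a for a, _ in candidates[:3]])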

  4. MattF said,

    February 17, 2011 @ 2:08 pm

    I wonder if Watson could identify the "overlords" snowclone as humor – even though it doesn't have a sense of humor… whatever that is.

  5. Spell Me Jeff said,

    February 17, 2011 @ 2:34 pm

    I suspect BZ was using the scare quotes because the word "understand" originates in the hype.

    Hyman Rosen is certainly correct to argue that we should examine the results. Adducing Searle is apt. But perhaps a more appropriate thinker on the subject is Alan Turing, whom I mentioned above, in his seminal 1950 paper "Computing Machinery and Intelligence."

    At first, Turing poses the strawman question “Can machines think?” He quickly explains that such a question is meaningless and spends much of the paper describing the imitation game. In such a game, like the Chinese Room, a computer that is hidden from an interrogator must, via teletype, engage in conversation sufficient to convince the interrogator that it is in fact human. Turing's conclusion: “'Can machines think?' should be replaced by 'Are there imaginable digital computers which would do well in the imitation game?'”

    Turing then expands the question into this:

    “Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?”

    Early science fiction may have had little clue about the direction computing would take, but 61 years ago, Turing was right on the money.

  6. Chandra said,

    February 17, 2011 @ 2:53 pm

    I suppose a more elementary question than whether Watson can parse the "overlord" snowclone is whether Watson can recognize sarcasm.

    (…I just noticed that I used the word "elementary" where I would normally use a word like "fundamental", and am amused by the fact that I was probably influenced by the name "Watson". [/offtopic])

  7. The Ridger said,

    February 17, 2011 @ 2:57 pm

    Given his (its) utter inability to understand the category "Also on your computer keyboard", I'm not sure it could handle snowclones well. But I don't know enough to really have an opinion. I just found that category's obvious impenetrability fascinating.

  8. C. Jason said,

    February 17, 2011 @ 3:58 pm

    I'm not sure about this being an overhyped event — if anything it seems the opposite when compared to the media and attention given to the Deep Blue/Kasparov match-up.

    If IBM ever used the term AI in describing Watson, I never heard or read about it. It seems unfair to claim Watson failed the Turing Test because, to my knowledge, it wasn't competing in one.

    If, instead of looking at Watson as a primitive AI, you regard it as a highly sophisticated search engine, well, I'd say the 'Grand Challenge', as IBM called it, was not only met but surpassed.

  9. Trey Jones said,

    February 17, 2011 @ 4:52 pm

    It's important to note that the IBM researchers specifically disclaimed "understanding" of natural language. Which pretty much means that no one else has a leg to stand on to make that claim (not even one-legged gymnasts). Specifically, in a promotional video from January 2011, Dr. David Ferrucci, Watson research lead, said:

    “The reality is that being able to win a game at Jeopardy! doesn’t mean you’ve completely conquered the language understanding task. Far from it.”

    Jump to that point in the video:

    http://www.youtube.com/watch?v=FC3IryWr4c8&t=2m25s

  10. Hyman Rosen said,

    February 17, 2011 @ 5:03 pm

    "Not completely conquered language understanding" is not the same as "specifically disclaiming 'understanding' of natural language". Far from it.

  11. Ben Zimmer said,

    February 17, 2011 @ 5:41 pm

    The commercial that I discuss in The Atlantic article appears on IBM's YouTube channel under the title, "IBM Watson: Computer Understands Natural Language," and Ferrucci himself talks about how "computers couldn't understand" language before Watson.

    I do note later in the piece that Ferrucci sings a rather different tune about Watson's "understanding" elsewhere. Mixed messages, to say the least.

  12. Chris Brew said,

    February 17, 2011 @ 5:49 pm

    It all depends what you think is involved in understanding the "Overlords" meme. I would not be too sure that even 30% of the people who use the meme totally get it, and I suspect that Watson might be just as good at reacting appropriately as most of us are. It certainly has a fair amount of Simpsons lore to hand, and probably has a set of habits and responses reflecting the fact that The Simpsons is witty, ironic geek comedy. What more do you want?

  13. Spell Me Jeff said,

    February 17, 2011 @ 5:55 pm

    Overhyped in the sense that the media don't seem to understand what was in fact demonstrated. Either we're at the dawn of a whole new era, or Watson made a lot of goofs, har har.

    The actuality is pretty damned impressive, and the incorrect answers may provide as many insights into language processing as the correct ones. Probably more, since the correct answers probably just confirm the programmers' assumptions.

  14. naddy said,

    February 17, 2011 @ 7:10 pm

    From the Atlantic article:

    "For a computer, there is no connection from words to human experience and human cognition. The words are just symbols to the computer. How does it know what they really mean?"

    But we have that in the human realm, too. For instance, I know that the word Buche in my native German refers to some sort of deciduous tree. I also know that this kind of tree is called beech in English. And that is about all I know. I couldn't identify a beech tree if my life depended on it. Can I really claim to know the meaning of the word—or is it just a symbol to me? There are probably a lot of words in people's vocabulary that are "just symbols".

  15. John Roth said,

    February 17, 2011 @ 7:33 pm

    What would it mean for a computer to understand an utterance? Without an answer to that question, the whole debate over whether Watson "understands" the questions is meaningless noise.

    My own personal definition is that the computer needs to build a model that's arguably similar to one a human would build from the same utterance, and then use that to continue the conversation.

    Watson doesn't do that, and, since it's based on massive corpus techniques, it's headed off in a different direction at warp speed.

  16. Charly said,

    February 17, 2011 @ 8:54 pm

    I love Ken Jennings. I met him (briefly) at an art museum in Seattle. We also went to the same college! He really is that funny (and baby-faced!) in real life.

    Also, an eggcorn I found on urban dictionary (under "whipped," or possibly "p%%%%-whipped"):

    he responds to her "every beckon call"

    Interesting. Is there a more proper place to submit those?

    [BZ: There's the Eggcorn Database and its forum, but we already have beckon call covered.]

  17. Spell Me Jeff said,

    February 17, 2011 @ 10:12 pm

    ". . . what would Watson make of this smart-alecky remark?"

    Such an innocuous question. What does it mean in this context? Let's forget the remark and simply ask what "make of" means in Watson's context.

    I'm not convinced (and highly doubt) that Watson makes anything out of anything. To make something out of something sounds suspiciously like understanding, and we've dealt with that.

    Could Watson process the syntax and content of the remark and output some kind of response? I think so.

    Pattern recognition is one of the simpler things computers are good at. Should Watson do a loose search for Jennings's remark, the result would be dozens or hundreds of snowclones of the pattern "I, for one, welcome our (new) ___ overlord(s)." (A rough sketch of such a search appears at the end of this comment.)

    If Watson's database contains the concept of snowclones, any timestamp information would lead it to the Simpsons as a likely source of this one. Watson might easily output, "Yes, I enjoy the Simpsons too."

    If Watson's database knows that the Simpsons is classified as a "comedy," and that comedy is associated with laughter, and that laughter can be represented in text as "ha ha," then Watson could certainly generate a laugh.

    If asked to generate an original snowclone (remember, it's a pattern that could easily be identified as a pattern), then supplying virtually any NP would do the trick. And I'm pretty sure we've seen evidence that Watson could generate one in context, if in fact it is provided with a context. E.g., "I, for one, welcome our new televised overlords."

    Staying close to the Turing model, any of these responses might mimic understanding. Impressive, but not drastically so.

    Here's a trickier puzzle. What might Watson do if Jennings had uttered this: "I, for one, welcome our new Tucson massacre overlords." Because of the pattern quality, identifying a snowclone joke as a joke seems trivial. Identifying a tasteless joke as such, snowclone or otherwise, strikes me as something else. Watson will presumably parse some of this as a joke, and some of it as an event that causes sadness. Yet this alone does not make it tasteless. Plenty of jokes involve people being stewed to death, jumping out of airplanes without parachutes, or babies in blenders, and we might call them weird, or even sick, but not necessarily tasteless. The fact that the Tucson massacre is real does not entirely change things. A thousand jokes can begin, "Ronald Reagan, Jimmy Carter, and Elvis walk into a bar . . ." and the result might be stupid or even sick (whatever that really means) but not necessarily tasteless.

    Admittedly, tasteless is a subjective quality, but I suspect that most of us, like Potter Stewart, would know it when we saw it. Could Watson? Somehow Watson will have to reinterpret the concept of joke in the context of brutality that contemporary interlocutors are unlikely to consider funny. Which concept shall prevail? Will Watson be "stumped"?

    What kind of heuristic would have to exist for Watson to respond, "I don't think that's funny, Ken" rather than "ha ha"? A pretty damned good one, I think. One that might even pass the Turing test.
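    To make the pattern-matching point concrete, here is a rough Python sketch of the "loose search" I described above. The regular expression and the test strings are my own invention, and of course a far cry from whatever retrieval machinery Watson actually uses:

        import re

        # Match variants of the Overlord snowclone:
        # "I, for one, welcome our (new) ___ overlord(s)"
        OVERLORD = re.compile(
            r"I,?\s+for\s+one,?\s+welcome\s+our\s+(?:new\s+)?(?P<filler>.+?)\s+overlords?",
            re.IGNORECASE,
        )

        for line in [
            "I, for one, welcome our new insect overlords.",
            "I FOR ONE WELCOME OUR NEW COMPUTER OVERLORDS",
            "I, for one, welcome our robot overlord.",
        ]:
            m = OVERLORD.search(line)
            if m:
                print("snowclone filler:", m.group("filler"))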

  18. D.O. said,

    February 18, 2011 @ 2:22 am

    I know that the word Buche in my native German refers to some sort of deciduous tree. I also know that this kind of tree is called beech in English. And that is about all I know. I couldn't identify a beech tree if my life depended on it. Can I really claim to know the meaning of the word—or is it just a symbol to me?

    naddy, I suppose there are words in English and German for which you know quite a bit more about the object, process, or quality they refer to. The claim is that Watson does not have really deep understanding of any of the words in its possession.

  19. ESarr said,

    February 18, 2011 @ 5:18 am

    @Spell Me Jeff

    Interesting point. This reminds me that there’s a whole subplot in Heinlein’s classic SF novel ‘The Moon Is a Harsh Mistress’ involving a HAL-like computer learning how to tell good jokes from poor ones.

  20. zafrom said,

    February 18, 2011 @ 6:39 am

    Jennings's actual post-Stoker comment is
    (I FOR ONE WELCOME OUR
    NEW COMPUTER OVERLORDS)
    without the rhythm-changing commas. And a hundred years from now it may have morphed into "Mr. Watson—Come here—I want to see you." (I won't wait up to verify, as my condition will be terminal, but yours can contact mine.)

  21. Geoff Nunberg said,

    February 18, 2011 @ 7:45 am

    Not entirely on topic: a programmer friend of mine once said that the only proof of machine intelligence he'd find convincing would be if he entered a particularly clever line of code and the computer responded, "Ooh — neat!"

  22. Hyman Rosen said,

    February 18, 2011 @ 12:17 pm

    Accepting that understanding exists only when the computer has been programmed to emulate the human model is not useful, since we in fact do not know what the human model of understanding is! Who says that acquiring a large body of information and making associations within it is not the way the human brain works? Our consciousness is an emergent phenomenon of the operation of our brains and bodies. Using it to introspect does not give us actual information on how it works, and especially not on how it's implemented on the hardware of our bodies.

    There's no practical purpose to endless debate on whether this is "really" that. For natural language understanding, the naive approach is best – natural language is being understood when it appears to be understood.

  23. Nat said,

    February 18, 2011 @ 1:45 pm

    It's not quite inconsistent to simultaneously say that language understanding and the way the human mind works are too difficult for anyone to know about, and that crude behaviorism tells us what thinking and understanding are. But the first statement certainly undercuts any claim to know the second. If human consciousness is ineffable and mysterious, then one should admit that there's no way of telling whether or not Watson (or anything?) is really thinking or understanding language. If, on the other hand, you think that it's possible to scientifically investigate language and the human mind and brain (as I do), then this provides a basis for answering whether or not particular machines really do understand. There's then no reason to say that a thing understands language when it successfully imitates something else that understands language. Why confuse simulation with reality? Or, to put it another way, if all you're interested in is the practical performance of some machine, then you should admit that you're just not engaged with the question of what the machine is doing and whether it's the same in the relevant ways as what humans do.

    I'm not sure how valuable it is to quote Turing, as though there haven't been sixty subsequent years of development in linguistics, cognitive science, computability theory, neuroscience, psychology, etc.

  24. Ray Dillinger said,

    February 18, 2011 @ 1:51 pm

    For several years, programming conversational expert systems (for internal support, preservation of institutional knowledge, limited customer service, etc.) was what I did for a living. There is a significant AI problem that such systems work on or work with, but supposed human-level understanding is not what that problem is.

    Consider a wasp, or an ant. These creatures are not "intelligent" as we use the word when talking about humans. But they have a bundle of reflexes and instincts that lead them to interact with the chaotic material world in an effectively ordered way, often enough and consistently enough that they are successful organisms. And that's the fundamental root from which the evolution toward human-style intelligence begins.

    Systems like Watson (and the smaller conversational robots that I formerly implemented) have that kind of "bug" intelligence: they take reflexive action in response to "chaotic" stimuli (language has structure, but must be treated as "chaotic" when seen through statistical methods incapable of revealing that structure) and interact in an effective and appropriate way often enough to prosper in a particular kind of environment.

    Put another way — with as many neurons as a bee, and language well-coded as input and output, you could probably get something that behaves a lot like Watson. That wouldn't make it smarter than a bee. It's just got a bundle of reflexes and instincts that suit it for interacting with a different environment than the one the bee interacts with. It handles language and response about as well as a bee handles navigation in flight and finding nectar. Not on what we'd call a "conscious" level, but purely on instinct.

    But this is real progress. Not so long ago, it was a struggle to make something smarter than grass. The systems I worked on were mostly smarter than clams. Watson is probably smarter than a bee. The ratchet toward "real" artificial intelligence is turning faster and faster.

  25. Hyman Rosen said,

    February 18, 2011 @ 3:34 pm

    Human consciousness is not too difficult for anyone to know about, and admitting that understanding exists based on behavior is not crude. But we do not now know how human consciousness works, and there's a very good chance that we will have software which displays understanding of natural language well before we have a good explanation for how consciousness emerges from the working of the brain. Trying to dismiss such behavior as "instinct" rather than "true understanding" strikes me as nothing more than special pleading for a privileged view of humanity, not very different from similar older special pleading that attempted to privilege whites over blacks, or men over women.

  26. Mark F. said,

    February 18, 2011 @ 4:22 pm

    Hyman Rosen – I believe your points will be well taken when we really do have software that displays understanding of natural language. You seem to be acknowledging that we still don't, in which case I see no reason not to use scare quotes around "understand" with respect to a machine like Watson.

  27. D Young said,

    February 18, 2011 @ 6:58 pm

    I'm pretty sure the Simpsons quote was derived from The Hitchhiker's Guide to the Galaxy by Douglas Adams. I don't have the book on hand at the moment, and a Google search returns a slew of irrelevant posts, but I'm pretty confident. I will follow up later.

  28. J Lee said,

    February 18, 2011 @ 9:09 pm

    Let's Build a Smarter Planet! Together! Buy IBM Stock! You Can Be an IBMer Too! I'll take lame-ass corporate slogans for $1,000.

  29. army1987 said,

    February 19, 2011 @ 2:07 pm

    @naddy:
    Well, I guess this just means that people disagree about what “understand” means. Personally, I'd define “understanding a sentence” as being aware of what its real-world referents are; e.g., I wouldn't say I understand the word beech in your case. I'd say I understand (e.g.) a recipe in Russian if reading it enabled me to prepare the dish (provided I have the required skills), or at least to tell whether someone is preparing that dish; not if I was merely able to answer questions about it in Russian. And the person in Searle's Chinese Room definitely doesn't understand Chinese in this sense, even if they memorized the whole rulebook and were able to perform the algorithm in their mind.

  30. Dan Lufkin said,

    February 21, 2011 @ 12:09 pm

    Reading Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind gave me ideas that I've been mulling over for 35 years. From the Jaynesian point of view, you could argue that Watson is still in the bicameral stage (memory and processor, maybe) and needs to make the transition to meta-awareness (being aware that one is aware) to take the next step toward slightly human capability.

    I recall that Dr. Chris Lanz of SUNY Potsdam (music, physics & computer sci [!]) was working (~2005) on a Jaynesian approach to AI language processing, but I can't find any recent reports. I wonder whether any LLers are in synch with this topic. There's lots of random Googleism out there, but I can't discern any red thread.

  31. chris said,

    February 23, 2011 @ 10:51 am

    And the person in Searle's Chinese Room definitely doesn't understand Chinese in this sense, even if they memorized the whole rulebook and were able to perform the algorithm in their mind.

    Of course the *person* in the Chinese Room thought experiment doesn't understand Chinese. That's like asking which of your neurons speaks English. It's the *set of rules* that understands Chinese, regardless of whether it is being carried out by a person, a machine, a trained monkey, a group, etc. (Or in Douglas R. Hofstadter's version, an ant colony.)

  32. Pete said,

    June 10, 2011 @ 4:30 pm

    Another snowclone I've just noticed today: "the only X in the village", based on the only gay in the village, from Little Britain.
