Archive for Computational linguistics

Intellectual automation

Following up on the recent discussion of legal automation, I note that Paul Krugman has added a blog post ("Falling Demand for Brains?", 3/5/2011) and an Op-Ed column ("Degrees and Dollars", 3/6/2011), pushing an idea that he first suggested in a 1996 NYT Magazine piece ("White Collars Turn Blue", 9/29/1996), where he wrote as if from the perspective of 2096:

When something becomes abundant, it also becomes cheap. A world awash in information is one in which information has very little market value. In general, when the economy becomes extremely good at doing something, that activity becomes less, rather than more, important. Late-20th-century America was supremely efficient at growing food; that was why it had hardly any farmers. Late-21st-century America is supremely efficient at processing routine information; that is why traditional white-collar workers have virtually disappeared.

Read the rest of this entry »

Comments (18)

Legal automation

Over the past few days, we've discussed the possible relevance of corpus evidence in legal evaluations of ordinary-language meaning. Another (and socio-economically more important) legal application of computational linguistics is featured today in John Markoff's article, "Armies of Expensive Lawyers, Replaced by Cheaper Software", NYT 3/4/2011:

When five television studios became entangled in a Justice Department antitrust lawsuit against CBS, the cost was immense. As part of the obscure task of “discovery” — providing documents relevant to a lawsuit — the studios examined six million documents at a cost of more than $2.2 million, much of it to pay for a platoon of lawyers and paralegals who worked for months at high hourly rates.

But that was in 1978. Now, thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost. In January, for example, Blackstone Discovery of Palo Alto, Calif., helped analyze 1.5 million documents for less than $100,000.
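Markoff's piece doesn't get into the algorithms, but the simplest form of this kind of review triage is just relevance ranking: score every document against a description of the issue, and send the top of the list to the human reviewers first. Here's a toy sketch in Python; the four "documents" and the issue description are invented, and real e-discovery systems layer clustering, de-duplication, and learning from reviewer feedback on top of this basic step.

    # Toy "e-discovery" triage: rank documents by TF-IDF cosine similarity to a
    # description of the legal issue, so reviewers see likely-relevant ones first.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [  # stand-ins for the millions of documents produced in discovery
        "Meeting notes: discussed licensing terms with the network affiliates.",
        "Lunch menu for the holiday party, please RSVP by Friday.",
        "Email re: antitrust exposure of the proposed distribution agreement.",
        "Quarterly facilities report: parking garage maintenance schedule.",
    ]
    issue = "antitrust concerns about program licensing and distribution"

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(documents)
    issue_vector = vectorizer.transform([issue])

    scores = cosine_similarity(issue_vector, doc_vectors).ravel()
    for score, doc in sorted(zip(scores, documents), reverse=True):
        print(f"{score:.3f}  {doc}")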

Read the rest of this entry »

Comments (12)

Now on The Atlantic: The corpus in the court

On Tuesday, the Supreme Court ruled in FCC v. AT&T that corporations are not entitled to a right of "personal privacy," even if corporations can be construed as "persons." To reach this decision, they were aided by an amicus brief by Neal Goldfarb that presented corpus evidence on the types of nouns that the adjective "personal" typically modifies. Here on Language Log, Mark Liberman posted about the case on the day the decision was released, and now I have a piece for The Atlantic discussing the use of corpus analysis in the courtroom.
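For readers curious what that corpus evidence actually looks like, the core of it is a collocation count: which nouns does "personal" most often modify? Here's a minimal sketch of that kind of count in Python, using the small, pre-tagged Brown corpus that ships with NLTK as a stand-in for the much larger corpora cited in the brief; it's an illustration of the method, not a reconstruction of Goldfarb's analysis.

    # Count the nouns that immediately follow "personal" in a tagged corpus.
    from collections import Counter
    import nltk

    nltk.download("brown", quiet=True)
    from nltk.corpus import brown

    counts = Counter()
    tagged = list(brown.tagged_words())  # (word, part-of-speech tag) pairs
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if w1.lower() == "personal" and t2.startswith("NN"):
            counts[w2.lower()] += 1

    for noun, n in counts.most_common(15):
        print(f"{n:4d}  personal {noun}")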

Read the rest of this entry »

Comments (2)

…with just a hint of Naive Bayes in the nose

Coco Krumme, "Velvety Chocolate With a Silky Ruby Finish. Pair With Shellfish.", Slate 2/23/2011:

Using descriptions of 3,000 bottles, ranging from $5 to $200 in price from an online aggregator of reviews, I first derived a weight for every word, based on the frequency with which it appeared on cheap versus expensive bottles. I then looked at the combination of words used for each bottle, and calculated the probability that the wine would fall into a given price range. The result was, essentially, a Bayesian classifier for wine. In the same way that a spam filter considers the combination of words in an e-mail to predict the legitimacy of the message, the classifier estimates the price of a bottle using its descriptors.

The analysis revealed, first off, that "cheap" and "expensive" words are used differently. Cheap words are more likely to be recycled, while words correlated with expensive wines tend to be in the tail of the distribution. That is, reviewers are more likely to create new vocabulary for top-end wines. The classifier also showed that it's possible to guess the price range of a wine based on the words in the review.
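Krumme doesn't post her code, but what she describes is a textbook naive Bayes setup: per-word weights from cheap versus expensive reviews, combined into a price-range guess for each bottle. Here's an illustrative toy version in Python; the four "reviews" and the two price classes are made up, whereas the real analysis used 3,000 scraped descriptions and finer price bins.

    # Toy naive Bayes classifier over wine-review text.
    import math
    from collections import Counter

    # (review text, price class) pairs -- stand-ins for the scraped reviews
    reviews = [
        ("fruity easy drinking good value", "cheap"),
        ("jammy soft fruity pleasant", "cheap"),
        ("velvety tannins with a silky ruby finish", "expensive"),
        ("complex nose of tobacco cassis and graphite", "expensive"),
    ]

    def train(data, alpha=1.0):
        """Return log P(class) and Laplace-smoothed log P(word | class)."""
        class_counts = Counter(label for _, label in data)
        word_counts = {c: Counter() for c in class_counts}
        for text, label in data:
            word_counts[label].update(text.split())
        vocab = {w for counts in word_counts.values() for w in counts}
        log_prior = {c: math.log(n / len(data)) for c, n in class_counts.items()}
        log_like = {}
        for c, counts in word_counts.items():
            total = sum(counts.values()) + alpha * len(vocab)
            log_like[c] = {w: math.log((counts[w] + alpha) / total) for w in vocab}
        return log_prior, log_like, vocab

    def classify(text, log_prior, log_like, vocab):
        """Pick the class with the highest posterior log-probability."""
        scores = {c: log_prior[c] + sum(log_like[c][w] for w in text.split() if w in vocab)
                  for c in log_prior}
        return max(scores, key=scores.get)

    log_prior, log_like, vocab = train(reviews)
    print(classify("silky tannins and a complex finish", log_prior, log_like, vocab))
    # expected output: expensive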

Read the rest of this entry »

Comments (15)

Could Watson parse a snowclone?

Today on The Atlantic I break down Watson's big win over the humans in the Jeopardy!/IBM challenge. (See previous Language Log coverage here and here.) I was particularly struck by the snowclone that Ken Jennings left on his Final Jeopardy response card last night: "I, for one, welcome our new computer overlords." I use that offhand comment as a jumping-off point to dismantle some of the hype about Watson's purported ability to "understand" natural language.

Read the rest of this entry »

Comments (32)

You can help improve ASR

If you're a native speaker of English, and you have about an hour to spare, and the title of this post (or a small promised gift) convinces you to devote your spare hour to helping researchers improve automatic speech recognition, just pick one of these four links at random and follow the instructions: 1, 2, 3, 4.

[Update — the problem with the tests has been fixed — but more than 1,000 people have participated, and the server is saturated, so unless you've already started the experiment, please hold off for now!]

If you'd like a fuller explanation, read on.

Read the rest of this entry »

Comments (28)

Jeopardizing Valentine's Day

I've stolen the title of this post from the subject line of a message from Hal Daumé, who has invited folks at University of Maryland to a huge Jeopardy-watching party he's organizing tonight. Today is February 14, so for at least some of the audience, Jeopardy might indeed jeopardize Valentine's Day, substituting geeky fun (I use the term fondly) for candle-lit dinners.

In case you hadn't heard, the reason for the excitement, pizza parties, and so forth is that tonight's episode will, for the first time, feature a computer competing against human players — and not just any human players, but the two best-known Jeopardy champions. This is stirring up a new round of popular discussion about artificial intelligence, as Mark noted a few days ago. Many in the media — not to mention IBM, whose computer is doing the playing — are happy to play up the "smartest machine on earth", dawn-of-a-new-age angle. Though, to be fair, David Ferrucci, the IBMer who came up with the idea of building a Jeopardy-playing computer and led the project, does point out quite responsibly that this is only one step on the way to true natural language understanding by machine (e.g. at one point in this promotional video).

Regardless of how the game turns out, it's true that tonight will be a great achievement for language technology. Though I would also argue that the achievement is as much in the choice of problem as in the technology itself.

Read the rest of this entry »

Comments (36)

Language and intelligence

Two interesting articles on linguistic aspects of artificial intelligence have recently appeared in the popular press.

The first one is by Richard Powers ("What is Artificial Intelligence?", NYT 2/6/2011):

IN the category “What Do You Know?”, for $1 million: This four-year-old upstart the size of a small R.V. has digested 200 million pages of data about everything in existence and it means to give a couple of the world’s quickest humans a run for their money at their own game.

The question: What is Watson?

I.B.M.’s groundbreaking question-answering system, running on roughly 2,500 parallel processor cores, each able to perform up to 33 billion operations a second, is playing a pair of “Jeopardy!” matches against the show’s top two living players, to be aired on Feb. 14, 15 and 16.

Read the rest of this entry »

Comments (18)

Four revolutions

This started out as a short report on some cool, socially relevant crowdsourcing for Egyptian Arabic. Somehow it morphed into a set of musings about the (near-) future of natural language processing…

A statistical revolution in natural language processing (henceforth NLP) took place in the late 1980s up to the mid 90s or so. Knowledge based methods of the previous several decades were overtaken by data-driven statistical techniques, thanks to increases in computing power, better availability of data, and, perhaps most of all, the (largely DARPA-imposed) re-introduction of the natural language processing community to their colleagues doing speech recognition and machine learning.

There was another revolution that took place around the same time, though. When I started out in NLP, the big dream for language technology was centered on human-computer interaction: we'd be able to speak to our machines, in order to ask them questions and tell them what we wanted them to do. (My first job out of college involved a project where the goal was to take natural language queries, turn them into SQL, and pull the answers out of databases.) This idea has retained its appeal for some people, e.g., Bill Gates, but in the mid 1990s something truly changed the landscape, pushing that particular dream into the background: the Web made text important again. If the statistical revolution was about the methods, the Internet revolution was about the needs. All of a sudden there was a world of information out there, and we needed ways to locate relevant Web pages, to summarize, to translate, to ask questions and pinpoint the answers.

Fifteen years or so later, the next revolution is already well underway.

Read the rest of this entry »

Comments (9)

The case of the missing spamularity

A recent diary post by Charlie Stross ("It's made out of meat", 12/22/2010) poses a striking paradox. Or rather, he makes a prediction about a process whose trajectory, as so far observable, seems paradoxical to me.

Read the rest of this entry »

Comments (35)

Word lens

Competing with Culturomics for meme room today is Word Lens, which has a great YouTube ad:

Read the rest of this entry »

Comments (26)

Humanities research with the Google Books corpus

In Science yesterday, there was an article called "Quantitative analysis of culture using millions of digitized books" [subscription required] by at least twelve authors (eleven individuals, plus "the Google Books team"), which reports on some exercises in quantitative research performed on what is by far the largest corpus ever assembled for humanities and social science research. Culled from the Google Books collection, it contains more than 5 million books published between 1800 and 2000 — at a rough estimate, 4 percent of all the books ever published — of which two-thirds are in English and the others distributed among French, German, Spanish, Chinese, Russian, and Hebrew. (The English corpus alone contains some 360 billion words, dwarfing better structured data collections like the corpora of historical and contemporary American English at BYU, which top out at a paltry 400 million words each.)
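The underlying n-gram counts have also been released for download, so simple trajectories can be recomputed at home. Here's a minimal sketch in Python that pulls the yearly counts for a single word out of a locally downloaded 1-gram file; the file name is a placeholder, and the only assumption about the format is that the first three tab-separated columns are the n-gram, the year, and the match count (check the documentation of whichever release you download).

    # Sketch: yearly counts for one word from a downloaded Google Books 1-gram file.
    import gzip
    from collections import defaultdict

    NGRAM_FILE = "googlebooks-eng-all-1gram-sample.gz"  # placeholder file name
    TARGET = "telegraph"

    counts_by_year = defaultdict(int)
    with gzip.open(NGRAM_FILE, "rt", encoding="utf-8") as f:
        for line in f:
            ngram, year, match_count = line.split("\t")[:3]
            if ngram == TARGET:
                counts_by_year[int(year)] += int(match_count)

    for year in sorted(counts_by_year):
        print(year, counts_by_year[year])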

I have an article on the project appearing in today's Chronicle of Higher Education, which I'll link to here, and in later posts Ben or Mark will probably be addressing some of the particular studies, like the estimates of English vocabulary size, as well as the wider implications of the enterprise. For now, some highlights:

Read the rest of this entry »

Comments (58)

"Utterly noxious retail" as Search Engine Optimization

David Segal, "A bully finds a pulpit on the web", NYT 11/26/2010:

Today, when reading the dozens of comments [at getsatisfaction.com] about DecorMyEyes, it is hard to decide which one conveys the most outrage. It is easy, though, to choose the most outrageous. It was written by Mr. Russo/Bolds/Borker himself.

“Hello, My name is Stanley with DecorMyEyes.com,” the post began. “I just wanted to let you guys know that the more replies you people post, the more business and the more hits and sales I get. My goal is NEGATIVE advertisement.”

Read the rest of this entry »

Comments (8)