Archive for Computational linguistics

Word-order "universals" are lineage-specific?

This post is the promised short discussion of Michael Dunn, Simon J. Greenhill, Stephen C. Levinson & Russell D. Gray, "Evolved structure of language shows lineage-specific trends in word-order universals", Nature, published online 4/13/2011. [Update: free downloadable copies are available here.] As I noted earlier, I recommend the clear and accessible explanation that Simon Greenhill and Russell Gray have put on the Austronesian Database website in Auckland — in fact, if you haven't read that explanation, you should go do so now, because I'm not going to recapitulate what they did and their reasons for doing it, beyond quoting the conclusion:

These family-specific linkages suggest that language structure is not set by innate features of the cognitive language parser (as suggested by the generativists), or by some over-riding concern to "harmonize" word-order (as suggested by the statistical universalists). Instead language structure evolves by exploring alternative ways to construct coherent language systems. Languages are instead the product of cultural evolution, canalized by the systems that have evolved during diversification, so that future states lie in an evolutionary landscape with channels and basins of attraction that are specific to linguistic lineages.

And I should start by saying that I'm neither a syntactician nor a typologist. The charitable way to interpret this is that I don't start with any strong prejudices on the subject of syntactic typology. From this unbiased perspective, it seems to me that this paper adds a good idea that has been missing from most traditional work in syntactic typology, but at the same time, it misses two good ideas that have been extensively developed in the related area of historical syntax.

Read the rest of this entry »

Comments (96)

Oice-vay Earch-say

According to the Official Google Research Blog,

As you might know, Google Voice Search is available in more than two dozen languages and dialects, making it easy to perform Google searches just by speaking into your phone.

Today it is our pleasure to announce the launch of Pig Latin Voice Search! […]

To configure Pig Latin Voice Search in your Android phone just go to Settings, select “Voice input & output settings”, and then “Voice recognizer settings”. In the list of languages you’ll see Pig Latin. Just select it and you are ready to roll in the mud!

It also works on iPhone with the Google Search app. In the app, tap the Settings icon, then "Voice Search" and select Pig Latin.
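For the record, the transformation itself is simple enough to sketch. Here's a minimal, illustrative converter — my own toy version of the classic rule (move the initial consonant cluster to the end and add "ay"), not anything to do with Google's actual implementation:

```python
def pig_latin(word):
    """Convert one lowercase word to hyphenated Pig Latin."""
    vowels = "aeiou"
    if word[0] in vowels:
        return word + "-way"              # vowel-initial words just take a suffix
    for i, ch in enumerate(word):
        if ch in vowels:                  # move the initial consonant cluster
            return word[i:] + "-" + word[:i] + "ay"
    return word + "-ay"                   # no vowel at all, e.g. "hmm"

print(" ".join(pig_latin(w) for w in "voice search".split()))
# prints "oice-vay earch-say"
```

Which is, of course, where this post's title comes from.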

Read the rest of this entry »

Comments (10)

Waseda talker

"This is cool", writes John Coleman — and it is. More later.

Comments (8)

Two Breakfast Experiments™: Literally

A couple of days ago, following up on Sunday's post about literally, Michael Ramscar sent me this fascinating graph:

What this shows us is a remarkably lawful relationship between the frequency of a verb and the probability of its being modified by literally, as revealed by counts from the 410-million-word COCA corpus. (The R² value means that a verb's frequency accounts for 88% of the variance in its chances of being modified by literally.)
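For readers who want to see how such an R² is computed, here is a sketch with invented counts (emphatically not Ramscar's data): regress the log probability of literally-modification on log frequency, and measure how much of the variance the fit explains.

```python
import math

# Hypothetical counts, for illustration only: (corpus frequency of verb,
# number of those occurrences modified by "literally")
data = [(120000, 600), (45000, 180), (9000, 30), (2500, 6), (800, 1)]

xs = [math.log(f) for f, _ in data]          # log frequency
ys = [math.log(m / f) for f, m in data]      # log P(modified by "literally")

# Ordinary least-squares fit of y on x
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))
```

The same recipe, run on the real verb counts, is what yields the 88% figure quoted above.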

Read the rest of this entry »

Comments (40)

Intellectual automation

Following up on the recent discussion of legal automation, I note that Paul Krugman has added a blog post ("Falling Demand for Brains?", 3/5/2011) and an Op-Ed column ("Degrees and Dollars", 3/6/2011), pushing an idea that he first suggested in a 1996 NYT Magazine piece ("White Collars Turn Blue", 9/29/1996), where he wrote as if from the perspective of 2096:

When something becomes abundant, it also becomes cheap. A world awash in information is one in which information has very little market value. In general, when the economy becomes extremely good at doing something, that activity becomes less, rather than more, important. Late-20th-century America was supremely efficient at growing food; that was why it had hardly any farmers. Late-21st-century America is supremely efficient at processing routine information; that is why traditional white-collar workers have virtually disappeared.

Read the rest of this entry »

Comments (18)

Legal automation

Over the past few days, we've discussed the possible relevance of corpus evidence in legal evaluations of ordinary-language meaning. Another (and socio-economically more important) legal application of computational linguistics is featured today in John Markoff's article, "Armies of Expensive Lawyers, Replaced by Cheaper Software", NYT 3/4/2011:

When five television studios became entangled in a Justice Department antitrust lawsuit against CBS, the cost was immense. As part of the obscure task of “discovery” — providing documents relevant to a lawsuit — the studios examined six million documents at a cost of more than $2.2 million, much of it to pay for a platoon of lawyers and paralegals who worked for months at high hourly rates.

But that was in 1978. Now, thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost. In January, for example, Blackstone Discovery of Palo Alto, Calif., helped analyze 1.5 million documents for less than $100,000.

Read the rest of this entry »

Comments (12)

Now on The Atlantic: The corpus in the court

On Tuesday, the Supreme Court ruled in FCC v. AT&T that corporations are not entitled to a right of "personal privacy," even if corporations can be construed as "persons." To reach this decision, they were aided by an amicus brief by Neal Goldfarb that presented corpus evidence on the types of nouns that the adjective "personal" typically modifies. Here on Language Log, Mark Liberman posted about the case on the day the decision was released, and now I have a piece for The Atlantic discussing the use of corpus analysis in the courtroom.
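The core of the corpus argument is easy to illustrate: tally the nouns that "personal" directly modifies and see what dominates. Here is a toy sketch with an invented four-sentence "corpus" and a crude adjacency heuristic — the actual brief used large corpora and proper collocation analysis, so treat everything below as illustrative:

```python
from collections import Counter

# Toy stand-in for a corpus; the real analysis drew on millions of words.
sentences = [
    "she valued her personal privacy above all",
    "the court weighed personal privacy against disclosure",
    "he offered a personal opinion on the matter",
    "the files contained personal information and personal correspondence",
]

counts = Counter()
for s in sentences:
    tokens = s.split()
    for i, tok in enumerate(tokens[:-1]):
        if tok == "personal":
            counts[tokens[i + 1]] += 1   # crude: treat the next word as the noun

print(counts.most_common(1))
# prints [('personal'-modified noun with highest count)] -> [('privacy', 2)]
```

The point of the real analysis was that "personal" overwhelmingly modifies nouns relating to individual human beings — which is the pattern the Court's reading tracked.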

Read the rest of this entry »

Comments (2)

…with just a hint of Naive Bayes in the nose

Coco Krumme, "Velvety Chocolate With a Silky Ruby Finish. Pair With Shellfish.", Slate 2/23/2011:

Using descriptions of 3,000 bottles, ranging from $5 to $200 in price from an online aggregator of reviews, I first derived a weight for every word, based on the frequency with which it appeared on cheap versus expensive bottles. I then looked at the combination of words used for each bottle, and calculated the probability that the wine would fall into a given price range. The result was, essentially, a Bayesian classifier for wine. In the same way that a spam filter considers the combination of words in an e-mail to predict the legitimacy of the message, the classifier estimates the price of a bottle using its descriptors.

The analysis revealed, first off, that "cheap" and "expensive" words are used differently. Cheap words are more likely to be recycled, while words correlated with expensive wines tend to be in the tail of the distribution. That is, reviewers are more likely to create new vocabulary for top-end wines. The classifier also showed that it's possible to guess the price range of a wine based on the words in the review.
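Krumme's classifier and data aren't reproduced here, but the technique she describes is the standard one. Here's a minimal multinomial Naive Bayes sketch with add-one smoothing, trained on four invented toy reviews (not her data) labeled cheap vs. expensive:

```python
import math
from collections import Counter

# Toy training reviews (invented for illustration), labeled by price class.
train = [
    ("pleasant fruity easy clean refreshing", "cheap"),
    ("juicy soft pleasant fruity sweet", "cheap"),
    ("velvety tannins silky cuvee complex", "expensive"),
    ("elegant structured silky old vines", "expensive"),
]

# Per-class word counts for a multinomial Naive Bayes model
word_counts = {"cheap": Counter(), "expensive": Counter()}
class_totals = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    class_totals[label] += 1

vocab = set(w for c in word_counts.values() for w in c)

def log_prob(text, label):
    """log P(label) + sum of log P(word | label), with add-one smoothing."""
    total = sum(word_counts[label].values())
    lp = math.log(class_totals[label] / sum(class_totals.values()))
    for w in text.split():
        lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return lp

def classify(text):
    return max(("cheap", "expensive"), key=lambda lbl: log_prob(lbl and text, lbl))

print(classify("silky velvety finish"))
# prints "expensive"
```

This is exactly the spam-filter analogy in miniature: each word nudges the posterior toward one class, and unseen words (like "finish" above) contribute equally to both, so the seen descriptors decide.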

Read the rest of this entry »

Comments (15)

Could Watson parse a snowclone?

Today on The Atlantic I break down Watson's big win over the humans in the Jeopardy!/IBM challenge. (See previous Language Log coverage here and here.) I was particularly struck by the snowclone that Ken Jennings left on his Final Jeopardy response card last night: "I, for one, welcome our new computer overlords." I use that offhand comment as a jumping-off point to dismantle some of the hype about Watson's purported ability to "understand" natural language.

Read the rest of this entry »

Comments (32)

You can help improve ASR

If you're a native speaker of English, and you have about an hour to spare, and the title of this post (or a small promised gift) convinces you to devote your spare hour to helping researchers improve automatic speech recognition, just pick one of these four links at random and follow the instructions: 1, 2, 3, 4.

[Update — the problem with the tests has been fixed — but more than 1,000 people have participated, and the server is saturated, so unless you've already started the experiment, please hold off for now!]

If you'd like a fuller explanation, read on.

Read the rest of this entry »

Comments (28)

Jeopardizing Valentine's Day

I've stolen the title of this post from the subject line of a message from Hal Daumé, who has invited folks at University of Maryland to a huge Jeopardy-watching party he's organizing tonight. Today is February 14, so for at least some of the audience, Jeopardy might indeed jeopardize Valentine's Day, substituting geeky fun (I use the term fondly) for candle-lit dinners.

In case you hadn't heard, the reason for the excitement, pizza parties, and so forth is that tonight's episode will, for the first time, feature a computer competing against human players — and not just any human players, but the two best-known Jeopardy champions. This is stirring up a new round of popular discussion about artificial intelligence, as Mark noted a few days ago. Many in the media — not to mention IBM, whose computer is doing the playing — are happy to play up the "smartest machine on earth", dawn-of-a-new-age angle. Though, to be fair, David Ferrucci, the IBMer who came up with the idea of building a Jeopardy-playing computer and led the project, does point out quite responsibly that this is only one step on the way to true natural language understanding by machine (e.g. at one point in this promotional video).

Regardless of how the game turns out, it's true that tonight will be a great achievement for language technology. Though I would also argue that the achievement is as much in the choice of problem as in the technology itself.

Read the rest of this entry »

Comments (36)

Language and intelligence

Two interesting articles on linguistic aspects of artificial intelligence have recently appeared in the popular press.

The first one is by Richard Powers ("What Is Artificial Intelligence?", NYT 2/6/2011):

In the category “What Do You Know?”, for $1 million: This four-year-old upstart the size of a small R.V. has digested 200 million pages of data about everything in existence and it means to give a couple of the world’s quickest humans a run for their money at their own game.

The question: What is Watson?

I.B.M.’s groundbreaking question-answering system, running on roughly 2,500 parallel processor cores, each able to perform up to 33 billion operations a second, is playing a pair of “Jeopardy!” matches against the show’s top two living players, to be aired on Feb. 14, 15 and 16.

Read the rest of this entry »

Comments (18)

Four revolutions

This post started out as a short report on some cool, socially relevant crowdsourcing for Egyptian Arabic. Somehow it morphed into a set of musings about the (near-) future of natural language processing…

A statistical revolution in natural language processing (henceforth NLP) took place from the late 1980s through the mid-1990s or so. Knowledge-based methods of the previous several decades were overtaken by data-driven statistical techniques, thanks to increases in computing power, better availability of data, and, perhaps most of all, the (largely DARPA-imposed) re-introduction of the natural language processing community to their colleagues doing speech recognition and machine learning.

There was another revolution that took place around the same time, though. When I started out in NLP, the big dream for language technology was centered on human-computer interaction: we'd be able to speak to our machines, in order to ask them questions and tell them what we wanted them to do. (My first job out of college involved a project where the goal was to take natural language queries, turn them into SQL, and pull the answers out of databases.) This idea has retained its appeal for some people, e.g., Bill Gates, but in the mid-1990s something truly changed the landscape, pushing that particular dream into the background: the Web made text important again. If the statistical revolution was about the methods, the Internet revolution was about the needs. All of a sudden there was a world of information out there, and we needed ways to locate relevant Web pages, to summarize, to translate, to ask questions and pinpoint the answers.

Fifteen years or so later, the next revolution is already well underway.

Read the rest of this entry »

Comments (9)