Archive for Computational linguistics

Word String frequency distributions

Several people have asked me about Alexander M. Petersen et al., "Languages cool as they expand: Allometric scaling and the decreasing need for new words", Nature Scientific Reports 12/10/2012. The abstract (emphasis added):

We analyze the occurrence frequencies of over 15 million words recorded in millions of books published during the past two centuries in seven different languages. For all languages and chronological subsets of the data we confirm that two scaling regimes characterize the word frequency distributions, with only the more common words obeying the classic Zipf law. Using corpora of unprecedented size, we test the allometric scaling relation between the corpus size and the vocabulary size of growing languages to demonstrate a decreasing marginal need for new words, a feature that is likely related to the underlying correlations between words. We calculate the annual growth fluctuations of word use which has a decreasing trend as the corpus size increases, indicating a slowdown in linguistic evolution following language expansion. This “cooling pattern” forms the basis of a third statistical regularity, which unlike the Zipf and the Heaps law, is dynamical in nature.
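For readers who want the two "static" regularities the abstract invokes in concrete form, here is a minimal sketch using a toy corpus and the textbook formulations of Zipf's and Heaps' laws (my own illustration, not the paper's code):

```python
from collections import Counter

def zipf_ranks(tokens):
    """Rank-frequency pairs: Zipf's law predicts freq ∝ rank**(-alpha), alpha ≈ 1."""
    counts = Counter(tokens)
    ordered = sorted(counts.items(), key=lambda kv: -kv[1])
    return [(rank, freq) for rank, (_, freq) in enumerate(ordered, start=1)]

def heaps_curve(tokens):
    """Vocabulary growth V(N): Heaps' law predicts V ∝ N**beta, with beta < 1."""
    seen, curve = set(), []
    for n, tok in enumerate(tokens, start=1):
        seen.add(tok)
        curve.append((n, len(seen)))
    return curve

corpus = "the cat sat on the mat and the dog sat on the log".split()
print(zipf_ranks(corpus)[:3])   # → [(1, 4), (2, 2), (3, 2)]
print(heaps_curve(corpus)[-1])  # → (13, 8): 13 tokens, 8 distinct types
```

On a real corpus one would fit alpha and beta on log-log axes; the paper's point is that the fitted exponents change with corpus size.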

The paper is thought-provoking, and the conclusions definitely merit further exploration. But I feel that the paper as published is guilty of false advertising. As the emphasized material in the abstract indicates, the paper claims to be about the frequency distributions of words in the vocabulary of English and other natural languages. In fact, I'm afraid, it's actually about the frequency distributions of strings in Google's 2009 OCR of printed books — and this, alas, is not the same thing at all.

It's possible that the paper's conclusions also hold for the distributions of words in English and other languages, but it's far from clear that this is true. At a minimum, the paper's quantitative results clearly will not hold for anything that a linguist, lexicographer, or psychologist would want to call "words". Whether the qualitative results hold or not remains to be seen.
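To make the string-versus-word distinction concrete, here is a hedged illustration with invented tokens of the sort OCR produces (hyphenation fragments, misreadings, numbers, punctuation): a type count over raw strings versus one restricted, crudely, to alphabetic word-like forms.

```python
import re

# Invented examples of the kinds of strings found in OCR'd book text.
tokens = ["the", "whale", "wha-", "le", "tlie", "3.14159", "Moby-Dick", "''", "the"]

# Under the paper's operational definition, every distinct string is a "word".
string_types = set(tokens)

# A crude linguist's filter: purely alphabetic forms, case-folded. Note that
# this still admits OCR misreadings like "tlie", so it understates the problem.
word_types = {t.lower() for t in tokens if re.fullmatch(r"[A-Za-z]+", t)}

print(len(string_types))  # 8 distinct strings
print(len(word_types))    # 4 alphabetic types
```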

Read the rest of this entry »

Comments (13)

Speech and silence

I recently became interested in patterns of speech and silence. People divide their discourse into phrases for many reasons: syntax, meaning, rhetoric; thinking about what to say next; running out of breath. But for current purposes, we're ignoring the content of what's said, and we're also ignoring the process of saying it. We're even ignoring the language being spoken. All we're looking at is the partition of the stream of talk into speech segments and silence segments.
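The kind of partition described above can be sketched very simply: given per-frame energies from an audio stream, label each frame by a threshold and collapse consecutive identical labels into runs. This is a toy energy-based segmenter of my own, not the method actually used in the analysis:

```python
def segment(frame_energies, threshold):
    """Partition per-frame energies into (label, run_length) segments,
    labeling a frame 'speech' if its energy exceeds the threshold."""
    runs = []
    for e in frame_energies:
        label = "speech" if e > threshold else "silence"
        if runs and runs[-1][0] == label:
            runs[-1] = (label, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((label, 1))              # start a new run
    return runs

energies = [0.1, 0.9, 0.8, 0.05, 0.02, 0.7, 0.6, 0.65, 0.1]
print(segment(energies, threshold=0.3))
# → [('silence', 1), ('speech', 2), ('silence', 2), ('speech', 3), ('silence', 1)]
```

Real voice-activity detectors smooth over short dips and use adaptive thresholds, but the output object of interest is the same: an alternating sequence of speech and silence durations.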

Why?

Read the rest of this entry »

Comments (10)

Dramatic reading of ASR voicemail transcription

Following up on the recent post about ASR error rates, here's Mary Robinette Kowal doing a dramatic reading of the Google Voice transcript of three phone calls (voicemail messages?) from John Scalzi:

Read the rest of this entry »

Comments (17)

High-entropy speech recognition, automatic and otherwise

Regular readers of LL know that I've always been a partisan of automatic speech recognition technology, defending it against unfair attacks on its performance, as in the case of "ASR Elevator" (11/14/2010). But Chin-Hui Lee recently showed me the results of an interesting little experiment that he did with his student I-Fan Chen, which suggests a fair (or at least plausible) critique of the currently-dominant ASR paradigm. His interpretation, as I understand it, is that ASR technology has taken a wrong turn, or more precisely, has failed to explore adequately some important paths that it bypassed on the way to its current success.

Read the rest of this entry »

Comments (23)

Literary moist aversion

Over the years, we've viewed the phenomenon of word aversion from several angles — a recent discussion, with links to earlier posts, can be found here. What we're calling word aversion is a feeling of intense, irrational distaste for the sound or sight of a particular word or phrase, not because its use is regarded as etymologically or logically or grammatically wrong, nor because it's felt to be over-used or redundant or trendy or non-standard, but simply because the word itself somehow feels unpleasant or even disgusting.

Some people react in this way to words whose offense seems to be entirely phonetic: cornucopia, hardscrabble, pugilist, wedge, whimsy. In other cases, it's plausible that some meaning-related associations play a role: creamy, panties, ointment, tweak. Overall, the commonest object of word aversion in English, judging from many discussions in web forums and comments sections, is moist.

One problem with web forums and comments sections as sources of evidence is that they don't tell us what fraction of the population experiences the phenomenon of word aversion, either in general or with respect to some particular word like moist. Dozens of commenters may join the discussion in a forum that has at most thousands of readers, but we can't tell whether they represent one person in five or one person in a hundred; nor do we know how representative of the general population a given forum or comments section is.

Pending other approaches, it occurred to me that we might be able to learn something from looking at usage in literary works. Authors who are squicked by moist, for example, will plausibly tend to find alternatives. (Well, in some cases the effect might motivate over-use; but never mind that for now…)

So for this morning's Breakfast Experiment™, I downloaded the April 2010 Project Gutenberg DVD, and took a quick look.
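The per-author tallying behind such a Breakfast Experiment™ can be sketched roughly as follows; the mini-corpus and rate function here are hypothetical stand-ins, not the actual analysis code:

```python
import re

def word_rate(text, word, per=1_000_000):
    """Occurrences of `word` per `per` running words of `text`, case-insensitive."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return per * tokens.count(word.lower()) / len(tokens)

# Hypothetical snippets standing in for per-author Project Gutenberg files.
corpus = {
    "Author A": "The moist earth smelled of rain. The damp, moist air clung to everything.",
    "Author B": "The dank cellar was damp and clammy, but never once described as humid.",
}
for author, text in corpus.items():
    print(author, round(word_rate(text, "moist"), 1))
```

An author who is squicked by *moist* should show a per-million rate near zero while using near-synonyms like *damp* at normal rates, which is the contrast the experiment looks for.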

Read the rest of this entry »

Comments (27)

Translation as cryptography as translation


Warren Weaver, 1947 letter to Norbert Wiener, quoted in "Translation", 1949:

[K]nowing nothing official about, but having guessed and inferred considerable about, powerful new mechanized methods in cryptography – methods which I believe succeed even when one does not know what language has been coded – one naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography.

Mark Brown, "Modern Algorithms Crack 18th Century Secret Code", Wired UK 10/26/2011:

Computer scientists from Sweden and the United States have applied modern-day, statistical translation techniques — the sort of which are used in Google Translate — to decode a 250-year-old secret message.

The original document, nicknamed the Copiale Cipher, was written in the late 18th century and found in the East Berlin Academy after the Cold War. It’s since been kept in a private collection, and the 105-page, slightly yellowed tome has withheld its secrets ever since.

But this year, University of Southern California Viterbi School of Engineering computer scientist Kevin Knight — an expert in translation, not so much in cryptography — and colleagues Beáta Megyesi and Christiane Schaefer of Uppsala University in Sweden, tracked down the document, transcribed a machine-readable version and set to work cracking the centuries-old code.

Read the rest of this entry »

Comments (22)

In favor of the microlex

Bruce Schneier quotes Stubborn Mule citing R.A. Howard:

Shopping for coffee you would not ask for 0.00025 tons (unless you were naturally irritating), you would ask for 250 grams. In the same way, talking about a 1/125,000 or 0.000008 risk of death associated with a hang-gliding flight is rather awkward. With that in mind, Howard coined the term “microprobability” (μp) to refer to an event with a chance of 1 in 1 million, and a 1 in 1 million chance of death he calls a “micromort” (μmt). We can now describe the risk of hang-gliding as 8 micromorts, and you would have to drive around 3,000 km in a car before accumulating a risk of 8 μmt, which helps compare these two remote risks.

This reminds me of the Google Ngram Viewer's habit of citing word frequencies as percentages, with uninterpretably large numbers of leading zeros after the decimal point:
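The unit conversions involved are trivial, which is exactly the argument for adopting them; a minimal sketch (function names are my own):

```python
def percent_to_ppm(pct):
    """Convert an Ngram-Viewer-style percentage to parts per million words.
    1% of a corpus = 10,000 occurrences per million words."""
    return pct * 10_000

def micromorts(prob):
    """Express a probability of death in micromorts (1 μmt = a 1-in-a-million chance)."""
    return prob * 1_000_000

# A frequency like 0.0000458% becomes a readable 0.458 per million words,
# and a 1/125,000 risk becomes 8 micromorts.
print(round(percent_to_ppm(0.0000458), 3))
print(round(micromorts(1 / 125_000), 3))
```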

Read the rest of this entry »

Comments (27)

Speech-to-speech translation

Rick Rashid, "Microsoft Research shows a promising new breakthrough in speech translation technology", 11/8/2012:

A demonstration I gave in Tianjin, China at Microsoft Research Asia’s 21st Century Computing event has started to generate a bit of attention, and so I wanted to share a little background on the history of speech-to-speech technology and the advances we’re seeing today.

In the realm of natural user interfaces, the single most important one – yet also one of the most difficult for computers – is that of human speech.

Read the rest of this entry »

Comments (29)

Pundits were confused and inaccurate

Also, the sky turns out to have been blue much of the time, and early returns are strongly suggesting that water is often wet. John Sides, "2012 Was the Moneyball Election", The Monkey Cage 11/7/2012:

Barack Obama’s victory tonight is also a victory for the Moneyball approach to politics.  It shows us that we can use systematic data—economic data, polling data—to separate momentum from no-mentum, to dispense with the gaseous emanations of pundits’ “guts,” and ultimately to forecast the winner.

Read the rest of this entry »

Comments (25)

The he's and she's of Twitter

My latest column for the Boston Globe is about some fascinating new research presented by Tyler Schnoebelen at the recent NWAV 41 conference at Indiana University Bloomington. Schnoebelen's paper, co-authored with Jacob Eisenstein and David Bamman, is entitled "Gender, styles, and social networks in Twitter" (abstract, full paper, presentation).

Read the rest of this entry »

Comments (6)

'lololololol' ≠ Tagalog

Ed Manley, "Detecting Languages in London's Twittersphere", UrbanMovements 10/22/2012:

Over the last couple of weeks, and as a bit of a distraction from finishing off my PhD, I've been working with James Cheshire looking at the use of different languages within my aforementioned dataset of London tweets.

I've been handling the data generation side, and the method really is quite simple.  Just like some similar work carried out by Eric Fischer, I've employed the Chromium Compact Language Detector – an open-source Python library adapted from the Google Chrome algorithm to detect a website's language – in detecting the predominant language contained within around 3.3 million geolocated tweets, captured in London over the course of this summer. […]

One issue with this approach that I did note was the surprising popularity of Tagalog, a language of the Philippines, which initially was identified as the 7th most tweeted language.  On further investigation, I found that many of these classifications included just uses of English terms such as 'hahahahaha', 'ahhhhhhh' and 'lololololol'.  I don't know much about Tagalog but it sounds like a fun language.  Nevertheless, Tagalog was excluded from our analysis.
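A crude pre-filter of the kind that would catch these laughter strings before they reach a language detector might look like this; the regex heuristic is my own invention, not part of the detector Manley used:

```python
import re

# Tokens built from repeated laughter chunks ('haha', 'lol', 'jaja') or long
# runs of 'a'/'h' ('ahhhhhhh') are poor input for language identification.
LAUGHTER = re.compile(r"^(?:[ah]{4,}|l(?:ol)+|(?:ja)+j?)$", re.IGNORECASE)

def looks_like_laughter(token):
    return bool(LAUGHTER.match(token))

def prefilter(tweet):
    """Drop laughter-like tokens before handing the tweet to a language detector."""
    return " ".join(t for t in tweet.split() if not looks_like_laughter(t))

print(prefilter("hahahahaha that was great lololololol"))  # → "that was great"
```

A heuristic like this will occasionally eat a real word in some language, which is presumably one reason the authors chose simply to exclude Tagalog instead.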

Read the rest of this entry »

Comments (10)

Nurbling

Comments (30)

A new chapter for Google Ngrams

When Google's Ngram Viewer was launched in December 2010 it encouraged everyone to be an amateur computational linguist, an amateur historical lexicographer, or a little of both. Today, the public interface that allows users to plumb the Google Books megacorpus has been relaunched, and the new version makes it even more enticing to researchers, both scholarly and nonscholarly. You can read all about it in my online piece for The Atlantic, as well as Jon Orwant's official introduction on the Google Research blog.

Read the rest of this entry »

Comments (13)