Archive for Computational linguistics

Numerous upon the written content material

Another fragment of aleatoric sub-poetry, from the 5,036,601 spam comments that Akismet has caught since we installed it:

I image this might be numerous upon the written content material? nevertheless I nonetheless believe that it may be suitable for just about any type of topic material, because it could frequently be pleasant to resolve a warm and delightful face or possibly listen a voice whilst initial landing.

Read the rest of this entry »

Comments (12)

Depopularization in the limit

George Orwell, in his hugely overrated essay "Politics and the English Language", famously insists you should "Never use a metaphor, simile, or other figure of speech which you are used to seeing in print." He thinks modern writing "consists in gumming together long strips of words which have already been set in order by someone else" (only he doesn't mean "long") — joining together "ready-made phrases" instead of thinking out what to say. His hope is that one can occasionally, "if one jeers loudly enough, send some worn-out and useless phrase … into the dustbin, where it belongs." That is, one can eliminate some popular phrase from the language by mocking it out of existence. In effect, he wants us to collaborate in getting rid of the most widely-used phrases in the language. In a Lingua Franca post published today I called his program elimination of the fittest (tongue in cheek, of course: the proposal is actually just to depopularize the most popular).

For a while, after I began thinking about this, I wondered what would be the ultimate fate of a language in which this policy was consistently and iteratively implemented. I even spoke to a distinguished theoretical computer scientist about how one might represent the problem mathematically. But eventually I realized it was really quite simple; at least in a simplified ideal case, I knew what would happen, and I could do the proof myself.
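
Here is a minimal simulation of the idea, nothing like a proof, using a made-up Zipf-style inventory of twenty phrases: repeatedly banish whichever phrase is currently the most popular.

```python
from collections import Counter

# Hypothetical Zipf-ish inventory: phrase i occurs about 1000/i times.
inventory = Counter({f"phrase_{i}": 1000 // i for i in range(1, 21)})

step = 0
while inventory:
    phrase, freq = inventory.most_common(1)[0]
    print(f"step {step:2d}: banish {phrase!r} (frequency {freq})")
    del inventory[phrase]
    step += 1
```

Since every phrase is eventually the most popular one remaining, iterating the policy empties the inventory entirely; in this simplified ideal case, depopularization in the limit leaves no phrases at all.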

Read the rest of this entry »

Comments off

Androids in Amazonia: recording an endangered language

[Photo: Augustine Tembé, recording a story using a smartphone]

The village of Akazu’yw lies in the rainforest, a day’s drive from the state capital of Belém, deep in the Brazilian Amazon. Last week I traveled there, carrying a dozen Android phones with a specialized app for recording speech. It wasn't all plain sailing…

Read the full story here.

Comments (5)

Android app for oral language documentation

Steven Bird, "Cyberlinguistics: recording the world's vanishing voices", 3/11/2013:

Of the 7,000 languages spoken on the planet, Tembé is at the small end with just 150 speakers left. In a few days, I will head into the Brazilian Amazon to record Tembé – via specially-designed technology – for posterity. Welcome to the world of cyberlinguistics.

Our new Android app Aikuma is still in the prototype stage. But it will dramatically speed up the process of collecting and preserving oral literature from endangered languages, if last year’s field trip to Papua New Guinea is anything to go by.

Read the whole thing.

Read the rest of this entry »

Comments (8)

PP attachment is hard

Alex Williams, "Creating Hipsturbia", NYT 2/15/2013:

“When we checked towns out,” Ms. Miziolek recalled, “I saw some moms out in Hastings with their kids with tattoos. A little glimmer of Williamsburg!”
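
The attachment ambiguity is easy to exhibit mechanically. Here's a toy grammar (my own, not anything from the Times piece) under which the quoted noun phrase gets two parses, one per attachment site for "with tattoos":

```python
import nltk

grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  VP -> V NP
  NP -> NP PP | Det N | N
  PP -> P NP
  Det -> 'some'
  N  -> 'I' | 'moms' | 'kids' | 'tattoos'
  V  -> 'saw'
  P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw some moms with kids with tattoos".split()
for tree in parser.parse(sentence):
    tree.pretty_print()   # one tree per attachment site for "with tattoos"
```

One parse puts the tattoos on the kids; the other puts them on the Hastings moms.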

Read the rest of this entry »

Comments (6)

Word String frequency distributions

Several people have asked me about Alexander M. Petersen et al., "Languages cool as they expand: Allometric scaling and the decreasing need for new words", Nature Scientific Reports 12/10/2012. The abstract (emphasis added):

We analyze the occurrence frequencies of over 15 million words recorded in millions of books published during the past two centuries in seven different languages. For all languages and chronological subsets of the data we confirm that two scaling regimes characterize the word frequency distributions, with only the more common words obeying the classic Zipf law. Using corpora of unprecedented size, we test the allometric scaling relation between the corpus size and the vocabulary size of growing languages to demonstrate a decreasing marginal need for new words, a feature that is likely related to the underlying correlations between words. We calculate the annual growth fluctuations of word use which has a decreasing trend as the corpus size increases, indicating a slowdown in linguistic evolution following language expansion. This “cooling pattern” forms the basis of a third statistical regularity, which unlike the Zipf and the Heaps law, is dynamical in nature.

The paper is thought-provoking, and the conclusions definitely merit further exploration. But I feel that the paper as published is guilty of false advertising. As the emphasized material in the abstract indicates, the paper claims to be about the frequency distributions of words in the vocabulary of English and other natural languages. In fact, I'm afraid, it's actually about the frequency distributions of strings in Google's 2009 OCR of printed books — and this, alas, is not the same thing at all.

It's possible that the paper's conclusions also hold for the distributions of words in English and other languages, but it's far from clear that this is true. At a minimum, the paper's quantitative results clearly will not hold for anything that a linguist, lexicographer, or psychologist would want to call "words". Whether the qualitative results hold or not remains to be seen.
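
For readers who want to poke at the distinction themselves, here's a rough sketch (my own setup, not the paper's pipeline) contrasting the distribution of raw whitespace-separated strings with that of crudely cleaned-up words; "some_corpus.txt" is a placeholder for whatever text you have on hand:

```python
import re
from collections import Counter

text = open("some_corpus.txt", encoding="utf-8").read()

strings = text.split()                       # raw strings, OCR noise and all
words = re.findall(r"[a-z]+", text.lower())  # crude stand-in for "words"

for label, tokens in (("strings", strings), ("words", words)):
    counts = Counter(tokens)
    print(f"{label}: {len(tokens)} tokens, {len(counts)} types")
    ranked = counts.most_common()
    for r in (1, 10, 100, 1000):             # spot-check the Zipf curve
        if r <= len(ranked):
            t, n = ranked[r - 1]
            print(f"  rank {r}: {t!r} x {n}")

# Heaps-style check on the cleaned tokens: types seen vs. tokens read.
seen = set()
for i, t in enumerate(words, 1):
    seen.add(t)
    if i % 100_000 == 0:
        print(f"  {i} tokens -> {len(seen)} types")
```

On most corpora the two tokenizations give noticeably different type counts and tail behavior, which is the nub of the complaint above.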

Read the rest of this entry »

Comments (13)

Speech and silence

I recently became interested in patterns of speech and silence. People divide their discourse into phrases for many reasons: syntax, meaning, rhetoric; thinking about what to say next; running out of breath. But for current purposes, we're ignoring the content of what's said, and we're also ignoring the process of saying it. We're even ignoring the language being spoken. All we're looking at is the partition of the stream of talk into speech segments and silence segments.
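
As a concrete illustration, here's about the simplest possible segmenter, a sketch of my own rather than the method actually used in this work: frame-level RMS energy against a fixed threshold, assuming a 16-bit mono WAV file (the filename and the threshold value are placeholders):

```python
import wave
import numpy as np

with wave.open("talk.wav", "rb") as w:
    rate = w.getframerate()
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

frame_len = int(rate * 0.02)                 # 20 ms analysis frames
n = len(samples) // frame_len
frames = samples[: n * frame_len].astype(np.float64).reshape(n, frame_len)
is_speech = np.sqrt((frames ** 2).mean(axis=1)) > 500.0

# Collapse the frame labels into alternating (speech?, duration-in-ms) runs.
runs, cur, count = [], bool(is_speech[0]), 0
for lab in is_speech:
    if bool(lab) == cur:
        count += 1
    else:
        runs.append((cur, count * 20))
        cur, count = bool(lab), 1
runs.append((cur, count * 20))
print(runs[:10])   # e.g. [(True, 640), (False, 220), ...]
```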

Why?

Read the rest of this entry »

Comments (10)

Dramatic reading of ASR voicemail transcription

Following up on the recent post about ASR error rates, here's Mary Robinette Kowal doing a dramatic reading of the Google Voice transcript of three phone calls (voicemail messages?) from John Scalzi:

Read the rest of this entry »

Comments (17)

High-entropy speech recognition, automatic and otherwise

Regular readers of LL know that I've always been a partisan of automatic speech recognition technology, defending it against unfair attacks on its performance, as in the case of "ASR Elevator" (11/14/2010). But Chin-Hui Lee recently showed me the results of an interesting little experiment that he did with his student I-Fan Chen, which suggests a fair (or at least plausible) critique of the currently-dominant ASR paradigm. His interpretation, as I understand it, is that ASR technology has taken a wrong turn, or more precisely, has failed to explore adequately some important paths that it bypassed on the way to its current success.
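
One way to see why high-entropy test material is so punishing: if the recognizer's language model normally supplies several bits per word of predictive help, random word strings take all of that help away. The numbers below are illustrative guesses of mine, not figures from Lee and Chen's experiment:

```python
import math

V = 20_000                        # assumed recognizer vocabulary size
h_random = math.log2(V)           # entropy of uniformly random word strings
h_text = 8.0                      # stand-in per-word entropy of running text
                                  # (perplexity ~256, a plausible trigram figure)

print(f"random words : {h_random:.1f} bits/word, perplexity {2**h_random:,.0f}")
print(f"running text : {h_text:.1f} bits/word, perplexity {2**h_text:,.0f}")
```

The gap between the two lines is the work the language model normally does; on high-entropy input the acoustic model is on its own.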

Read the rest of this entry »

Comments (23)

Literary moist aversion

Over the years, we've viewed the phenomenon of word aversion from several angles — a recent discussion, with links to earlier posts, can be found here. What we're calling word aversion is a feeling of intense, irrational distaste for the sound or sight of a particular word or phrase, not because its use is regarded as etymologically or logically or grammatically wrong, nor because it's felt to be over-used or redundant or trendy or non-standard, but simply because the word itself somehow feels unpleasant or even disgusting.

Some people react in this way to words whose offense seems to be entirely phonetic: cornucopia, hardscrabble, pugilist, wedge, whimsy. In other cases, it's plausible that some meaning-related associations play a role: creamy, panties, ointment, tweak. Overall, the commonest object of word aversion in English, judging from many discussions in web forums and comments sections, is moist.

One problem with web forums and comments sections as sources of evidence is that they don't tell us what fraction of the population experiences the phenomenon of word aversion, either in general or with respect to some particular word like moist. Dozens of commenters may join the discussion in a forum that has at most thousands of readers, but we can't tell whether they represent one person in five or one person in a hundred; nor do we know how representative of the general population a given forum or comments section is.

Pending other approaches, it occurred to me that we might be able to learn something from looking at usage in literary works. Authors who are squicked by moist, for example, will plausibly tend to find alternatives. (Well, in some cases the effect might motivate over-use; but never mind that for now…)

So for this morning's Breakfast Experiment™, I downloaded the April 2010 Project Gutenberg DVD, and took a quick look.
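
The gist of the experiment, in sketch form (the directory layout and author naming below are placeholders, not the actual structure of the Gutenberg DVD): count each author's moist-words per million running words.

```python
import re
from collections import Counter
from pathlib import Path

hits, totals = Counter(), Counter()
for path in Path("gutenberg/").glob("**/*.txt"):   # hypothetical layout
    author = path.stem                             # stand-in for real metadata
    words = re.findall(r"[a-z]+", path.read_text(errors="ignore").lower())
    totals[author] += len(words)
    hits[author] += sum(w.startswith("moist") for w in words)  # moist, moisture, ...

for author, total in totals.most_common(20):       # largest corpora first
    if total:
        print(f"{author:40s} {1e6 * hits[author] / total:8.1f} per million words")
```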

Read the rest of this entry »

Comments (27)

Translation as cryptography as translation

Warren Weaver, 1947 letter to Norbert Wiener, quoted in "Translation", 1949:

[K]nowing nothing official about, but having guessed and inferred considerable about, powerful new mechanized methods in cryptography – methods which I believe succeed even when one does not know what language has been coded – one naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography.

Mark Brown, "Modern Algorithms Crack 18th Century Secret Code", Wired UK 10/26/2011:

Computer scientists from Sweden and the United States have applied modern-day, statistical translation techniques — the sort of which are used in Google Translate — to decode a 250-year-old secret message.

The original document, nicknamed the Copiale Cipher, was written in the late 18th century and found in the East Berlin Academy after the Cold War. It’s since been kept in a private collection, and the 105-page, slightly yellowed tome has withheld its secrets ever since.

But this year, University of Southern California Viterbi School of Engineering computer scientist Kevin Knight — an expert in translation, not so much in cryptography — and colleagues Beáta Megyesi and Christiane Schaefer of Uppsala University in Sweden, tracked down the document, transcribed a machine-readable version and set to work cracking the centuries-old code.
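
Knight and colleagues' actual method is considerably more sophisticated, but the family resemblance between decipherment and statistical translation is easy to see in miniature. Here's a toy hill-climbing attack on a simple substitution cipher, scoring candidate letter mappings with a character-bigram model trained on any handy English text:

```python
import math, random, string
from collections import Counter

def bigram_model(training_text):
    """Log-probabilities of letter bigrams, estimated from plain English text."""
    letters = [c for c in training_text.lower() if c in string.ascii_lowercase]
    counts = Counter(zip(letters, letters[1:]))
    total = sum(counts.values())
    return {bg: math.log(n / total) for bg, n in counts.items()}

FLOOR = math.log(1e-7)            # penalty for unseen bigrams

def score(text, model):
    return sum(model.get(bg, FLOOR) for bg in zip(text, text[1:]))

def crack(ciphertext, model, iters=20_000):
    key = list(string.ascii_lowercase)
    random.shuffle(key)
    decode = lambda k: ciphertext.translate(
        str.maketrans(string.ascii_lowercase, "".join(k)))
    best = score(decode(key), model)
    for _ in range(iters):
        i, j = random.sample(range(26), 2)
        key[i], key[j] = key[j], key[i]          # propose a letter swap
        s = score(decode(key), model)
        if s > best:
            best = s                             # accept improvements
        else:
            key[i], key[j] = key[j], key[i]      # otherwise revert
    return decode(key)
```

The same move, treating the unknown text as noisy output of a known-language model, is what links Weaver's 1947 hunch to both Google Translate and the Copiale decipherment.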

Read the rest of this entry »

Comments (22)

In favor of the microlex

Bruce Schneier quotes Stubborn Mule citing R.A. Howard:

Shopping for coffee you would not ask for 0.00025 tons (unless you were naturally irritating), you would ask for 250 grams. In the same way, talking about a 1/125,000 or 0.000008 risk of death associated with a hang-gliding flight is rather awkward. With that in mind, Howard coined the term “microprobability” (μp) to refer to an event with a chance of 1 in 1 million and a 1 in 1 million chance of death he calls a “micromort” (μmt). We can now describe the risk of hang-gliding as 8 micromorts and you would have to drive around 3,000km in a car before accumulating a risk of 8μmt, which helps compare these two remote risks.

This reminds me of the Google Ngram Viewer's habit of citing word frequencies as percentages, with uninterpretably large numbers of leading zeros after the decimal point:
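
The arithmetic for converting such a percentage into a parts-per-million figure, one occurrence per million words being the natural unit here, is a one-liner; the sample figure below is made up for illustration:

```python
def percent_to_per_million(pct):
    """Ngram-Viewer-style percentage -> occurrences per million words."""
    return pct / 100 * 1_000_000

# A frequency the Viewer would print as 0.0000037%:
print(percent_to_per_million(0.0000037))   # -> 0.037 per million words
```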

Read the rest of this entry »

Comments (27)

Speech-to-speech translation

Rick Rashid, "Microsoft Research shows a promising new breakthrough in speech translation technology", 118/2012:

A demonstration I gave in Tianjin, China at Microsoft Research Asia’s 21st Century Computing event has started to generate a bit of attention, and so I wanted to share a little background on the history of speech-to-speech technology and the advances we’re seeing today.

In the realm of natural user interfaces, the single most important one – yet also one of the most difficult for computers – is that of human speech.

Read the rest of this entry »

Comments (29)