Archive for Computational linguistics

The new AI is so lifelike it's prejudiced!

Arvind Narayanan, "Language necessarily contains human biases, and so will machines trained on language corpora", Freedom to Tinker 8/24/2016:

We show empirically that natural language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well.
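The measurements in this line of work are word-embedding association tests: checking whether target words sit closer, in a vector space learned from a text corpus, to one attribute set than to another. Below is a minimal numpy sketch of that differential-association idea (not the authors' code; the word lists are illustrative and "embeddings.txt" is a placeholder for any GloVe-style file of pre-trained vectors):

```python
import numpy as np

def load_vectors(path):
    # Parse a GloVe-style text file: a word, then its vector components.
    vecs = {}
    with open(path) as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.array(parts[1:], dtype=float)
    return vecs

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b, vecs):
    # Mean similarity to attribute set A minus mean similarity to set B:
    # positive values mean the word leans toward A in the embedding space.
    sim_a = np.mean([cosine(vecs[word], vecs[a]) for a in attrs_a])
    sim_b = np.mean([cosine(vecs[word], vecs[b]) for b in attrs_b])
    return sim_a - sim_b

# Illustrative word lists only; "embeddings.txt" is a placeholder path.
vecs = load_vectors("embeddings.txt")
pleasant = ["joy", "love", "peace"]
unpleasant = ["agony", "awful", "failure"]
for target in ["flowers", "insects"]:
    print(target, association(target, pleasant, unpleasant, vecs))
```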

Read the rest of this entry »

Comments (9)

Annals of parsing

Two of the hardest problems in English-language parsing are prepositional phrase attachment and scope of conjunction. For PP attachment, the problem is to figure out how a phrase-final prepositional phrase relates to the rest of the sentence — the classic example is "I saw a man in the park with a telescope". For conjunction scope, the problem is to figure out just what phrases an instance of and is being used to combine.
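A toy grammar makes the PP-attachment problem concrete. The sketch below (assuming NLTK is installed; the grammar is written just for this example) lets a chart parser enumerate the readings of the telescope sentence, one tree per attachment choice:

```python
import nltk

# A toy grammar in which each prepositional phrase may attach either to a
# verb phrase or to a noun phrase, which is exactly the source of the ambiguity.
grammar = nltk.CFG.fromstring("""
S   -> NP VP
VP  -> V NP | VP PP
NP  -> Det N | NP PP | 'I'
PP  -> P NP
Det -> 'a' | 'the'
N   -> 'man' | 'park' | 'telescope'
V   -> 'saw'
P   -> 'in' | 'with'
""")

parser = nltk.ChartParser(grammar)
tokens = "I saw a man in the park with a telescope".split()
for tree in parser.parse(tokens):
    print(tree)   # one tree per attachment reading
```

Even with this tiny grammar, the parser returns several distinct trees, one for each combination of attachment choices for in and with.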

The title of a recent article offers some lovely examples of the problems that these ambiguities can cause: Suresh Naidu and Noam Yuchtman, "Back to the future? Lessons on inequality, labour markets, and conflict from the Gilded Age, for the present", VOX 8/23/2016.  The second phrase includes three ambiguous prepositions (on, from, and for) and one conjunction (and), and has more syntactically-valid interpretations than you're likely to be able to imagine unless you're familiar with the problems of automatic parsing.

Read the rest of this entry »

Comments (7)

Ex-physicist takes on Heavy Metal NLP

"Heavy Metal and Natural Language Processing – Part 1", Degenerate State 4/20/2016:

Natural language is ubiquitous. It is all around us, and the rate at which it is produced in written, stored form is only increasing. It is also quite unlike any sort of data I have worked with before.

Natural language is made up of sequences of discrete characters arranged into hierarchical groupings: words, sentences and documents, each with both syntactic structure and semantic meaning.

Not only is the space of possible strings huge, but the interpretation of a small section of a document can take on vastly different meanings depending on the context that surrounds it.

This variation and versatility of natural language is the reason that it is so powerful as a way to communicate and share ideas.

In the face of this complexity, it is not surprising that getting computers to understand natural language in the same way humans do is still an unsolved problem. That said, an increasing number of techniques have been developed to provide some insight into natural language. They tend to start by making simplifying assumptions about the data, and then using these assumptions to convert the raw text into a more quantitative structure, like vectors or graphs. Once in this form, statistical or machine learning approaches can be leveraged to solve a whole range of problems.

I haven't had much experience playing with natural language, so I decided to try out a few techniques on a dataset I scraped from the internet: a set of heavy metal lyrics (and associated genres).
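The "convert the raw text into a more quantitative structure" step described above is, in its simplest form, a bag-of-words count over the lyrics. A minimal scikit-learn sketch, with a few made-up lyric fragments standing in for the scraped dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up lyric fragments standing in for the scraped heavy metal corpus.
lyrics = [
    "fire and steel ride into battle",
    "shadows fall on a frozen throne",
    "ride the lightning into the endless night",
]

# Bag-of-words with tf-idf weighting: each document becomes a sparse vector.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(lyrics)

print(X.shape)                        # (n_documents, n_vocabulary_terms)
print(cosine_similarity(X[0], X[2]))  # lexical overlap between two "songs"
```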

[h/t Chris Callison-Burch]

Comments (6)

Some speech style dimensions

Earlier this year, I observed that there seem to be some interesting differences among individuals and styles of speech in the distribution of speech segment and silence segment durations — see e.g. "Sound and silence" (2/12/2013), "Political sound and silence" (2/8/2016) and "Poetic sound and silence" (2/12/2016).

So Neville Ryant and I decided to try to look at the question in a more systematic way. In particular, we took the opportunity to compare the many individuals in the LibriSpeech dataset, which consists of 5,832 English-language audiobook chapters read by 2,484 speakers, with a total audio duration of nearly 1,600 hours. This dataset was selected by some researchers at JHU from the larger LibriVox audiobook collection, which as a whole now comprises more than 50,000 hours of read English-language text. Material from the nearly 2,500 LibriSpeech readers gives us a background distribution against which to compare other examples of both read and spontaneous speech, yielding plots like the one below:

Read the rest of this entry »
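A hedged sketch of the background-distribution comparison described above, with invented lognormal durations standing in for measured LibriSpeech speech-segment durations: summarize each reader by a single statistic (here the median segment duration) and locate a new speaker within the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented lognormal durations (seconds) standing in for measured
# speech-segment durations: one array per background reader, plus one
# new speaker to compare against them.
background_readers = [rng.lognormal(mean=0.5, sigma=0.6, size=200)
                      for _ in range(100)]
new_speaker = rng.lognormal(mean=0.9, sigma=0.6, size=200)

# Summarize each reader by median speech-segment duration, then locate
# the new speaker's median within that background distribution.
background_medians = np.array([np.median(d) for d in background_readers])
pct = 100 * np.mean(background_medians < np.median(new_speaker))
print(f"new speaker's median segment duration sits at the {pct:.0f}th percentile")
```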

Comments (1)

Advances in fuckometry

Tim Kenneally, "Ben Affleck Has an F-ing Thing or 18 to Say About His Bill Simmons Interview", The Wrap 6/23/2016.

Read the rest of this entry »

Comments (5)

The 2016 Blizzard Challenge

The Blizzard Challenge needs you!

Every year since 2005, an ad hoc group of speech technology researchers has held a "Blizzard Challenge", under the aegis of the Speech Synthesis Special Interest Group (SYNSIG) of the International Speech Communication Association.

The general idea is simple:  Competitors take a released speech database, build a synthetic voice from the data and synthesize a prescribed set of test sentences. The sentences from each synthesizer are then evaluated through listening tests.
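The listening tests typically ask for ratings such as naturalness on a 1-to-5 scale, which get averaged into per-system mean opinion scores. A toy aggregation sketch (the ratings below are invented):

```python
from statistics import mean

# Invented listener ratings (1-to-5 naturalness scores) keyed by system;
# real Blizzard evaluations collect many such judgments per system.
ratings = {
    "system_A": [4, 5, 3, 4, 4],
    "system_B": [3, 3, 2, 4, 3],
    "natural_speech": [5, 5, 4, 5, 5],   # natural recordings as a reference ceiling
}

for system, scores in ratings.items():
    print(f"{system}: MOS = {mean(scores):.2f} from {len(scores)} ratings")
```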

Why "Blizzard"? Because the early competitions used the CMU ARCTIC datasets, which began with a set of sentences read from James Oliver Curwood's novel Flower of the North.

Anyhow, if you have an hour of your time to donate towards making speech synthesis better, sign up and be a listener!

Comments (2)

Q. Pheevr's Law

In a comment on one of yesterday's posts ("Adjectives and Adverbs"), Q. Pheevr wrote:

It's hard to tell with just four speakers to go on, but it looks as if there could be some kind of correlation between the ADV:ADJ ratio and the V:N ratio (as might be expected given that adjectives canonically modify nouns and adverbs canonically modify verbs). Of course, there are all sorts of other factors that could come into this, but to the extent that speakers are choosing between alternatives like "caused prices to increase dramatically" and "caused a dramatic increase in prices," I'd expect some sort of connection between these two ratios.

So since I have a relatively efficient POS tagging script, and an ad hoc collection of texts lying around, I thought I'd devote this morning's Breakfast Experiment™ to checking the idea out.
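A hedged sketch of the kind of check involved (not the script actually used): tag each text with NLTK's off-the-shelf tagger, compute the ADV:ADJ and V:N ratios, and correlate them across texts. The file names are hypothetical placeholders.

```python
import nltk
from scipy.stats import pearsonr

def pos_ratios(text):
    # Tag with NLTK's default tagger (models assumed already downloaded)
    # and compute ADV:ADJ and V:N ratios over the universal tagset.
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text),
                                           tagset="universal")]
    counts = {t: tags.count(t) for t in ("ADV", "ADJ", "VERB", "NOUN")}
    return counts["ADV"] / counts["ADJ"], counts["VERB"] / counts["NOUN"]

# Hypothetical per-speaker text files; any collection of texts would do.
paths = ["speaker1.txt", "speaker2.txt", "speaker3.txt", "speaker4.txt"]
adv_adj, v_n = [], []
for path in paths:
    with open(path) as f:
        a, b = pos_ratios(f.read())
    adv_adj.append(a)
    v_n.append(b)

r, p = pearsonr(adv_adj, v_n)
print(f"ADV:ADJ vs V:N ratio correlation: r = {r:.2f} (p = {p:.2g})")
```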

Read the rest of this entry »

Comments (17)

Scientific prescriptivism: Garner Pullumizes?

The publisher's blurb for the fourth edition of Garner's Modern English Usage introduces a new feature:

With more than a thousand new entries and more than 2,300 word-frequency ratios, the magisterial fourth edition of this book — now renamed Garner's Modern English Usage (GMEU) — reflects usage lexicography at its finest. […]

The judgments here are backed up not just by a lifetime of study but also by an empirical grounding in the largest linguistic corpus ever available. In this fourth edition, Garner has made extensive use of corpus linguistics to include ratios of standard terms as compared against variants in modern print sources.

The largest linguistic corpus ever available, of course, is the Google Books ngram collection. And "word-frequency ratio" means, for example, the observation that in pluralizing corpus, corpora outnumbers corpuses by 69:1.
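The ratio itself is just a division over variant counts; a small sketch of the arithmetic (the counts below are placeholders, not the figures behind Garner's 69:1):

```python
# Placeholder counts; the real figures would come from the Google Books
# ngram data (e.g. total occurrences of each variant over a span of years).
counts = {"corpora": 690_000, "corpuses": 10_000}

def frequency_ratio(counts, preferred, variant):
    # Ratio of the standard term to its variant, as reported in GMEU entries.
    return counts[preferred] / counts[variant]

ratio = frequency_ratio(counts, "corpora", "corpuses")
print(f"corpora:corpuses = {ratio:.0f}:1")   # 69:1 with these placeholder counts
```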

Read the rest of this entry »

Comments (19)

Data journalism and film dialogue

Hannah Anderson and Matt Daniels, "Film Dialogue from 2,000 screenplays, Broken Down by Gender and Age", A Polygraph Joint 2016:

Lately, Hollywood has been taking so much shit for rampant sexism and racism. The prevailing theme: white men dominate movie roles.

But it’s all rhetoric and no data, which gets us nowhere in terms of having an informed discussion. How many movies are actually about men? What changes by genre, era, or box-office revenue? What circumstances generate more diversity?

To begin answering these questions, we Googled our way to 8,000 screenplays and matched each character’s lines to an actor. From there, we compiled the number of lines for male and female characters across roughly 2,000 films, arguably the largest undertaking of script analysis, ever.
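A hedged sketch of the aggregation step described in the quote, using pandas on an invented table with one row per matched character; the real inputs are the screenplay-to-actor matches the authors compiled.

```python
import pandas as pd

# Invented stand-in for the compiled data: one row per character, with the
# character's line count and the gender of the matched actor.
rows = [
    {"film": "Film A", "character": "Lead",   "gender": "male",   "lines": 420},
    {"film": "Film A", "character": "Friend", "gender": "female", "lines": 150},
    {"film": "Film B", "character": "Hero",   "gender": "female", "lines": 380},
    {"film": "Film B", "character": "Rival",  "gender": "male",   "lines": 200},
]
df = pd.DataFrame(rows)

# Share of dialogue lines by gender, per film and over the whole collection.
per_film = df.groupby(["film", "gender"])["lines"].sum()
overall = df.groupby("gender")["lines"].sum() / df["lines"].sum()
print(per_film)
print(overall)
```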

Read the rest of this entry »

Comments (7)

Some phonetic dimensions of speech style

My posts have been thin recently, mostly because over the past ten days or so I've been involved in the preparation and submission of five conference papers, on top of my usual commitments to teaching and meetings and visitors. Nobody's fault but mine, of course. Anyhow, this gives me some raw material that I'll try to present in a way that's comprehensible and interesting to non-specialists.

One of the papers, with Neville Ryant as first author, was an attempt to take advantage of a large collection of audiobook recordings to explore some dimensions of speaking style. The paper is still under review, so I'll wait to post a copy until its fate is decided — but there are some interesting ideas and suggestive results that I can share. And to motivate you to read the somewhat wonkish explanation that follows, I'll start off with a picture:

Read the rest of this entry »

Comments (3)

I'm learning… something?

Google Translate renders "Tanulok Magyarul" (Hungarian for "I'm learning Hungarian") as "I'm learning English":


Read the rest of this entry »

Comments (24)

Poetic sound and silence

Following up on "Political sound and silence", 2/8/2016, here's a level plot of speech segment durations and immediately-following silence segment durations from William Carlos Williams' poetry reading at the Library of Congress in May of 1945:


Read the rest of this entry »
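To reproduce this kind of level plot on other data, pair each speech segment with the silence that immediately follows it and bin the two durations. A hedged sketch, with matplotlib's hist2d standing in for the level plot and an invented segment list in place of real speech-activity-detector output:

```python
import matplotlib.pyplot as plt

# Invented SAD-style segmentation: (label, start, end) in seconds,
# alternating speech and silence; real input would come from a speech
# activity detector run over the recording.
segments = [("speech", 0.0, 1.8), ("sil", 1.8, 2.3),
            ("speech", 2.3, 4.1), ("sil", 4.1, 4.4),
            ("speech", 4.4, 7.0), ("sil", 7.0, 8.2),
            ("speech", 8.2, 9.0), ("sil", 9.0, 9.6)]

# Pair each speech segment's duration with the immediately following silence.
pairs = [(e1 - s1, e2 - s2)
         for (lab1, s1, e1), (lab2, s2, e2) in zip(segments, segments[1:])
         if lab1 == "speech" and lab2 == "sil"]

speech_dur = [p[0] for p in pairs]
sil_dur = [p[1] for p in pairs]
plt.hist2d(speech_dur, sil_dur, bins=10)   # 2D histogram as a stand-in for the level plot
plt.xlabel("speech segment duration (s)")
plt.ylabel("following silence duration (s)")
plt.show()
```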

Comments (3)

Political sound and silence

As part of an exercise/demonstration for a course, last night I ran Neville Ryant's second-best speech activity detector (SAD) on Barack Obama's Weekly Radio Addresses for 2010 (50 of them), and George W. Bush's Weekly Radio Addresses for 2008 (48 of them). The distributions of speech and silence durations, via R's kernel density estimation function, look like this:

Read the rest of this entry »
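A hedged sketch of how density plots like those described above could be produced in Python, with scipy's gaussian_kde standing in for R's kernel density estimation and invented durations in place of the measured ones:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Invented stand-ins for measured durations (seconds); real values would
# come from the SAD's speech/silence segmentation of each address.
speech_durations = rng.lognormal(mean=0.6, sigma=0.5, size=500)
silence_durations = rng.lognormal(mean=-0.8, sigma=0.6, size=500)

xs = np.linspace(0, 8, 400)
for name, durations in [("speech", speech_durations),
                        ("silence", silence_durations)]:
    density = gaussian_kde(durations)   # rough analogue of R's density()
    plt.plot(xs, density(xs), label=name)

plt.xlabel("segment duration (s)")
plt.ylabel("estimated density")
plt.legend()
plt.show()
```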

Comments (3)