Archive for Computational linguistics

Ex-physicist takes on Heavy Metal NLP

"Heavy Metal and Natural Language Processing – Part 1", Degenerate State 4/20/2016:

Natural language is ubiquitous. It is all around us, and the rate at which it is produced in written, stored form is only increasing. It is also quite unlike any sort of data I have worked with before.

Natural language is made up of sequences of discrete characters arranged into hierarchical groupings: words, sentences and documents, each with both syntactic structure and semantic meaning.

Not only is the space of possible strings huge, but the interpretation of a small section of a document can take on vastly different meanings depending on the context that surrounds it.

This variation and versatility are what make natural language so powerful as a way to communicate and share ideas.

In the face of this complexity, it is not surprising that getting computers to understand natural language in the same way humans do is still an unsolved problem. That said, an increasing number of techniques have been developed to provide some insight into natural language. They tend to start by making simplifying assumptions about the data, and then use these assumptions to convert the raw text into a more quantitative structure, like vectors or graphs. Once in this form, statistical or machine learning approaches can be leveraged to solve a whole range of problems.

I haven't had much experience playing with natural language, so I decided to try out a few techniques on a dataset I scraped from the internet: a set of heavy metal lyrics (and associated genres).
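
As a minimal illustration of the "raw text into vectors" step described in the excerpt (my toy sketch, not code from the quoted post), a bag-of-words representation plus a simple classifier might look like this in scikit-learn:

```python
# A minimal sketch (toy example, not code from the quoted post) of the
# "raw text into vectors" step: bag-of-words counts plus a simple classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical stand-ins for scraped lyrics and their genre labels.
lyrics = ["ride the lightning into the night",
          "doom and gloom beneath a frozen moon",
          "fast riffs and screaming solos all night long"]
genres = ["thrash", "doom", "thrash"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(lyrics)   # each document becomes a vector of word counts

# Once the text is a matrix, ordinary statistical/ML tools apply.
clf = MultinomialNB().fit(X, genres)
print(clf.predict(vectorizer.transform(["gloom beneath the moon"])))
```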

[h/t Chris Callison-Burch]

Comments (6)

Some speech style dimensions

Earlier this year, I observed that there seem to be some interesting differences among individuals and styles of speech in the distribution of speech segment and silence segment durations — see e.g. "Sound and silence" (2/12/2013), "Political sound and silence" (2/8/2016) and "Poetic sound and silence" (2/12/2016).

So Neville Ryant and I decided to try to look at the question in a more systematic way. In particular, we took the opportunity to compare the many individuals in the LibriSpeech dataset, which consists of 5,832 English-language audiobook chapters read by 2,484 speakers, with a total audio duration of nearly 1,600 hours. This dataset was selected by some researchers at JHU from the larger LibriVox audiobook collection, which as a whole now comprises more than 50,000 hours of read English-language text. Material from the nearly 2,500 LibriSpeech readers gives us a background distribution against which to compare other examples of both read and spontaneous speech, yielding plots like the one below:

Read the rest of this entry »

Comments (1)

Advances in fuckometry

Tim Kenneally, "Ben Affleck Has a F-ing Thing or 18 or Say About His Bill Simmons Interview", The Wrap 6/23/2016.

Read the rest of this entry »

Comments (5)

The 2016 Blizzard Challenge

The Blizzard Challenge needs you!

Every year since 2005, an ad hoc group of speech technology researchers has held a "Blizzard Challenge", under the aegis of the Speech Synthesis Special Interest Group (SYNSIG) of the International Speech Communication Association.

The general idea is simple:  Competitors take a released speech database, build a synthetic voice from the data and synthesize a prescribed set of test sentences. The sentences from each synthesizer are then evaluated through listening tests.

Why "Blizzard"? Because the early competitions used the CMU ARCTIC datasets, which began with a set of sentences read from James Oliver Curwood's novel Flower of the North.

Anyhow, if you have an hour of your time to donate towards making speech synthesis better, sign up and be a listener!

Comments (2)

Q. Pheevr's Law

In a comment on one of yesterday's posts ("Adjectives and Adverbs"), Q. Pheevr wrote:

It's hard to tell with just four speakers to go on, but it looks as if there could be some kind of correlation between the ADV:ADJ ratio and the V:N ratio (as might be expected given that adjectives canonically modify nouns and adverbs canonically modify verbs). Of course, there are all sorts of other factors that could come into this, but to the extent that speakers are choosing between alternatives like "caused prices to increase dramatically" and "caused a dramatic increase in prices," I'd expect some sort of connection between these two ratios.

So since I have a relatively efficient POS tagging script, and an ad hoc collection of texts lying around, I thought I'd devote this morning's Breakfast Experiment™ to checking the idea out.
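
For readers who want to play along, here is a rough sketch of the kind of calculation involved, using NLTK's off-the-shelf tagger rather than the script actually used for the experiment:

```python
# A rough sketch (not the POS-tagging script used for the Breakfast Experiment)
# of computing the ADV:ADJ and V:N ratios with NLTK's off-the-shelf tagger.
# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
from collections import Counter
import nltk

def pos_ratios(text):
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    # Collapse Penn Treebank tags to coarse classes: RB* -> RB, JJ* -> JJ, etc.
    counts = Counter(tag[:2] for tag in tags)
    adv_adj = counts["RB"] / counts["JJ"]
    v_n = counts["VB"] / counts["NN"]
    return adv_adj, v_n

# Compare the two ratios across speakers or texts and check whether they
# are correlated, as Q. Pheevr suggests.
print(pos_ratios("Prices increased dramatically, causing a dramatic increase in prices."))
```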

Read the rest of this entry »

Comments (17)

Scientific prescriptivism: Garner Pullumizes?

The publisher's blurb for the fourth edition of Garner's Modern English Usage introduces a new feature:

With more than a thousand new entries and more than 2,300 word-frequency ratios, the magisterial fourth edition of this book — now renamed Garner's Modern English Usage (GMEU) — reflects usage lexicography at its finest. […]

The judgments here are backed up not just by a lifetime of study but also by an empirical grounding in the largest linguistic corpus ever available. In this fourth edition, Garner has made extensive use of corpus linguistics to include ratios of standard terms as compared against variants in modern print sources.

The largest linguistic corpus ever available, of course, is the Google Books ngram collection. And "word-frequency ratio" means, for example, the observation that in pluralizing corpus, corpora outnumbers corpuses by 69:1.
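
For the curious, a ratio of that kind can be checked against the downloadable 1-gram files. The sketch below is illustrative only; the file name, year cutoff, and row format are my assumptions based on the version-2 ngram release, not Garner's actual procedure:

```python
# Illustrative only: computing a "word-frequency ratio" of the corpora/corpuses
# kind from a downloaded (and uncompressed) Google Books 1-gram file. The
# version-2 files are tab-separated: ngram, year, match_count, volume_count.
import csv
from collections import Counter

counts = Counter()
with open("googlebooks-eng-all-1gram-20120701-c", encoding="utf-8") as f:
    for ngram, year, match_count, _volumes in csv.reader(f, delimiter="\t"):
        if ngram in ("corpora", "corpuses") and int(year) >= 2000:
            counts[ngram] += int(match_count)

print("corpora : corpuses =", counts["corpora"] / counts["corpuses"], ": 1")
```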

Read the rest of this entry »

Comments (19)

Data journalism and film dialogue

Hannah Anderson and Matt Daniels, "Film Dialogue from 2,000 screenplays, Broken Down by Gender and Age", A Polygraph Joint 2016:

Lately, Hollywood has been taking so much shit for rampant sexism and racism. The prevailing theme: white men dominate movie roles.

But it’s all rhetoric and no data, which gets us nowhere in terms of having an informed discussion. How many movies are actually about men? What changes by genre, era, or box-office revenue? What circumstances generate more diversity?

To begin answering these questions, we Googled our way to 8,000 screenplays and matched each character’s lines to an actor. From there, we compiled the number of lines for male and female characters across roughly 2,000 films, arguably the largest undertaking of script analysis, ever.

Read the rest of this entry »

Comments (7)

Some phonetic dimensions of speech style

My posts have been thin recently, mostly because over the past ten days or so I've been involved in the preparation and submission of five conference papers, on top of my usual commitments to teaching and meetings and visitors. Nobody's fault but mine, of course. Anyhow, this gives me some raw material that I'll try to present in a way that's comprehensible and interesting to non-specialists.

One of the papers, with Neville Ryant as first author, was an attempt to take advantage of a large collection of audiobook recordings to explore some dimensions of speaking style. The paper is still under review, so I'll wait to post a copy until its fate is decided — but there are some interesting ideas and suggestive results that I can share. And to motivate you to read the somewhat wonkish explanation that follows, I'll start off with a picture:

Read the rest of this entry »

Comments (3)

I'm learning… something?

Google Translate renders "Tanulok Magyarul" (Hungarian for "I'm learning Hungarian") as "I'm learning English":


Read the rest of this entry »

Comments (24)

Poetic sound and silence

Following up on "Political sound and silence", 2/8/2016, here's a level plot of speech segment durations and immediately-following silence segment durations from William Carlos Williams' poetry reading at the Library of Congress in May of 1945:


Read the rest of this entry »

Comments (3)

Political sound and silence

As part of an exercise/demonstration for a course, last night I ran Neville Ryant's second-best speech activity detector (SAD) on Barack Obama's Weekly Radio Addresses for 2010 (50 of them), and George W. Bush's Weekly Radio Addresses for 2008 (48 of them). The distributions of speech and silence durations, via R's kernel density estimation function, look like this:
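
For anyone who wants to produce a similar picture from their own SAD output, here is a minimal sketch, with scipy's gaussian_kde standing in for R's density() and a made-up segment list standing in for the real detector output:

```python
# A minimal sketch of this kind of plot, using scipy's gaussian_kde as a
# stand-in for R's density(). The segment list is a made-up example of SAD
# output, not the actual detector's format.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

segments = [(0.00, 2.31, "speech"), (2.31, 2.74, "silence"),
            (2.74, 5.02, "speech"), (5.02, 5.95, "silence"),
            (5.95, 7.10, "speech"), (7.10, 7.35, "silence")]

for label in ("speech", "silence"):
    durations = np.array([end - start for start, end, lab in segments if lab == label])
    grid = np.linspace(0.0, durations.max() * 1.5, 200)
    plt.plot(grid, gaussian_kde(durations)(grid), label=label)

plt.xlabel("segment duration (s)")
plt.ylabel("density")
plt.legend()
plt.show()
```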

Read the rest of this entry »

Comments (3)

Totally Word Mapper

Jack Grieve's Twitter-based Word Mapper (see "Geolexicography", 1/27/2016) is now available as a web app — like totally:

Read the rest of this entry »

Comments (18)

Style or artefact or both?

In "Correlated lexicometrical decay", I commented on some unexpectedly strong correlations over time of the ratios of word and phrase frequencies in the Google Books English 1gram dataset:

I'm sure that these patterns mean something. But it seems a little weird that OF as a proportion of all prepositions should correlate r=0.953 with the proportion of instances of OF immediately followed by THE, and  it seems weirder that OF as a proportion of all prepositions should correlate r=0.913 with the proportion of adjective-noun sequences immediately preceded by THE.

So let's hope that what these patterns mean is that the secular decay of THE has somehow seeped into some but not all of the other counts, or that some other hidden cause is governing all of the correlated decays. The alternative hypothesis is that there's a problem with the way the underlying data was collected and processed, which would be annoying.

And in a comment on a comment, I noted that the corresponding data from the Corpus of Historical American English, which is a balanced corpus collected from sources largely or entirely distinct from the Google Books dataset, shows similar unexpected correlations.

So today I'd like to point out that much simpler data — frequencies of  a few of the commonest words — shows some equally strong correlations over time in these same datasets.
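
The arithmetic behind these correlations is simple; the sketch below shows the basic calculation with made-up placeholder counts rather than the actual Google Books or COHA numbers:

```python
# The basic calculation behind such correlations: build two per-year ratio
# series and take their Pearson correlation. The counts below are made-up
# placeholders, not the actual Google Books or COHA numbers.
import numpy as np

of_counts     = np.array([980, 960, 950, 930, 910])        # tokens of OF
prep_counts   = np.array([2400, 2390, 2410, 2380, 2370])   # all prepositions
of_the_counts = np.array([310, 300, 292, 281, 270])        # OF immediately followed by THE

ratio_a = of_counts / prep_counts    # OF as a proportion of all prepositions
ratio_b = of_the_counts / of_counts  # proportion of OF tokens followed by THE

print(f"r = {np.corrcoef(ratio_a, ratio_b)[0, 1]:.3f}")
```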

Read the rest of this entry »

Comments (9)