Archive for Computational linguistics

Correlated lexicometrical decay

This is a brief progress report on "The case of the disappearing determiners", which I've continued to poke at in my spare time.

As the red line in the plot below shows, the proportion of nouns immediately preceded by THE decreased over the course of the 20th century, from an average of 18.9% for books published in 1900-1910 to 13.5% for books published in 1990-2000. The blue line shows that the proportion of adjective+noun sequences immediately preceded by THE was higher overall, but followed a remarkably similar falling trajectory, from 29.1% in 1900-1910 to 21.2% in 1990-2000.
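The percentages above come from counting noun tokens and checking the word immediately before each one. A minimal sketch of that count over a POS-tagged token stream — toy data here, not the tagged Google Books text the real figures come from:

```python
def det_noun_rate(tagged, det="the"):
    """Fraction of noun tokens immediately preceded by the given determiner.

    `tagged` is a list of (word, pos) pairs; pos "NOUN" marks nouns.
    """
    nouns = preceded = 0
    for i, (word, pos) in enumerate(tagged):
        if pos != "NOUN":
            continue
        nouns += 1
        if i > 0 and tagged[i - 1][0].lower() == det:
            preceded += 1
    return preceded / nouns if nouns else 0.0

# Toy sentence: "The cat saw a dog" -- one of two nouns follows "the".
toy = [("The", "DET"), ("cat", "NOUN"), ("saw", "VERB"),
       ("a", "DET"), ("dog", "NOUN")]
print(det_noun_rate(toy))  # -> 0.5
```

Run per decade of publication, the same count yields the falling series plotted above.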


Dutch DE

Following up on yesterday's post "The case of the disappearing determiners", Gosse Bouma sent me some data from the CGN ("Corpus Gesproken Nederlands"), about determiner use in spoken Dutch by people born between 1914 and 1987. According to the CGN website,

The Spoken Dutch Corpus project was aimed at the construction of a database of contemporary standard Dutch as spoken by adults in The Netherlands and Flanders. […] In version 1.0, the results are presented that have emerged from the project. The total number of words available here is nearly 9 million (800 hours of speech). Some 3.3 million words were collected in Flanders, well over 5.6 million in The Netherlands.

It's not clear to me exactly when the recordings were made, but the project ran from 1998 to 2004.

Gosse sent data focused on the word de, which is the definite article for masculine and feminine ("common") nouns in Dutch, cognate with English the.  (The definite article for neuter nouns, het, is less frequent and can also be used as a pronoun.)

The results are similar to those that I reported earlier for English: Older people use the definite article more frequently than younger people (at least for people born from the 1950s onwards), and at every age, men use the definite article more than women.


The case of the disappearing determiners

For the past century or so, the commonest word in English has gradually been getting less common. Depending on data source and counting method, the frequency of the definite article THE has fallen substantially — in some cases at a rate as high as 50% per 100 years.

At every stage, writing that's less formal has fewer THEs, and speech generally has fewer still, so to some extent the decline of THE is part of a more general long-term trend towards greater informality. But THE is apparently getting rarer even in speech, so the change is more than just the (normal) shift of writing style towards the norms of speech.

There appear to be weaker trends in the same direction, at overall lower rates, in German, Italian, Spanish, and French.

I'll lay out some of the evidence for this phenomenon, mostly collected from earlier LLOG posts. And then I'll ask a few questions about what's really going on, and why and how it's happening. [Warning: long and rather wonky.]


Reddit culturomics

Randy Olson and Ritchie King, "How The Internet* Talks [*Well, the mostly young and mostly male users of Reddit, anyway]", fivethirtyeight.com 11/18/2015. The interactive viewer reveals some interesting trends.


Normalizing

Alberto Acerbi, Vasileios Lampos, Philip Garnett, & R. Alexander Bentley, "The Expression of Emotions in 20th Century Books", PLOS ONE 3/20/2013:

We report here trends in the usage of “mood” words, that is, words carrying emotional content, in 20th century English language books, using the data set provided by Google that includes word frequencies in roughly 4% of all books published up to the year 2008. We find evidence for distinct historical periods of positive and negative moods, underlain by a general decrease in the use of emotion-related words through time. Finally, we show that, in books, American English has become decidedly more “emotional” than British English in the last half-century, as a part of a more general increase of the stylistic divergence between the two variants of English language.
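The basic move behind trends like these is to turn raw yearly counts into relative frequencies and then z-score the series across years, so that periods of unusually "positive" or "negative" mood stand out. A sketch of that normalisation with invented counts (not the paper's data or its exact procedure):

```python
from statistics import mean, stdev

def mood_series(mood_counts, total_counts):
    """Yearly relative frequency of mood words, z-scored across years.

    Both arguments map year -> count; totals are all tokens for that year.
    """
    years = sorted(mood_counts)
    freqs = [mood_counts[y] / total_counts[y] for y in years]
    m, s = mean(freqs), stdev(freqs)
    return {y: (f - m) / s for y, f in zip(years, freqs)}

# Toy data: mood-word counts rising more slowly than total volume,
# so the z-scored series falls over time.
z = mood_series({1900: 50, 1950: 80, 2000: 90},
                {1900: 10_000, 1950: 20_000, 2000: 30_000})
print(z)  # roughly {1900: 1.0, 1950: 0.0, 2000: -1.0}
```

Normalising by total volume matters because the corpus grows enormously over the century; without it, every word category would appear to "increase".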


Positivity?

Christiaan H Vinkers et al., "Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis", BMJ 2015:

Design: Retrospective analysis of all scientific abstracts in PubMed between 1974 and 2014.

Methods: The yearly frequencies of positive, negative, and neutral words (25 preselected words in each category), plus 100 randomly selected words were normalised for the total number of abstracts. […]

Results: The absolute frequency of positive words increased from 2.0% (1974-80) to 17.5% (2014), a relative increase of 880% over four decades.
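The normalisation step in the Methods can be sketched as follows — per-year word hits divided by per-year abstract totals, plus a relative-change helper. The counts below are made up for illustration, not the paper's:

```python
def per_abstract_rate(word_hits, abstracts):
    """Per-year rate of a target word, normalised by abstract counts."""
    return {year: word_hits[year] / abstracts[year] for year in word_hits}

def relative_increase_pct(old, new):
    """Relative change from old to new, in percent."""
    return (new - old) / old * 100

# Hypothetical: 400 hits in 20,000 abstracts vs 3,000 hits in 25,000.
rates = per_abstract_rate({1980: 400, 2014: 3_000},
                          {1980: 20_000, 2014: 25_000})
print(relative_increase_pct(rates[1980], rates[2014]))
```

Normalising by abstract count rather than raw hits is what makes years with very different publication volumes comparable.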


"… to do is (to) VERB …"

Dyami Hayes writes to point out that there has been a change over the past century in the relative popularity (at least in printed text) of constructions like these:

What this book sets out to do is to provide some tools, ideas and suggestions for tackling non-verbal reasoning questions.

What it attempts to do is provide a framework for understanding how local governments are organized.

The Google Books ngram plots for provide, look, tell, and say show similar patterns — or summed for those four verbs (with the to do is VERB version in red and the to do is to VERB version in blue).
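The underlying comparison is just the yearly share of one variant among the two, computed from ngram counts. A sketch with invented counts (the real numbers come from the Google Books ngram data):

```python
def bare_share(counts):
    """Share of the '... to do is VERB' variant among both variants, per year.

    `counts` maps year -> (bare_count, to_count), e.g. ngram hits for
    'to do is provide' vs 'to do is to provide', summed over target verbs.
    """
    return {y: bare / (bare + to) for y, (bare, to) in counts.items()}

# Hypothetical counts showing the to-less variant gaining ground.
shares = bare_share({1920: (10, 90), 2000: (70, 30)})
print(shares)  # -> {1920: 0.1, 2000: 0.7}
```

Working with shares rather than raw counts controls for the overall growth of the corpus over the century.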


Dictionary-sampling estimates of vocabulary knowledge: No Zipf problems

Yesterday I explained why the long-tailed ("Zipf's Law") distribution of word frequencies makes it almost impossible to estimate vocabulary size by counting word types in samples of writing or speaking ("Why estimating vocabulary size by counting words is (nearly) impossible"). In a comment on that post, "flow" suggested that similar problems might afflict attempts to estimate vocabulary size by checking someone's knowledge of random samples from a dictionary.

But in fact this worry is groundless. There are many problems with the method — especially defining the list to sample from, and defining what counts as "knowing" an item in the sample — but the nature of word-frequency distributions is not one of them.
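To see why uniform dictionary sampling is immune to the word-frequency distribution: the estimator is just the dictionary size times the fraction of sampled entries the subject knows, and every entry is equally likely to be drawn no matter how rare it is in running text. A small simulation with invented numbers:

```python
import random

def estimate_vocab(dict_size, known, sample_size, seed=1):
    """Estimate vocabulary as dict_size * (fraction of a uniform sample known).

    `known` is the set of dictionary indices the subject would recognise.
    """
    rng = random.Random(seed)
    sample = [rng.randrange(dict_size) for _ in range(sample_size)]
    hits = sum(1 for w in sample if w in known)
    return dict_size * hits / sample_size

# Subject "knows" 40,000 of 100,000 entries; word frequency never enters,
# so the estimate lands near the true 40,000 up to sampling error.
print(estimate_vocab(100_000, set(range(40_000)), sample_size=2_000))
```

The sampling error here is ordinary binomial error, shrinking with sample size — nothing like the pathologies that Zipf's Law creates for type counting.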


Why estimating vocabulary size by counting words is (nearly) impossible

A few days ago, I expressed skepticism about a claim that "the human lexicon has a de facto storage limit of 8,000 lexical items", which was apparently derived from counting word types in various sorts of texts ("Lexical limits?", 12/5/2015). There are many difficult questions here about what we mean by "word", and what it means to be "in" the lexicon of an individual or a language — though I don't see how you could answer those questions so as to come up with a number as low as 8,000. But today I'd like to focus on some of the reasons that even after settling the "what is a word" questions, it's nearly hopeless to try to establish an upper bound by counting "word" types in text.
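One way to see the problem: under a Zipf-like frequency distribution, the number of distinct types observed keeps growing with sample size, without levelling off at anything near the true vocabulary size — so a type count mostly measures how much text you looked at. A small simulation, assuming a Zipf exponent of 1 and a hypothetical 50,000-word lexicon:

```python
import random

def types_in_sample(vocab_size, n_tokens, seed=0):
    """Distinct word types observed in n_tokens draws from a Zipfian lexicon."""
    rng = random.Random(seed)
    weights = [1 / rank for rank in range(1, vocab_size + 1)]
    tokens = rng.choices(range(vocab_size), weights=weights, k=n_tokens)
    return len(set(tokens))

# Type counts keep climbing with sample size, far short of the true 50,000.
for n in (1_000, 10_000, 100_000):
    print(n, types_in_sample(50_000, n))
```

So a bound on types-per-text tells you about text sizes and the frequency distribution, not about the size of anyone's lexicon.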


Kieran Snyder on CNN


A new source of jokes

Greg Corrado, "Computer, respond to this email", Google Research Blog 11/3/2015:

I get a lot of email, and I often peek at it on the go with my phone. But replying to email on mobile is a real pain, even for short replies. What if there were a system that could automatically determine if an email was answerable with a short reply, and compose a few suitable responses that I could edit or send with just a tap? […]

Some months ago, Bálint Miklós from the Gmail team asked me if such a thing might be possible. I said it sounded too much like passing the Turing Test to get our hopes up… but having collaborated before on machine learning improvements to spam detection and email categorization, we thought we’d give it a try. […]

We’re actually pretty amazed at how well this works. We’ll be rolling this feature out on Inbox for Android and iOS later this week, and we hope you’ll try it for yourself! Tap on a Smart Reply suggestion to start editing it. If it’s perfect as is, just tap send. Two-tap email on the go — just like Bálint envisioned.


Bookworm on vector space models

A couple of great posts by Ben Schmidt at Bookworm: "Vector space models for the digital humanities", 10/25/2015; and "Rejecting the gender binary: a vector-space operation", 10/30/2015.
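The "vector-space operation" in the second post is, as I understand it, vector rejection: subtracting from each word vector its component along a learned gender direction, leaving only the part orthogonal to that direction. In pure Python, with toy 3-dimensional vectors (real embeddings have hundreds of dimensions):

```python
def reject(v, g):
    """Remove from v its component along g, leaving v orthogonal to g."""
    scale = sum(a * b for a, b in zip(v, g)) / sum(a * a for a in g)
    return [a - scale * b for a, b in zip(v, g)]

# Toy example: after rejection, the vector has no component along g.
v_neutral = reject([2.0, 1.0, 3.0], [1.0, 0.0, 0.0])
print(v_neutral)  # -> [0.0, 1.0, 3.0]
```

Running nearest-neighbor queries on the rejected vectors is what lets Schmidt ask which words are similar once the gendered component is factored out.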

Update — A quick experiment by a Penn grad student confirms that somewhat plausible things emerge from fairly small and fairly noisy datasets…


Alien encounter

I read Ancillary Justice, the first book in Ann Leckie's Imperial Radch series, at some point in the spring of 2014, and so I was not at all surprised to find Brad DeLong referring to her as "an extremely sharp observer […] author of the devastatingly-good Ancillary Justice", in a blog post "Ann Leckie on David Graeber's "Debt: The First 5000 Mistakes": Handling the Sumerian Evidence Smackdown", 11/24/2014, where he quotes at length from her blog post "Debt", 2/24/2013.

And if you haven't read Ann Leckie's trilogy, you should do yourself a favor and start doing so right away. But this is Language Log, not Science Fiction Book Review Log or Unreliable Economic History Log, so why am I bringing up Ann Leckie now?
