Archive for Computational linguistics

Distances among genres and authors

Jon Gertner, "True Innovation", NYT 2/25/2012

At Bell Labs, the man most responsible for the culture of creativity was Mervin Kelly. […] In 1950, he traveled around Europe, delivering a presentation that explained to audiences how his laboratory worked.

His fundamental belief was that an “institute of creative technology” like his own needed a “critical mass” of talented people to foster a busy exchange of ideas. But innovation required much more than that. Mr. Kelly was convinced that physical proximity was everything; phone calls alone wouldn’t do. Quite intentionally, Bell Labs housed thinkers and doers under one roof. Purposefully mixed together on the transistor project were physicists, metallurgists and electrical engineers; side by side were specialists in theory, experimentation and manufacturing. Like an able concert hall conductor, he sought a harmony, and sometimes a tension, between scientific disciplines; between researchers and developers; and between soloists and groups.

One element of his approach was architectural. He personally helped design a building in Murray Hill, N.J., opened in 1941, where everyone would interact with one another. Some of the hallways in the building were designed to be so long that to look down their length was to see the end disappear at a vanishing point. Traveling the hall’s length without encountering a number of acquaintances, problems, diversions and ideas was almost impossible. A physicist on his way to lunch in the cafeteria was like a magnet rolling past iron filings.

I started work at Murray Hill in 1975, nine years after someone staged that picture of white lab coats extending to the vanishing point. And even though my first office was in an unused chemistry lab, I don't recall ever seeing more than an occasional pragmatic lab coat — whoever staged the photograph was apparently using the same lab-coat=scientist iconography as a couple of generations of cartoonists and movie-makers. But I can certainly attest to the value of hallway and lunchroom serendipity.

These days, some of the same serendipitous conversational cross-fertilization comes from random encounters in the corridors and cafeterias of the internet.

Read the rest of this entry »

Comments (15)

Cultural diffusion and the Whorfian hypothesis

Geoff Pullum summarizes Keith Chen's view of "The Effect of Language on Economic Behavior" as follows ("Keith Chen, Whorfian economist", 2/9/2012):

Chen […] thinks that if your language has clear grammatical future tense marking […], then you and your fellow native speakers have a dramatically increased likelihood of exhibiting high rates of obesity, smoking, drinking, debt, and poor pension provision. And conversely, if your language uses present-tense forms to express future time reference […], you and your fellow speakers are strikingly more likely to have good financial planning for retirement and sensible health habits. It is as if grammatical marking of the difference between the present and the future insulates you from seeing that the two are coterminous so you should plan ahead. Using present-tense forms for future time reference, on the other hand, encourages you to see that the future is just more of the present, and thus encourages you to put money in a 401(k).

Geoff notes that "Chen's evidence on the lifestyle indicators comes from massive amounts of hard data, and his mathematical analysis is serious". But in addition to expressing some qualms about the linguistic data, Geoff worries that the large number of linguistic traits and the large number of lifestyle and other cultural traits might give rise to spurious connections:

I also worry that it is too easy to find correlations of this kind, and we don't have any idea just how easy until a concerted effort has been made to show that the spurious ones are not supportable. For example, if we took "has (vs. does not have) pharyngeal consonants", or "uses (vs. does not use) close front rounded vowels", would we find correlations there too?

I have similar concerns; but I believe that I can explain and justify my worries without looking at any real data at all. There are two qualitative facts about the world that make it especially easy to fool ourselves about quantitative connections of this kind.
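
As a rough illustration of one piece of the problem, the sheer number of trait-by-indicator comparisons, here is a minimal simulation (mine, not Chen's analysis, and not the argument developed in the rest of the post): even when the "linguistic traits" and "lifestyle indicators" are generated completely at random, testing every trait against every indicator produces a steady crop of nominally significant correlations.

```python
# A minimal multiple-comparisons simulation: random binary "linguistic traits"
# and random numeric "lifestyle indicators" for a hypothetical language sample.
# None of the variables are really related, yet many pairs look "significant".

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_languages = 100    # hypothetical sample of languages
n_traits = 50        # binary grammatical features (e.g. "has future tense marking")
n_indicators = 20    # savings rate, smoking rate, obesity rate, ...

traits = rng.integers(0, 2, size=(n_languages, n_traits))
indicators = rng.normal(size=(n_languages, n_indicators))

spurious = sum(
    pearsonr(traits[:, i], indicators[:, j])[1] < 0.05
    for i in range(n_traits)
    for j in range(n_indicators)
)
print(f"{spurious} of {n_traits * n_indicators} trait/indicator pairs "
      f"look 'significant' at p < 0.05 by chance alone")
# Expect roughly 5% of the 1000 pairs, i.e. around 50 spurious "findings".
```

And this leaves out the other half of the worry, namely that languages and cultures are not independent samples in the first place, since related and neighboring communities tend to share both grammar and lifestyle.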

Read the rest of this entry »

Comments (18)

Automatic measurement of media bias

Mediate Metrics ("Objectively Measuring Media Bias") explains that

Based in Wheaton, IL, Mediate Metrics LLC is a privately held start-up founded by technology veteran and entrepreneur Barry Hardek. Our goal is to cultivate knowledgeable consumers of political news by objectively measuring media “slant” — news which contains either embedded statements of bias (opinion) or elements of editorial influence (factual content that reflects positively or negatively on U.S. political parties).

Mediate Metrics’ core technology is based on a custom machine classifier designed specifically for this application, and developed based on social science best practices with recognized leaders in the field of text analysis. Today, text mining systems are primarily used as general-purpose marketing tools for extracting insights from platforms such as Twitter and Facebook, or from other large electronic databases. In contrast, the Mediate Metrics classifier was specifically devised to identify statements of bias (opinions) and influence (facts that reflect positively or negatively) on U.S. political parties from news program transcripts.

(The links to Wikipedia articles on "social science" and "text mining" are original to their page.)
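
Mediate Metrics doesn't say much about how its classifier actually works, but the generic recipe for this kind of supervised sentence classification is easy to sketch. Here's a minimal version using scikit-learn, with an invented three-sentence training set standing in for what would really be thousands of hand-labeled transcript sentences; this is a sketch of the general technique, not their system.

```python
# Generic supervised text classification, NOT Mediate Metrics' classifier:
# label transcript sentences as "opinion", "influence", or "neutral" and fit a
# bag-of-words model. The training examples below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The senator's plan is a reckless giveaway.",       # opinion
    "Unemployment fell to 8.3 percent last month.",     # influence (reflects on a party)
    "The committee meets again on Tuesday.",            # neutral
    # ... in practice, thousands of hand-labeled transcript sentences
]
train_labels = ["opinion", "influence", "neutral"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_sentences, train_labels)

print(model.predict(["Critics call the bill a disaster for taxpayers."]))
```

Whether counts of sentences so labeled add up to an objective measure of "slant" is, of course, the interesting question.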

Read the rest of this entry »

Comments (14)

The "dance of the p's and b's": truth or noise?

Stanley Fish asks ("Mind Your P’s and B’s: The Digital Humanities and Interpretation", NYT 1/23/2012):

[H]ow do the technologies wielded by digital humanities practitioners either facilitate the work of the humanities, as it has been traditionally understood, or bring about an entirely new conception of what work in the humanities can and should be?

After a couple of lengthy detours, he concludes that neither any facilitation nor any worthwhile new conception is likely: the digital humanities

… will have little place for the likes of me and for the kind of criticism I practice: a criticism that narrows meaning to the significances designed by an author, a criticism that generalizes from a text as small as half a line, a criticism that insists on the distinction between the true and the false, between what is relevant and what is noise, between what is serious and what is mere play.

In other words, he agrees with Noam Chomsky that statistical analysis of the natural (or textual) world is intellectually empty — though I suspect that they agree on little else.

Read the rest of this entry »

Comments (39)

#CompuPolitics

A couple of months ago, I pointed out that entertainment industry folks are tracking Justin Bieber's popularity using automated sentiment analysis, and I used that as a leaping-off point for some comments about language technology and social media. Here I am again, but suddenly it's not just Justin's bank account we're talking about, it's the future of the country.

As the Republican primary season marches along, a novel use of technology in politics is evolving even more rapidly, and arguably in a more interesting way, than the race itself: the analysis of social media to take the pulse of public opinion about candidates. In addition to simply tracking mentions of political candidates, people are starting to suggest that volume and sentiment analysis on tweets (and other social media, but Twitter is the poster child here) might produce useful information about people's viewpoints, or even predict the success of political campaigns. Indeed, it's been suggested that numbers derived from Twitter traffic might be better than polls, or at least better than pundits. (Is that much of a bar to set? Never mind.)
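
The bookkeeping behind the simplest of these claims is easy to sketch. Here's a toy version of mention counting plus lexicon-based sentiment scoring, with invented tweets and a tiny invented word list standing in for the much larger lexicons (or trained classifiers) that real systems use.

```python
# A toy version of candidate mention-counting and lexicon-based sentiment
# scoring on tweets. The tweets and word lists are invented for illustration.

from collections import Counter

POSITIVE = {"win", "great", "strong", "love"}
NEGATIVE = {"lose", "weak", "disaster", "hate"}
CANDIDATES = ["romney", "gingrich", "santorum", "paul"]

def score(tweet: str) -> int:
    """Crude net sentiment: positive words minus negative words."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "Great debate performance by Romney tonight",
    "Gingrich looked weak on the economy",
    "I love Paul's answer on spending",
]

volume, sentiment = Counter(), Counter()
for t in tweets:
    for c in CANDIDATES:
        if c in t.lower():
            volume[c] += 1
            sentiment[c] += score(t)

for c in CANDIDATES:
    print(f"{c:10s} mentions={volume[c]:2d} net sentiment={sentiment[c]:+d}")
```

Whether numbers like these track anything that a pollster would recognize as public opinion is exactly the question at issue.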

Read the rest of this entry »

Comments (1)

Sexual accommodation

You've probably noticed that how people talk depends on who they're talking with. And for 40 years or so, linguists and psychologists and sociologists have referred to this process as "speech accommodation" or "communication accommodation" — or, for short, just plain "accommodation".  This morning's Breakfast Experiment™  explores a version of the speech accommodation effect as applied to groups rather than individuals — some ways that men and women talk differently in same-sex vs. mixed-sex conversations.
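
For readers curious about the mechanics of such a Breakfast Experiment™, here's a minimal sketch of the kind of tabulation involved: the rate of some word of interest per thousand words, broken down by speaker sex and interlocutor sex. The data structure below is hypothetical; the real thing would run over a large corpus of transcribed conversations.

```python
# A minimal sketch of group-level accommodation bookkeeping: count occurrences
# of a target word per 1,000 words, by (speaker sex, interlocutor sex).
# The example utterances are invented; a real study uses a large corpus.

from collections import defaultdict

# (speaker_sex, interlocutor_sex, utterance) triples
utterances = [
    ("F", "F", "oh I totally agree with you"),
    ("F", "M", "I agree up to a point"),
    ("M", "M", "yeah well maybe"),
    ("M", "F", "I totally see what you mean"),
]

target = "totally"
tokens = defaultdict(int)
hits = defaultdict(int)

for speaker, addressee, text in utterances:
    words = text.lower().split()
    tokens[(speaker, addressee)] += len(words)
    hits[(speaker, addressee)] += words.count(target)

for key in sorted(tokens):
    rate = 1000 * hits[key] / tokens[key]
    print(f"speaker {key[0]} -> addressee {key[1]}: {rate:.1f} per 1000 words")
```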

Read the rest of this entry »

Comments (13)

Logic! Language! Information! Scholarships!

’Tis the season to announce seasonal schools. Geoff Pullum announced a short course on grammar for language technologists as part of a winter school in Tarragona next month, and Mark Liberman announced a call for course proposals for the LSA's Linguistic Institute in summer 2013. But what if you can't make it to Tarragona next month, and can't wait a year and a half to get your seasonal school fix? Well, I have just the school for you!

Read the rest of this entry »

Comments (1)

Linguistic Deception Detection: Part 1

In "Reputable linguistic "lie detection"?", 12/5/2011, I promised to scrutinize some of the research on linguistic deception detection, focusing especially on the work cited in Anne Eisenberg's 12/3/2011 NYT article "Software that listens for lies".  This post is a first installment, looking at the work of David Larcker and Anastasia Zakolyukina ("Detecting Deceptive Discussions in Corporate Conference Calls", Rock Center for Corporate Governance, Working Paper No. 83, July 2010).

[Update: as of 6/5/2019, the working papers version no longer exists, but a version under the same title was published in the Journal of Accounting Research in 2012.]

Read the rest of this entry »

Comments (4)

Reputable linguistic "lie detection"?

Several readers have noted the article by Anne Eisenberg in Saturday's New York Times, "Software that listens for lies":

SHE looks as innocuous as Miss Marple, Agatha Christie’s famous detective.

But also like Miss Marple, Julia Hirschberg, a professor of computer science at Columbia University, may spell trouble for a lot of liars.

That’s because Dr. Hirschberg is teaching computers how to spot deception — programming them to parse people’s speech for patterns that gauge whether they are being honest.

For this sort of lie detection, there’s no need to strap anyone into a machine. The person’s speech provides all the cues — loudness, changes in pitch, pauses between words, ums and ahs, nervous laughs and dozens of other tiny signs that can suggest a lie.

Dr. Hirschberg is not the only researcher using algorithms to trawl our utterances for evidence of our inner lives. A small band of linguists, engineers and computer scientists, among others, are busy training computers to recognize hallmarks of what they call emotional speech — talk that reflects deception, anger, friendliness and even flirtation.
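
The cues listed there (pitch movement, loudness, pauses, fillers) are the kind of thing that is straightforward to pull out of a recording, whatever one makes of what they are later used for. Here's a generic feature-extraction sketch using librosa; this is not Hirschberg's system, "interview.wav" is a hypothetical file, and a real deception classifier would combine hundreds of such acoustic and lexical features and train on labeled truthful and deceptive speech.

```python
# Generic prosodic feature extraction (pitch variability, loudness, pauses)
# from one audio file. Not any particular lab's system; "interview.wav" is a
# hypothetical recording.

import numpy as np
import librosa

y, sr = librosa.load("interview.wav", sr=16000)

# Pitch track and its variability
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
pitch_sd = np.nanstd(f0)

# Loudness (RMS energy) and its variability
rms = librosa.feature.rms(y=y)[0]

# Crude pause detection: frames whose energy falls well below the median
silence = rms < 0.1 * np.median(rms)
pause_fraction = silence.mean()

print(f"pitch sd: {pitch_sd:.1f} Hz, "
      f"rms sd: {rms.std():.4f}, "
      f"fraction of low-energy frames: {pause_fraction:.2f}")
```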

Read the rest of this entry »

Comments off

The immortal Pierre Vinken

On November 7, publishers Reed Elsevier announced the passing of Pierre Vinken, former Reed Elsevier CEO and Chairman, at age 83. But to those of us in natural language processing, Mr. Vinken is 61 years old, now and forever.

Though I expect it was unknown to him, Mr. Vinken has been the most familiar of names in natural language processing circles for years, because he is the subject (in both senses, not to mention the inaugural bigram) of the very first sentence of the Wall Street Journal (WSJ) corpus:

Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29.
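
(For anyone who wants to meet Mr. Vinken programmatically: the same sentence opens the small Penn Treebank sample that ships with NLTK, so one line of Python brings him up.)

```python
# The first sentence of the WSJ portion of the Penn Treebank, as it appears in
# the sample distributed with NLTK (requires nltk.download("treebank") first).

from nltk.corpus import treebank

print(" ".join(treebank.sents()[0]))
# Pierre Vinken , 61 years old , will join the board as a nonexecutive director Nov. 29 .
```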

But there's a fascinating little twist that most NLPers are probably not aware of. I certainly wasn't.

Read the rest of this entry »

Comments (1)

Towel-snapping semiotics: How the frontal lobe comes out through the mouth

Comments (18)

Listeners needed for TTS standards intelligibility test

Email from Ann Syrdal on behalf of the S3-WG91 Standards Working Group:

The "Text-to-Speech Synthesis Technology" ASA Standards working group (S3-WG91) is conducting a web-based test that applies the method it will be proposing as an ANSI standard for evaluating TTS intelligibility.  It is an open-response test ("type what you hear"). The test uses syntactically correct but semantically meaningless sentences, Semantically Unpredictable Sentences (SUS).

To take the test, click here.
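
For the curious, here's roughly how Semantically Unpredictable Sentences work and how open responses can be scored: fill a fixed syntactic frame with words drawn at random from per-slot lists, so the result is grammatical but gives the listener no semantic context to guess from, then count how many of the words the listener typed back. The frame, word lists, and scoring rule below are invented for illustration, not the working group's materials.

```python
# A sketch of SUS generation and a simple open-response score.
# The syntactic frame and word lists are invented, not the S3-WG91 materials.

import random

frames = [("the", "ADJ", "NOUN", "VERB", "the", "ADJ", "NOUN")]
words = {
    "ADJ":  ["green", "sudden", "quiet", "broken"],
    "NOUN": ["table", "river", "answer", "window"],
    "VERB": ["paints", "follows", "carries", "forgets"],
}

def make_sus(rng: random.Random) -> str:
    """Fill a syntactic frame with randomly chosen words from each slot's list."""
    frame = rng.choice(frames)
    return " ".join(w if w not in words else rng.choice(words[w]) for w in frame)

def word_accuracy(reference: str, response: str) -> float:
    """Fraction of reference words reproduced in the typed response (order-free)."""
    ref = reference.lower().split()
    resp = set(response.lower().split())
    return sum(w in resp for w in ref) / len(ref)

rng = random.Random(1)
sentence = make_sus(rng)
print(sentence)
print(word_accuracy(sentence, sentence))  # a perfect transcription scores 1.0
```

Real intelligibility scoring is more careful than this (keyword-by-keyword, with spelling normalization), but the basic logic is the same: no semantic context, so listeners have to rely on the synthetic speech itself.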

Read the rest of this entry »

Comments (31)

Spinoculars re-spun?

Back in September of 2008, a Seattle-based start-up named SpinSpotter offered a tool that promised to detect "spin" or "bias" in news stories. The press release about the "Spinoculars" browser toolbar was persuasive enough to generate credulous and positive stories at the New York Times and at Business Week. But ironically, these very stories immediately set off BS detectors at Headsup: The Blog ("The King's Camelopard, or …", 9/8/2008) and at Language Log  ("Dumb mag buys grammar goof spin spot fraud", 9/10/2008), and subsequent investigation verified that there was essentially nothing behind the curtain ("SpinSpotter unspun", 9/10/2008). SpinSpotter was either a joke, a fraud, or a runaway piece of "demoware" meant to create enough buzz to attract some venture funding. Within six months, SpinSpotter was an ex-venture.

An article in yesterday's Nieman Journalism Lab (Andrew Phelps, "Bull beware: Truth goggles sniff out suspicious sentences in news", 11/22/2011) illustrates the same kind of breathless journalistic credulity ("A graduate student at the MIT Media Lab is writing software that can highlight false claims in articles, just like spell check.")  But the factual background in this case involves weaker claims (a thesis proposal, rather than a product release) that are more likely to be workable (matching news-story fragments against fact-checking database entries, rather than recognizing phrases that involve things like "disregarded context" and "selective disclosure").
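
The weaker claim, matching sentences in a news story against a database of already fact-checked statements, can be approximated with nothing fancier than fuzzy string similarity. A toy sketch follows (the claims database and article are invented, and the thesis project presumably does something more sophisticated):

```python
# A toy version of "match story fragments against fact-check entries": compare
# each sentence of an article to each claim in an (invented) fact-check database
# using simple string similarity, and flag close matches. A real system would
# need better sentence splitting and semantic rather than surface matching.

from difflib import SequenceMatcher

fact_checks = {
    "the stimulus bill created zero jobs": "False",
    "the unemployment rate doubled under this administration": "Mostly false",
}

article = (
    "Speaking in Ohio, the candidate repeated that the stimulus bill created zero jobs. "
    "He went on to discuss trade policy."
)

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for sentence in article.split(". "):
    for claim, verdict in fact_checks.items():
        if similarity(sentence, claim) > 0.5:
            print(f"Possible match ({verdict}): {sentence.strip()!r} ~ {claim!r}")
```

Even this crude version makes clear why the proposal is more plausible than SpinSpotter's: it only promises to surface claims somebody has already checked, not to recognize "spin" from linguistic form alone.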

Read the rest of this entry »

Comments (8)