Archive for Computational linguistics

Another nail in the ATEOTD=manager coffin

Some people are hard to persuade. In response to my post "'At the end of the day' not management-speak", Peter Taylor commented:

I argue that the first question to ask is whether hearing someone use the phrase "At the end of the day" conveys information on whether they are likely to be a manager…

Well, a definitive determination of the information gain involved, aside from its limited general interest, would require more resources than I can bring to bear over my morning coffee. But we can make a plausible guess, and the answer turns out to be that the "information gain" is probably pretty small, and is just about as likely to point away from the conclusion that the speaker or writer is a manager as towards it.
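
For the curious, here's the back-of-the-envelope arithmetic in Python. The rates below are invented purely for illustration; the point is only that when managers and non-managers use the phrase at nearly the same rate, Bayes' rule barely budges the prior, and the evidence amounts to a small fraction of a bit either way.

```python
import math

# Invented numbers, for illustration only
p_manager = 0.10     # prior probability that a random speaker/writer is a manager
rate_mgr = 0.012     # hypothetical rate of "at the end of the day" among managers
rate_other = 0.010   # hypothetical rate among everyone else

# Bayes' rule: posterior probability of "manager" given one observed use of the phrase
posterior = (rate_mgr * p_manager) / (
    rate_mgr * p_manager + rate_other * (1 - p_manager))

# Weight of evidence, in bits: the log-likelihood ratio of the observation
bits = math.log2(rate_mgr / rate_other)

print(f"prior {p_manager:.2f} -> posterior {posterior:.3f}  ({bits:+.2f} bits of evidence)")
# With rates this close, the posterior hardly moves; and if managers happened to use
# the phrase slightly *less* often, the same arithmetic would point the other way.
```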

Read the rest of this entry »

Comments (19)

Google Scholar: another metadata muddle?

Following on the critiques of the faulty metadata in Google Books that I offered here and in the Chronicle of Higher Education, Peter Jacso of the University of Hawaii writes in the Library Journal that Google Scholar is laced with millions of metadata errors of its own. These include wildly inflated publication and citation counts (which Jacso compares to Bernie Madoff's profit reports), numerous missing author names, and phantom authors assigned by the parser that Google elected to use to extract metadata, rather than using the metadata offered them by scholarly publishers and indexing/abstracting services:

In its stupor, the parser fancies as author names (parts of) section titles, article titles, journal names, company names, and addresses, such as Methods (42,700 records), Evaluation (43,900), Population (23,300), Contents (25,200), Technique(s) (30,000), Results (17,900), Background (10,500), or — in a whopping number of records — Limited (234,000) and Ltd (452,000).
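
Google's parser isn't public, of course, but it's easy to see how this sort of thing can happen. Here's a toy sketch of my own (not Google's code) of a naive "the first short capitalized line must be the author" heuristic happily minting a phantom author from a section heading:

```python
import re

def guess_author(first_page_lines):
    """Toy heuristic: treat the first short, capitalized line as the author name."""
    for line in first_page_lines:
        line = line.strip()
        if line and len(line.split()) <= 3 and re.fullmatch(r"[A-Z][A-Za-z.&\- ]*", line):
            return line
    return None

scanned_first_page = [
    "",
    "Methods",    # a section heading, not a person
    "Samples were collected in triplicate from each site.",
]
print(guess_author(scanned_first_page))   # -> "Methods", one more phantom author
```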

What makes this a serious problem is that many people regard the Google Scholar metadata as a reliable index of scholarly influence and reputation, particularly now that there are tools like the Google Scholar Citation Count gadget by Jan Feyereisl and the Publish or Perish software produced by Tarma Software, both of which take Google Scholar's metadata at face value. True, the data provided by traditional abstracting and indexing services are far from perfect, but their errors are dwarfed by those of Google Scholar, Jacso says.

Of course you could argue that Google's responsibilities with Google Scholar aren't quite analogous to those with Google Books, where the settlement has to pass federal scrutiny and where Google has obligations to the research libraries that provided the scans. Still, you have to feel sorry for any academic whose tenure or promotion case rests in part on the accuracy of one of Google's algorithms.

Comments (9)

Semantic fail

Leena Rao at TechCrunch points out a case where semantic search turned into anti-semitic search.

Read the rest of this entry »

Comments (41)

Serial improvement

Although I share Geoff Nunberg's disappointment in some aspects of Google's metadata for books, I've noticed a significant — though apparently unheralded — recent improvement. So I decided to check this out by following up Bill Poser's post yesterday about insect species, which I thought was likely to turn up an example of the right sort. And in fact, the third hit in a search for {hemipteran} is a relevant one: Irene McCulloch, "A comparison of the life cycle of Crithidia with that of Trypanosoma in the invertebrate host", University of California Publications in Zoology, 19(4): 135–190, October 4, 1919.

This paper appears in a volume that is part of a serial publication. And until recently, Google Books routinely gave all such publications the date of the first in the series, even if the result was a decade or a century out of whack.

Read the rest of this entry »

Comments (12)

Google Books: A Metadata Train Wreck

Mark has already blogged extensively about yesterday's Google Books Settlement Conference at Berkeley, where he and I both spoke on the panel on "quality" — which is to say, how well is Google Books doing this, and what, if anything, will hold their feet to the fire? This is almost certainly the Last Library, after all. There's no Moore's Law for capture, and nobody is ever going to scan most of these books again. So whoever is in charge of the collection a hundred years from now — Google? UNESCO? Wal-Mart? — these are the files that scholars are going to be using then. All of which lends a particular urgency to the concerns about whether Google is doing this right.

My presentation focussed on GB's metadata — a feature absolutely necessary to doing most serious scholarly work with the corpus. It's all well and good to use the corpus just for finding information on a topic — entering some key words and barrelling in sideways. (That's what "googling" means, isn't it?) But for scholars looking for a particular edition of Leaves of Grass, say, it doesn't do a lot of good just to enter "I contain multitudes" in the search box and hope for the best. Ditto for someone who wants to look at early-19th-century French editions of Le Contrat Social, or for linguists, historians, or literary scholars trying to trace the development of words or constructions: Can we observe the way happiness replaced felicity in the seventeenth century, as Keith Thomas suggests? When did "the United States are" start to lose ground to "the United States is"? How did the use of propaganda rise and fall by decade over the course of the twentieth century? And so on for all the questions that have made Google Books such an exciting prospect for all of us wordinistas and wordastri. But to answer those questions you need good metadata. And Google's are a train wreck: a mish-mash wrapped in a muddle wrapped in a mess.
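
To make concrete what's at stake in questions like these, here's a minimal sketch (with a made-up four-book "corpus") of the decade-by-decade tally a researcher would want to run. The whole computation hinges on the year field; date a 1940s book as 1899 and the counts go silently wrong:

```python
from collections import defaultdict

# Toy corpus of (publication_year, text) pairs; the years are the metadata
# that Google Books has to get right for this kind of query to mean anything.
toy_corpus = [
    (1914, "propaganda was not yet a dirty word"),
    (1943, "wartime propaganda posters covered every wall"),
    (1948, "propaganda and counter-propaganda filled the airwaves"),
    (1995, "the word propaganda now sounds faintly antique"),
]

hits = defaultdict(int)    # occurrences of the target word, per decade
totals = defaultdict(int)  # total word tokens, per decade

for year, text in toy_corpus:
    decade = (year // 10) * 10
    tokens = text.lower().split()
    totals[decade] += len(tokens)
    hits[decade] += tokens.count("propaganda")

for decade in sorted(totals):
    print(f"{decade}s: {hits[decade] / totals[decade]:.3f} 'propaganda' per token")
```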

Read the rest of this entry »

Comments (81)

"Team, Meet Girls; Girls, Meet Team"

The ideal David Bowie song, according to (Nick Troop's interpretation of) the output of Jamie Pennebaker's LIWC program as correlated with sales figures across Bowie's oeuvre:

Read the rest of this entry »

Comments (8)

Computational eggcornology

Chris Waigl, keeper of the Eggcorn Database, brings to our attention a paper that was presented at CALC-09 (Workshop on Computational Approaches to Linguistic Creativity, held in conjunction with NAACL HLT in Boulder, Colorado, on June 4, 2009). As part of a session on "Metaphors and Eggcorns," Sravana Reddy (University of Chicago Dept. of Computer Science) delivered a paper entitled "Understanding Eggcorns." Here's the abstract:

An eggcorn is a type of linguistic error where a word is substituted with one that is semantically plausible – that is, the substitution is a semantic reanalysis of what may be a rare, archaic, or otherwise opaque term. We build a system that, given the original word and its eggcorn form, finds a semantic path between the two. Based on these paths, we derive a typology that reflects the different classes of semantic reinterpretation underlying eggcorns.

You can read the PDF of Reddy's paper here. Yet another advance in the recognition of eggcornology as a legitimate linguistic subdiscipline.
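
The excerpt doesn't say how Reddy's system finds those paths, but readers who want to experiment can get a crude approximation with WordNet via NLTK. This is just my own sketch, not the paper's method; it looks for the most similar synset pair linking an original word to its eggcorn substitute:

```python
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def best_semantic_link(original, eggcorn):
    """Return (similarity, synset1, synset2, common hypernyms) for the closest pair."""
    best = None
    for s1 in wn.synsets(original):
        for s2 in wn.synsets(eggcorn):
            sim = s1.path_similarity(s2)
            if sim is not None and (best is None or sim > best[0]):
                best = (sim, s1, s2, s1.lowest_common_hypernyms(s2))
    return best

# e.g. the "free rein" -> "free reign" eggcorn: how close are the substituted nouns?
print(best_semantic_link("rein", "reign"))
```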

Comments (2)

"The" and "a" sex: a replication

On the basis of recent research in social psychology, I calculate that there is a 53% probability that Geoff Pullum is male. That estimate is based on the percentage of "the" and "a/an" in a recent Language Log post, "Stupid canine lexical acquisition claims", 8/12/2009.

But we shouldn't get too excited about our success in correctly sexing Geoff: the same process, applied to Sarah Palin's recent "Death Panel" Facebook post ("Statement on the Current Health Care Debate", 8/7/2009), estimates her probability of being male at 56%.
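
For anyone who wants to play along at home, the recipe is simple: count the articles, then turn the percentage into a probability. The logistic coefficients below are invented for illustration; they are not the ones behind the 53% and 56% figures above.

```python
import math
import re

def article_rate(text):
    """Percentage of word tokens that are 'the', 'a', or 'an'."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return 100 * sum(t in ("the", "a", "an") for t in tokens) / len(tokens)

def p_male(rate, intercept=-0.9, slope=0.10):
    """Map an article rate to P(male) with a (made-up) logistic curve."""
    return 1 / (1 + math.exp(-(intercept + slope * rate)))

# Typical expository prose runs somewhere around 8-10% articles:
for rate in (8.0, 9.0, 10.0):
    print(f"article rate {rate:.0f}% -> P(male) = {p_male(rate):.2f}")
```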

Read the rest of this entry »

Comments (8)

Thanks, Bill Dunn!

In a comment on a recent LL post, Daniel C. Parmenter wrote:

In my MT days (starting in the early nineties) we used the WSJ corpus a lot. I read recently that the availability of this corpus was in no small part thanks to you. And so I thank you. In those pre- and early Google/Altavista days the WSJ corpus was an enormous help. Thanks!

Daniel is referring to an archive of text from the Wall Street Journal, covering 1987-1989, originally published with some other raw material for corpus linguistics by the Data Collection Initiative of the Association for Computational Linguistics (ACL/DCI). And the person who most deserves thanks for the availability of the WSJ part of this publication — perhaps its most important part — is Bill Dunn, who was the head of Dow Jones Information Services in the late 1980s.

As far as I know, Bill's role in making this corpus available is not documented anywhere, so I'll take this opportunity to tell some of the story as I remember it. (The rest of this post is a slightly edited version of an email that I sent on 5/1/2008 to someone at the WSJ who had corresponded with Geoff Pullum about an article on the use of corpus materials in linguistic research.)

Read the rest of this entry »

Comments (7)

NLTK Book on Sale Now

The NLTK book, Natural Language Processing with Python, went on sale yesterday:

[Cover of Natural Language Processing with Python]

"This book is here to help you get your job done." I love that line (from the preface). It captures the spirit of the book. Right from the start, readers/users get to do advanced things with large corpora, including information-rich visualizations and sophisticated theory implementation. If you've started to see that your research would benefit from some computational power, but you have limited (or no) programming experience, don't despair — install NLTK and its data sets (it's a snap), then work through this book.

Read the rest of this entry »

Comments (5)

Everyone to obey the orders and guidelines Mzmlh call girl

Over the past couple of days, I've continued to use Google's alpha Persian-English translation system as part of an attempt to keep track of what's happening in Iran.

On long passages, the results are still at the fever-dream stage of machine translation, where enough relevant words and phrase-fragments emerge to leave a sort of impressionistic residue of content, but without much overall coherence. For example, I tried it on a bulletin from Mehr News yesterday evening that claimed to be a statement from the Assembly of Experts announcing full support for Khamenei's speech on Friday. This sentence

به گزارش خبرگزاری مهر ، در این بیانیه آمده است: مجلس خبرگان رهبری ضمن تشکر از حضور شکوهمند و حماسه‌ساز مردم در انتخابات ریاست جمهوری، حمایت قاطع خود را از بیانات روشنگرانه، وحدت‌بخش و داهیانه‌ مقام معظم رهبری در نماز جمعه تهران اعلام می‌دارد و با شکرگزاری به درگاه الهی نسبت به نعمت عظما و بی‌بدیل ولایت فقیه، این رکن رکین حدوث و تداوم انقلاب؛ همگان را به تبعیت از دستورات و رهنمودهای معظم‌‌له فرا می‌خواند.

comes out in the automatic translation as

Mehr News Agency reported, the statement states: the Assembly of Experts also thanked the glorious presence Hmas·hsaz and presidential elections, support their statements Rvshngranh decisive, and Vhdtbkhsh Dahyanh Ayatollah Khamenei Friday Prayers in Tehran and ready Thanksgiving Portal to the Divine favor Zma Bybdyl and velayat-e faqih, the pillars of the revolution and continuity Rkyn Hdvs; everyone to obey the orders and guidelines Mzmlh call girl.

Read the rest of this entry »

Comments (3)

Green verdure tone desert liquidation

Researchers at Google have responded to current events in Iran by offering an alpha version of Persian-to-English machine translation. I'm a big fan of statistical MT, and for that matter of Google's MT team, and current events in Iran are gripping, so I thought I'd try it out.

Read the rest of this entry »

Comments (7)

496M hits for "language log"? Alas, no.

You've probably heard about Microsoft's new search site Bing. I don't know much about it yet, but I did observe a couple of things that may be of interest to those of us who try to use web-search counts as data.

Read the rest of this entry »

Comments (6)