{"id":2847,"date":"2010-12-16T21:03:01","date_gmt":"2010-12-17T02:03:01","guid":{"rendered":"http:\/\/languagelog.ldc.upenn.edu\/nll\/?p=2847"},"modified":"2010-12-17T15:20:38","modified_gmt":"2010-12-17T20:20:38","slug":"humanities-research-with-the-google-books-corpus","status":"publish","type":"post","link":"https:\/\/languagelog.ldc.upenn.edu\/nll\/?p=2847","title":{"rendered":"Humanities research with the Google Books corpus"},"content":{"rendered":"<p>In <em>Science<\/em> <span style=\"text-decoration: line-through;\">today, there's<\/span> yesterday, there was an article called \"<a href=\"http:\/\/www.sciencemag.org\/content\/early\/2010\/12\/15\/science.1199644\">Quantitative analysis of culture using millions of digitized books<\/a>\" [subscription required] by at least twelve authors (eleven individuals, plus \"the Google Books team\"), which reports on some exercises in quantitative research performed on what is by far the largest corpus ever assembled for humanities and social science research. Culled from the Google Books collection, it contains more than 5 million books published between 1800 and 2000 &#8212; at a rough estimate, 4 percent of all the books ever published &#8212; of which two-thirds are in English and the others distributed among French, German, Spanish, Chinese, Russian, and Hebrew. (The English corpus alone contains some 360 billion words, dwarfing better structured data collections like the <a href=\"http:\/\/corpus.byu.edu\/\">corpora<\/a> of historical and contemporary American English at BYU, which top out at a paltry 400 million words each.)<\/p>\n<p>I have an article on the project <span style=\"text-decoration: line-through;\">appearing in tomorrow's<\/span> in today's <em><a href=\"http:\/\/chronicle.com\/section\/Home\/5\">Chronicle of Higher Education<\/a><\/em>, which I<span style=\"text-decoration: line-through;\">'ll<\/span> link to <a href=\"http:\/\/chronicle.com\/article\/Counting-on-Google-Books\/125735\/\">here<\/a>, and in later posts Ben or Mark will probably be addressing some of the particular studies, like the estimates of English vocabulary size, as well as the wider implications of the enterprise. For now, some highlights:<\/p>\n<p><!--more--><\/p>\n<p>1. The team: The authors include some Google Books researchers (Jon Orwant, Peter Norvig, Matthew Gray and Dan Clancy), a group of people associated with Harvard bioscience programs (Jean-Baptiste Michel, Erez Lieberman Aiden, Aviva Aiden, Adrien Veres, and Martin Nowak), as well as Steve Pinker of Harvard and Joe Pickett of the American Heritage Dictionary, Dale Hoiberg of the Encyclopedia Britannica, and Yuan Kui Shen of the MIT AI lab. So it's dominated by scientists and engineers, and is framed in scientific (or -istic) terms: the enterprise is described, unwisely, I think, with the name \"culturomics\" (that's a long <em>o<\/em>, as in <em>genome<\/em>). That's apt to put some humanists off, but doesn't affect the implications of the paper one way or the other. I have more to say about this in the <em>Chronicle<\/em> article.<\/p>\n<p>2. The research exercises take various forms. In one, the researchers computed the rates at which irregular English verbs became regular over the past two centuries. In another, very ingenious, they used quantitative methods to detect the suppression of the names of artists and intellectuals in books published in Nazi Germany, the Stalinist Soviet Union, and contemporary China. 
<p>The paper also presents a number of n-gram trajectories &#8212; that is, graphs that show the relative frequency of words or n-grams (up to five words long) over the period 1800-2000. ("Relative frequency" here means the ratio of tokens of the expression in a given year to the total number of tokens in that year.) By way of example, they plot the changing fame of Galileo, Dickens, Freud, and Einstein; the frequency of "steak," "hamburger," "pizza," and "pasta"; and the changing frequency of "influenza" (it peaks, in the least surprising result of the study, in years of epidemics).</p>

<p>The big news is that Google has set up a <a href="http://ngrams.googlelabs.com/">site</a> called the Google Books Ngram Viewer where the public can enter words or n-grams (up to 5) for any period and corpus and see the resulting graph. They've also announced that the entire dataset of n-grams will be made available for download. Some reports have interpreted this as meaning that Google is making the entire corpus available. It isn't, alas &#8212; not even the pre-1923 portion of the corpus that's in the public domain. One can hope&#8230;</p>

<p>At present, that's all you can do with this. You can't do many of the things that you can do with other corpora: you can't ask for a list of the words that follow <em>traditional</em> for each decade from 1900 to 2000 in order of descending frequency, or restrict a search for <em>bronzino</em> to paragraphs that contain <em>fish</em> and don't contain <em>painting</em>, etc. And while Lieberman Aiden and Michel made an impressive effort to purge the subcorpus of the metadata errors that have <a href="http://languagelog.ldc.upenn.edu/nll/?p=1701">plagued</a> Google Books, you can't sort books by genre or topic. The researchers do plan to make available a more robust search interface for the corpus, though it's unlikely that users will be able to replicate a lot of the computationally heavy-duty exercises that the researchers report in the paper. But my sense is that even this limited functionality will be interesting and useful to a lot of humanists and historians, even if linguists won't be really happy until they have the whole data set to play with. Again, I have more on this in the <em>Chronicle</em> essay.</p>
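<p>Once the n-gram tables can be downloaded, though, the first of those queries becomes a small scripting job. Here's a minimal sketch that tallies the words following <em>traditional</em> per decade from a 2-gram file, assuming (as a guess on my part) tab-separated lines that begin with the n-gram, the year, and a match count; check the released files for the exact column layout.</p>

<pre><code># Hedged sketch: from a downloaded 2-gram file, list the words that follow
# "traditional" in each decade, in descending order of frequency. Assumed
# line format: ngram TAB year TAB match_count (later columns ignored).
import csv
from collections import Counter, defaultdict

def followers_by_decade(path, head="traditional", start=1900, end=2000):
    by_decade = defaultdict(Counter)
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            ngram, year, count = row[0], int(row[1]), int(row[2])
            words = ngram.split()
            if len(words) == 2 and words[0] == head and start <= year <= end:
                # Bucket the count into the enclosing decade.
                by_decade[year // 10 * 10][words[1]] += count
    # Words per decade, most frequent first.
    return {decade: [w for w, _ in counter.most_common()]
            for decade, counter in sorted(by_decade.items())}
</code></pre>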
<p>That's all for now&#8230; watch this space.</p>

<p><strong>12/17:</strong> I was thinking here of the ordinary, technologically limited historian or English professor who logs into the Google Labs site to use the database. With a downloaded corpus, of course, it would be a different story. Jean-Baptiste and Erez wrote me to point out that</p>

<p><span style="color: #000080;">The only part of our paper that could not be done on a small cluster is the computation of the n-gram tables, which is the data that we provide. Thus, any user with the motivation and the computational skills could replicate our work&#8230;. To be exact, absolutely all the analysis we do in this paper can be done on one laptop &#8211; not even a cluster. (the 1-3 grams in English fit easily onto a hard drive, and very little computing power is needed for the computation)</span></p>
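<p>For what it's worth, the laptop-scale claim is easy to believe: drawing a trajectory of the kind the Ngram Viewer produces is a single pass over the 1-gram table plus a normalization by the total tokens per year. A sketch, again assuming the tab-separated layout above, plus a hypothetical per-year totals file whose naming and format are mine, not theirs:</p>

<pre><code># Sketch: relative frequency of a single word per year, from a downloaded
# 1-gram table plus a totals file of "year TAB total_tokens" lines
# (an assumed format; the actual release may organize totals differently).
import csv

def trajectory(onegram_path, totals_path, word):
    totals = {}  # year -> total tokens in the corpus that year
    with open(totals_path, encoding="utf-8") as f:
        for year, total in csv.reader(f, delimiter="\t"):
            totals[int(year)] = int(total)

    counts = {}  # year -> tokens of `word` that year
    with open(onegram_path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if row[0] == word:
                y = int(row[1])
                counts[y] = counts.get(y, 0) + int(row[2])

    # Relative frequency: tokens of the word divided by all tokens that year.
    return {y: counts.get(y, 0) / totals[y] for y in sorted(totals)}
</code></pre>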
<p>I think the interesting difference here is how one imagines these data being used &#8212; by technologically sophisticated people working in humanities labs or in subgroups within humanities departments or divisions, say, or by the ordinary humanist who is curious about some cultural or linguistic trend but isn't about to take the time to write a routine to address it. Of course the hope might be that users of the second sort &#8212; particularly the students &#8212; will move into the first category; that's why I described the present system as a kind of "gateway drug" in my <em>Chronicle</em> article.</p>