#CompuPolitics


A couple of months ago, I pointed out that entertainment industry folks are tracking Justin Bieber's popularity using automated sentiment analysis, and I used that as a jumping-off point for some comments about language technology and social media. Here I am again, but suddenly it's not just Justin's bank account we're talking about; it's the future of the country.

As the Republican primary season marches along, a novel use of technology in politics is evolving even more rapidly, and arguably in a more interesting way, than the race itself: the analysis of social media to take the pulse of public opinion about candidates. In addition to simply tracking mentions of political candidates, people are starting to suggest that volume and sentiment analysis on tweets (and other social media, but Twitter is the poster child here) might produce useful information about people's viewpoints, or even predict the success of political campaigns. Indeed, it's been suggested that numbers derived from Twitter traffic might be better than polls, or at least better than pundits. (Is that much of a bar to set? Never mind.)

Although I used Justin and his Bieberettes to get the discussion rolling back in November, I definitely had politics in mind:

My worry is compounded by the fact that social media sentiment analyses are being presented without the basic caveats you invoke in related polling scenarios. When you analyze social media you have not only a surfeit of conventional accuracy concerns like sampling error and selection bias (how well does the population of people whose posts you're analyzing represent the population you're trying to describe?), but also the problem of "automation bias" — in this case trusting that the automatic text analysis is correct. Yet the very same news organization that reports traditional opinion poll results with error bars and a careful note about the sample size will present Twitter sentiment analysis numbers as raw percentages, without the slightest hint of qualification.

I'll say it again: for some reason, when a piece of technology enters the picture, rigor seems to go out the window. Have you been looking at the "plus or minus" values for "percentage positive" and "percentage negative" numbers about the candidates, when the numbers come from social media? Me neither. That's because nobody is producing them.

Now, it's not like it's necessarily easy to get this right. We're in new territory here. When you take a very large sample, which is what you get from social media, standard statistical practice is actually going to give you very tight confidence intervals. Indeed, perhaps it is more misleading to include plus-or-minus values with the wrong assumptions (e.g. the assumption that every tweet labeled "negative" really is negative) than it is to leave them out altogether. But until we come to grips with the nature of this new beast, it seems like putting in a few extra caveats might be in order.
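To put a number on that, here's a minimal sketch (Python, with invented figures) of what the textbook calculation would report for a social-media-sized sample. The quarter-million-tweet count and the 40% "negative" rate are made up for illustration:

```python
import math

def naive_margin_of_error(p_hat, n, z=1.96):
    """Textbook 95% margin of error for a proportion, assuming
    every sentiment label in the sample is correct."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Invented numbers: 250,000 tweets, 40% of them labeled "negative".
p_hat, n = 0.40, 250_000
print(f"{p_hat:.0%} negative, +/- {naive_margin_of_error(p_hat, n):.2%}")
# prints: 40% negative, +/- 0.19%
```

A plus-or-minus of a fifth of a percentage point looks wonderfully precise, but it prices in none of the classifier's mistakes, which is exactly why a few extra caveats are in order.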

For example, I like the way Noah Smith put it when he got into this game on the early side, back in mid-2010, discussing his group's widely noted CMU study. He explicitly pointed out: "The results are noisy, as are the results of polls. Opinion pollsters have learned to compensate for these distortions, while we're still trying to identify and understand the noise in our data."

It seems to me that we're seeing a whole new domain of activity emerging here, the "compupolitics" of my title. This is related to, but I think also distinct from, computational political science: I'd say compupolitics is to computational political science as search engines are to information retrieval research. Some of it is about improving the quality of the underlying technology — for example, the algorithms that analyze tweets or Facebook postings to identify whether a candidate has been mentioned, whether the author is expressing positive or negative feelings, what the topics under discussion are, and so forth. That's the bread and butter of academic researchers and other language technology folks who are quickly entering this arena. But much of it is also about things like practical utility, scalability, turnaround time, user engagement, and so forth; i.e. issues connected with embedding technology in a real-world social setting.
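For readers who haven't peeked inside one of these systems, here is a deliberately naive sketch of the mention-plus-sentiment pipeline just described. The word lists, the example tweets, and the analyze function are all invented for illustration; real systems use far richer linguistic models:

```python
import re

# Tiny invented lexicons, purely for illustration.
POSITIVE = {"great", "win", "strong", "love"}
NEGATIVE = {"weak", "lose", "fail", "hate"}

def analyze(tweets, candidate):
    """Count tweets mentioning a candidate and label each one
    positive/negative/neutral by naive word-list lookup."""
    pattern = re.compile(re.escape(candidate), re.IGNORECASE)
    counts = {"mentions": 0, "positive": 0, "negative": 0, "neutral": 0}
    for tweet in tweets:
        if not pattern.search(tweet):
            continue  # no mention of the candidate
        counts["mentions"] += 1
        words = set(re.findall(r"[a-z']+", tweet.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        key = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        counts[key] += 1
    return counts

print(analyze(["Romney looked strong tonight", "I hate these attack ads"], "Romney"))
# -> {'mentions': 1, 'positive': 1, 'negative': 0, 'neutral': 0}
```

Even this toy version makes the automation-bias point vivid: sarcasm, negation ("not great" would score as positive here), and context all sail right past a word-list lookup.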

Interestingly, some of the same issues that quickly arose for search engines are likely to arrive, and to stay on for the duration, once compupolitics begins gaining real traction. It may already be happening. After search engines began making a difference in the world — in particular, a difference measured in dollars — adversarial information retrieval emerged out of a need to detect and counteract tricks like search engine spamming that people use to unfairly make their sites more prominent in search results. Similarly, we're already seeing concerns about gaming the system being raised by watchers of social media and politics. TechPresident blogger Nick Judd, for example, decries the new "game" of getting supporters to post and tweet and retweet, so that they show up higher on "toys" like the Washington Post's Mention Machine. Technology analyst Curt Monash reacts to the new Politico/Facebook partnership (wherein Facebook is sharing with Politico not just public but also private user messages mentioning Republican candidates) not with Fourth Amendment concerns, but with the regretful observation that "you can now stuff an online ballot box by spamming your friends in private conversation."

All that said, I think the old line about genies and bottles applies here. Yes, there's way too much breathlessness and hype out there, and not enough rigor. Yet. But, on the other hand, there is a growing community of people who are starting to examine these issues in a rigorous way. The text analysis problems are challenging, but significant energy is building in research on computational linguistics in a world of social media. Statisticians are coming up with clever statistical corrections to help compute more reliable averages, even if the underlying opinion analysis technology makes very large numbers of mistakes on individual cases. And my own research lately has involved developing a smartphone app that collects people's fine-grained reactions to statements in political debates, in real time. (You can sign up for a beta test invitation here. Tell your friends. ;) )
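To give a flavor of the kind of correction I mean, here is one classic textbook adjustment, the Rogan-Gladen estimator from epidemiology, offered as an illustrative sketch rather than as what any particular research group is doing. If you estimate your classifier's sensitivity and specificity on a modest hand-labeled sample, you can back out a corrected aggregate proportion even when many individual labels are wrong:

```python
def corrected_proportion(p_observed, sensitivity, specificity):
    """Rogan-Gladen-style correction: recover the true proportion of
    (say) negative tweets from the proportion the classifier reports,
    given its sensitivity (true positive rate) and specificity
    (true negative rate) estimated on a hand-labeled sample."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("classifier must beat chance for this to work")
    corrected = (p_observed + specificity - 1.0) / denom
    return min(max(corrected, 0.0), 1.0)  # clamp to [0, 1]

# Invented numbers: the classifier reports 40% negative, catches 80% of
# truly negative tweets, and wrongly flags 15% of non-negative ones.
print(corrected_proportion(0.40, sensitivity=0.80, specificity=0.85))
# -> about 0.385: the raw 40% slightly overstates negativity here
```

The catch, of course, is that sensitivity and specificity have to be estimated somewhere, which quietly reintroduces a bit of the hand-labeling that automation was supposed to spare us.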

The bottom line: we shouldn't be waving our hands and making extravagant claims, but we shouldn't be burying our heads in the sand and crying "no, no, no" either. There are a lot of issues that still need to be dealt with, when it comes to social media analysis and politics, but the introduction has already been made: 21st century, meet polling; polling, meet the 21st century. This is the future, and the question is not whether to embrace it, but when, and how, and how well we do it.

[I found Geoff's recent post about blog comments very inspiring, plus inviting comments on any topic related to politics strikes me as a Pandora-like act: not a hopeless mistake, exactly, but certainly not to be done lightly. In the spirit of the discussion, therefore, I invite you to register your comments on social media with the hashtag of the title, and perhaps I'll see if I have any success finding and analyzing them.]



