People — especially Americans — are ignorant. This is something that Everyone Knows, because we read or hear about it from time to time in the mass media. Thus we can listen to Robin Young tell us on NPR's Here and Now that
A new survey conducted by Chicago's McCormick Tribune Freedom Museum, which has yet to open, finds that only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members.
Or we can read in the New York Times that
Diane Ravitch, an education historian […], said she was particularly disturbed by the fact that only 2 percent of 12th graders correctly answered a question concerning Brown v. Board of Education, which she called “very likely the most important decision” of the United States Supreme Court in the past seven decades.
And this is not just journalistic sensationalizing, because we can get similar opinions directly from the scholarly literature in the social sciences. Thus Ilya Somin, "Voter ignorance and the democratic ideal", Critical Review: A Journal of Politics and Society, 12:4, 413-458, 1998:
Overall, close to a third of Americans can be categorized as "know-nothings" who are almost completely ignorant of relevant political information (Bennett 1988)—which is not, by any means, to suggest that the other two-thirds are well informed.
Or Robert C. Luskin, "From Denial to Extenuation (and Finally Beyond): Political Sophistication and Citizen Performance." In Thinking about Political Psychology, James H. Kuklinski (Ed.), 2002:
The average American's ability to place the Democratic and Republican parties and "liberals" and "conservatives" correctly on issue dimensions and the two parties on a liberal-conservative dimension scarcely exceeds and indeed sometimes falls short of what could be achieved by blind guessing. The verdict is stunningly, depressingly clear: most people know very little about politics, and the distribution behind that statement has changed little if at all over the survey era.
But I've always been skeptical of this particular received idea. In the passage quoted above, Robin Young states the survey result incorrectly: in fact, 73% of respondents, not 28%, were able to name one of the five freedoms guaranteed by the First Amendment. And the Simpsons comparison is misleading to boot, since only 65% were able to name one of the Simpsons characters.
In the cited New York Times article, Diane Ravitch is referring to the 2010 NAEP 12th grade U.S. History test, in which 82%, not 2%, of 12th graders correctly identified Brown v. Board of Education.
And I recently heard a talk by Arthur Lupia ("Challenges and Opportunities in Open-Ended Coding", presented at a workshop on The Future of Survey Research) that made me even less willing to accept at face value claims of the form "Fewer than X% of Americans Know Y". Arthur reported on some forensic analysis, so to speak, of the internal records of the American National Election Study. He learned that the standard methodology, used in this and other surveys for asking, recording, and scoring open-ended questions (and especially open-ended recall questions), systematically underestimates respondents' knowledge.
The way it works is that the survey designers craft a question like the following (asked at a time when William Rehnquist was the Chief Justice of the United States):
“Now we have a set of questions concerning various public figures. We want to see how much information about them gets out to the public from television, newspapers and the like….
What about William Rehnquist – What job or political office does he NOW hold?”
The answers to such open-ended questions are recorded — as audio recordings and/or as notes taken by the interviewer — and these records are coded, later on, by hired coders.
The survey designers give these coders very specific instructions about what counts as right and wrong in the answers. In the case of the question about William Rehnquist, the criteria for an answer to be judged correct were mentions of both "chief justice" and "Supreme Court". These terms had to be mentioned explicitly, so all of the following (actual answers) were counted as wrong:
Supreme Court justice. The main one.
He’s the senior judge on the Supreme Court.
He is the Supreme Court justice in charge.
He’s the head of the Supreme Court.
He’s top man in the Supreme Court.
Supreme Court justice, head.
Supreme Court justice. The head guy.
Head of Supreme Court.
Supreme Court justice head honcho.
Similarly, the technically correct answer ("Chief Justice of the United States") would also have been scored as wrong (I'm not certain whether that answer actually occurred in the survey responses).
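To make the mechanism concrete, here's a minimal sketch, in Python, of how a strict keyword-matching rule of the kind described above would behave. The function name and the exact matching logic are my own illustration, not anything taken from the ANES codebooks:

```python
# Sketch of a strict keyword-based scoring rule of the kind described above.
# This is an illustration of the idea, not actual ANES coding software.

def strict_score(answer: str) -> bool:
    """Count an answer as correct only if it explicitly mentions
    both 'chief justice' and 'Supreme Court'."""
    text = answer.lower()
    return "chief justice" in text and "supreme court" in text

responses = [
    "Supreme Court justice. The main one.",
    "He's the senior judge on the Supreme Court.",
    "He is the Supreme Court justice in charge.",
    "He's the head of the Supreme Court.",
    "He's top man in the Supreme Court.",
    "Supreme Court justice, head.",
    "Supreme Court justice. The head guy.",
    "Head of Supreme Court.",
    "Supreme Court justice head honcho.",
    "Chief Justice of the United States",   # the technically correct title
]

for r in responses:
    print(f"{strict_score(r)!s:>5}  {r}")
# Every line prints False: the rule scores all of these answers,
# including the official title, as wrong.
```

Under a rule like this, every one of the answers above comes out "wrong", even though a human reader would count most of them as showing exactly the knowledge the question was probing.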
Prof. Lupia explained, in a persuasive way, how the American National Election Study has changed its practices to minimize such problems. His list of fixes includes:
- Increased documentation at all stages
- Evaluation at many stages
- Increased procedural transparency
- High inter-coder reliability
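The last item on that list is the one that can be checked quantitatively. As a rough illustration (the data are made up, and the choice of Cohen's kappa is mine, not a description of ANES practice), here is one standard way to measure chance-corrected agreement between two coders who scored the same set of answers:

```python
# A minimal sketch of one way to quantify inter-coder reliability:
# Cohen's kappa for two coders assigning categorical codes to the same items.
# The codes below are invented for illustration.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical "correct"/"incorrect" codes from two coders for ten answers.
coder_1 = ["correct", "correct", "incorrect", "correct", "incorrect",
           "correct", "incorrect", "correct", "correct", "incorrect"]
coder_2 = ["correct", "incorrect", "incorrect", "correct", "incorrect",
           "correct", "incorrect", "correct", "correct", "correct"]

print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.58 for these made-up codes
```

A kappa near 1 means the coders agree far more often than chance would predict; values near 0 mean the codes are little better than guessing, which is a warning that the scoring rules, rather than the respondents, may be the problem.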
So here are the morals I draw from all this:

- When you read or hear in the mass media that "Only X% of Americans know Y", don't believe it without checking the references: it's probably false even as a report of the survey statistics.
- When you read survey results claiming that "Only X% of Americans know Y", don't believe the claims unless the survey publishes (a) the exact questions asked; (b) the specific coding instructions used to score the answers; (c) a measure of inter-annotator agreement in blind tests; and (d) the raw response transcripts.
Future ANES releases should meet these criteria, but it seems that very few other surveys do.