It's a commonplace observation that survey results depend on how questions are worded. But I don't think I've ever seen a larger effect of synonym-substitution than the one reported by a recent CBS News/New York Times poll about the U.S. military's DADT ("Don't Ask Don't Tell") policy.
Question: Do you favor or oppose ___ serving in the military?
[Table: percent favoring vs. opposing, by question wording — "Homosexuals" vs. "Gay Men & Lesbians"]
On the face of it, the two wordings of the question seem to refer to exactly the same set of circumstances. But do they? Do (many) people these days think, for example, that "gay men and lesbians" refers to sexual orientation, while "homosexuals" refers to sexual practices? Or is this large difference in the distribution of opinions purely a question of connotation?
A similarly striking effect was seen in responses to the question "Do you favor or oppose ___ being allowed to serve openly?" Changing the description from "homosexuals" to "gay men and lesbians" swung opinion in favor from 44% to 58%, and opinion in opposition from 42% to 28%:
| | "Homosexuals" | "Gay Men & Lesbians" |
| Favor | 44% | 58% |
| Oppose | 42% | 28% |
According to the (rather skimpy) details provided,
This poll was conducted among a random sample of 1,084 adults nationwide, interviewed by telephone February 5-10, 2010. Phone numbers were dialed from random digit dial samples of both standard land-line and cell phones. The error due to sampling for results based on the entire sample could be plus or minus three percentage points.
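A quick back-of-the-envelope check of that "plus or minus three percentage points" figure, and of whether the wording gap could be sampling noise. This is a sketch, not part of the poll's published methodology: it assumes simple random sampling, the conservative p = 0.5 that pollsters typically use, and that the wording experiment split the 1,084 respondents into two halves of roughly 542 each (the poll report doesn't state the split).

```python
import math

# 95% margin of error for a simple random sample, using the
# conservative p = 0.5 assumption.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 1,084 adults: matches the reported +/- 3 points.
print(round(100 * margin_of_error(1084), 1))  # ~3.0 percentage points

# A wording experiment splits the sample, so each half-sample
# (assumed here to be ~542 respondents) has a larger margin.
print(round(100 * margin_of_error(542), 1))   # ~4.2 percentage points

# The 14-point gap between 44% and 58% "favor" is still well outside
# the sampling error for a difference of two independent half-samples:
diff_se = math.sqrt(0.44 * 0.56 / 542 + 0.58 * 0.42 / 542)
print(round(100 * 1.96 * diff_se, 1))         # ~5.9 points at 95%
```

So even on the half-sample assumption, the 14-point wording effect comfortably exceeds sampling error.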
No breakdown by sex or age is given.
A cynical commenter at TPM suggested the control experiment of asking people if they favor "heterosexuals" serving openly in the military.
One of the classic discussions of such effects is Tom Smith, "That Which We Call Welfare by Any Other Name Would Smell Sweeter: An Analysis of the Impact of Question Wording on Response Patterns", The Public Opinion Quarterly 51(1) 1987:
A recent experiment on the General Social Survey (GSS) comparing three different versions of spending priorities scales revealed systematic differences by question form and some large differences between particular referents used. The largest observed difference in support for spending was between the traditional category "welfare" and the two variant forms "assistance for the poor" and "caring for the poor." Two of the three forms used in the 1984 experiment (excluding "caring") were again employed on the 1985 survey and again showed a large effect. When we compared these results to other surveys that (1) employed some type of program priority question and (2) inquired about "welfare" (or some variation that used this term) and about "the poor," "the unemployed," or "food stamps" (in one variation or another), we found that the effects were large, similar in magnitude, and persistent across time and survey organization. As Table 1 shows, on average support for more assistance for the poor is 39 percentage points higher than for welfare. Similarly, support for the unemployed always exceeds support for welfare (averaging 12 percentage points), although the margin is somewhat variable. Only support for food stamps is as low or lower than support for welfare.
But the denotations of "welfare" and "caring for the poor" are arguably different — one is a bureaucracy, and the other is a moral obligation.
[Update — Differences of this general kind have been noted, and to some extent studied, since the 1940s. There are a number of obvious categories of explanation, including (1) different actual denotations of the words and phrases used, at least among the people surveyed, (2) evocation of differently-evaluated frames by the metaphorical associations of words and phrases, (3) simple positive vs. negative connotations or associations of particular words or phrases, independent of any difference in denotation or in metaphorical frame.
The empirical studies that I've seen don't seem to distinguish these different types of effects very carefully. For example, the study cited above presents a lot of facts about how differently people respond to different ways of phrasing questions about "welfare"-like programs, but doesn't do (or cite) any empirical studies of why they respond in these different ways.
There are also well-known effects of context, including especially the previous questions in the survey, the characteristics of the people asking the questions, and so on.
I'm not very familiar with this research area, but what I know of it suggests that linguists (including psycholinguists, sociolinguists and so on) have not played as much of a role in it as might be appropriate.]
[Update #2 — this particular poll result was also discussed in the NYT's Caucus blog yesterday, with some additional break-down by sub-group:
Democrats in the poll seemed particularly swayed by the wording. Seventy-nine percent of Democrats said they support permitting gay men and lesbians to serve openly. Fewer Democrats, however, just 43 percent, said they were in favor of allowing homosexuals to serve openly. Republicans and independents varied less between the two terms.
The blog post promises that "Complete poll results and article will be available this evening at www.nytimes.com" (i.e. yesterday evening), but nothing seems to have shown up yet.]