Survey response effects are variations in survey responses that result from seemingly inconsequential aspects of survey design and administration. Susceptibility to these effects varies with the stability of one's beliefs: those without a strong attitude on an issue, for instance, would be more prone to survey response effects than those strongly for or against the issue. These effects can be broadly grouped as consistency or contrast effects. Consistency effects lead to survey responses that agree with one another, not to be confused with the identically-named term for the phenomenon in which respondents intentionally try to make their survey responses agree with one another. Contrast effects, on the other hand, lead to opposing responses.
Common survey response effects
Endorsement effects occur when reference to a political figure or group changes how people respond to questions on a policy issue.
Race of interviewer effects occur when the race of an interviewer influences survey responses.
Reference group effects occur when respondents are asked about their affiliation with a particular group and thereafter report attitudes that comport with that group's ideals. For example, a Republican might give more conservative responses if first asked which political party she identifies with. A similar phenomenon, stereotype threat, has been described by Claude Steele and written about in Malcolm Gladwell's essay The Art of Failure.
Priming effects of the news occur when news coverage suggests that the public should use certain issues to gauge their evaluation of other issues, which might be assessed in survey questions. Priming is an extension of agenda setting, the process by which news organizations increase the salience of particular ideas, making them more likely to be recalled later. These effects are at play during campaigns when candidates reiterate several key ideas that represent them to the media, priming the electorate to readily recall those ideas over others when stepping into the voting booth.
Question order effects occur when the wording or ideas provoked by a survey question linger in the mind and affect responses to subsequent questions. For example, questions about personal financial status might affect responses to questions that evaluate incumbent politicians.
Affective priming occurs when respondents who are asked a question about an attitude before being asked to provide arguments about a position give more affective arguments, that is, arguments relating to mood and feeling, than rational arguments.
Haptic metaphor effects occur when physical touch sensations affect responses in a survey or experiment. For example, reviewing job application materials on a heavier clipboard can cause job candidates to seem more important, and briefly holding a warm beverage before an interpersonal interaction can lead people to perceive others as having a warmer personality than after briefly holding a cold beverage.
Significance
In general, people answer survey questions with remarkable inconsistency; that is, their responses are unstable. Response instability occurs when people are asked the same questions in repeated surveys and give conflicting answers. To explain this instability, John Zaller and Stanley Feldman argue that how people respond to surveys depends on which schemas, or considerations, are most readily available in the mind. They claim that an attitude about a given issue, at a given time, is a reflection of an average of the considerations in the mind at that time. To apply a metaphor, attitude is climate; considerations are weather features. And since the pool of considerations in one's mind is constantly changing, attitudes are largely unstable over time. Due to this instability, survey response effects can significantly influence the outcome of surveys and should be considered when using surveys to inform policy making.
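The Zaller-Feldman account can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy model, not a published specification: it assumes each respondent holds a fixed pool of considerations, coded on a scale from -1 (strongly opposed) to +1 (strongly supportive), and that each survey response is the mean of a small random sample of those considerations.

```python
import random

def sample_response(considerations, k=3, rng=random):
    """Toy Zaller-Feldman sketch: a survey response is the mean of a
    small sample of the considerations currently accessible in memory."""
    sampled = rng.sample(considerations, k)
    return sum(sampled) / k

# Hypothetical respondent with mixed considerations on one issue,
# coded from -1 (strongly opposed) to +1 (strongly supportive).
considerations = [-1.0, -0.5, 0.2, 0.6, 0.8, 1.0]

# Repeated "interviews" of the same respondent yield varying answers,
# illustrating response instability without any change in underlying beliefs.
rng = random.Random(0)
answers = [sample_response(considerations, rng=rng) for _ in range(5)]
```

Because the sampled considerations differ from interview to interview, the same respondent's answers drift even though the underlying pool never changes, which is the mechanism Zaller and Feldman invoke to explain response instability.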
Contradictory theories of survey response instability
Philip Converse has attributed survey response instability to respondents lacking meaningful beliefs, while others have attributed it to measurement error and vague language in surveys.