Response rate (survey)


In survey research, response rate, also known as completion rate or return rate, is the number of people who answered the survey divided by the number of people in the sample. It is usually expressed in the form of a percentage. The term is also used in direct marketing to refer to the number of people who responded to an offer.
The general consensus in academic surveys is to choose one of the six definitions standardized by the American Association for Public Opinion Research (AAPOR). These definitions are endorsed by the National Research Council and the Journal of the American Medical Association, among other well-recognized institutions. They are:
  1. Response Rate 1 (RR1) – or the minimum response rate, is the number of complete interviews divided by the number of interviews plus the number of non-interviews plus all cases of unknown eligibility.
  2. Response Rate 2 (RR2) – is calculated as RR1, but with partial interviews also counted as respondents.
  3. Response Rate 3 (RR3) – estimates what proportion of the cases of unknown eligibility is actually eligible; cases estimated to be ineligible are excluded from the denominator. The method of estimation *must* be explicitly stated with RR3.
  4. Response Rate 4 (RR4) – allocates cases of unknown eligibility as in RR3, but also includes partial interviews as respondents, as in RR2.
  5. Response Rate 5 (RR5) – is a special case of RR3 that assumes there are no eligible cases among the cases of unknown eligibility; it is appropriate only when that assumption is valid, or in the rare case in which there are no cases of unknown eligibility at all.
  6. Response Rate 6 (RR6) – makes the same assumption as RR5 and also includes partial interviews as respondents. RR6 represents the maximum response rate.
The six AAPOR definitions vary in whether partially completed surveys are counted as responses and in how researchers treat cases of unknown eligibility. Definition #1, for example, does NOT include partially completed surveys in the numerator, while definition #2 does. Definitions 3–6 deal with the unknown eligibility of potential respondents who could not be contacted. Suppose, for example, that no one answers the door at 10 of the houses you attempted to survey. For 5 of them, neighbors have told you who lives there, so you already know the occupants qualify for your survey; the other 5 are completely unknown, and the occupants may or may not fit your target population. Whether these unknown cases count against your response rate depends on which definition you use.
Example: if 1,000 surveys were sent by mail, and 257 were successfully completed and returned, then the response rate would be 25.7%.
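To make the six definitions concrete, here is a minimal Python sketch that computes them from case-disposition counts. The function and parameter names are illustrative, and `e` stands for the estimated proportion of unknown-eligibility cases that are eligible, which must be reported alongside RR3 and RR4.

```python
def aapor_response_rates(complete, partial, non_interview, unknown, e=1.0):
    """Compute the six AAPOR response rates from case-disposition counts.

    complete      -- complete interviews
    partial       -- partial interviews
    non_interview -- eligible non-interviews (refusals, break-offs, non-contacts, other)
    unknown       -- cases of unknown eligibility
    e             -- estimated proportion of unknown cases that are eligible
    """
    base = complete + partial + non_interview
    return {
        "RR1": complete / (base + unknown),                 # minimum response rate
        "RR2": (complete + partial) / (base + unknown),     # RR1 plus partials
        "RR3": complete / (base + e * unknown),             # estimated eligibility
        "RR4": (complete + partial) / (base + e * unknown),
        "RR5": complete / base,                             # no unknowns assumed eligible
        "RR6": (complete + partial) / base,                 # maximum response rate
    }

# The mail example above: 257 completes out of 1,000 surveys sent,
# treating all 743 non-returns as eligible non-interviews.
print(aapor_response_rates(complete=257, partial=0, non_interview=743, unknown=0))
# RR1 through RR6 all equal 0.257, i.e. 25.7%, since there are no partials or unknowns.
```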

Importance

A survey's response rate is the result of dividing the number of people who were interviewed by the total number of people in the sample who were eligible to participate and should have been interviewed. A low response rate can give rise to sampling bias if nonresponse is unequal among participants with regard to exposure and/or outcome. Such bias is known as nonresponse bias.
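The simulation below is a small illustrative sketch (the scenario and all numbers are assumptions, not drawn from any study cited here) of how nonresponse bias arises when the people who decline to respond differ systematically from those who respond:

```python
import random

random.seed(0)

N = 100_000
# 30% of the population truly has the outcome of interest.
population = [random.random() < 0.30 for _ in range(N)]

# Nonresponse depends on the outcome: only 40% of cases respond,
# versus 70% of non-cases.
responders = [y for y in population if random.random() < (0.40 if y else 0.70)]

print(f"response rate: {len(responders) / N:.1%}")                # ~61%
print(f"true rate:     {sum(population) / N:.1%}")                # ~30%
print(f"observed rate: {sum(responders) / len(responders):.1%}")  # ~20%, biased low
```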
For many years, a survey's response rate was viewed as an important indicator of survey quality. Many observers presumed that higher response rates ensure more accurate survey results. But because measuring the relation between nonresponse and the accuracy of a survey statistic is complex and expensive, few rigorously designed studies provided empirical evidence to document the consequences of lower response rates until recently.
Such studies have now been conducted, and several conclude that the expense of increasing the response rate is frequently not justified by the resulting gain in survey accuracy.
An early example of such a finding was reported by Visser, Krosnick, Marquette, and Curtin, who showed that surveys with lower response rates yielded more accurate measurements than surveys with higher response rates. In another study, Keeter et al. compared the results of a 5-day survey employing the Pew Research Center's usual methodology with results from a more rigorous survey conducted over a much longer field period that achieved a higher response rate of 50%. In 77 out of 84 comparisons, the two surveys yielded results that were statistically indistinguishable. Among the items that did show significant differences across the two surveys, the proportions of people giving a particular answer differed by 4 to 8 percentage points.
A study by Curtin et al. tested the effect of lower response rates on estimates of the Index of Consumer Sentiment (ICS). They assessed the impact of excluding respondents who initially refused to cooperate, respondents who required more than five calls to complete the interview, and those who required more than two calls. They found that excluding these respondent groups had no effect on ICS estimates based on monthly samples of hundreds of respondents; for yearly estimates, based on thousands of respondents, excluding people who required more calls had only a very small effect.
Holbrook et al. assessed whether lower response rates are associated with less unweighted demographic representativeness of a sample. Examining the results of 81 national surveys with response rates varying from 5 percent to 54 percent, they found that surveys with much lower response rates were only slightly less demographically representative within the range examined.
Choung et al. looked at the community response rate to a mailed questionnaire about functional gastrointestinal disorders. The response rate to their community survey was 52%. They then took a random sample of 428 responders and 295 nonresponders for medical record abstraction and compared the two groups. They found that responders had a significantly higher body mass index and more health-care-seeking behavior for non-GI problems. However, except for diverticulosis and skin diseases, there was no significant difference between responders and nonresponders in terms of any gastrointestinal symptom or specific medical diagnosis.
Dvir and Gafni examined whether consumer response rate is influenced by the amount of information provided. In a series of large-scale web experiments, they compared variants of marketing web pages, focusing on how changes in the amount of content affect users' willingness to provide their e-mail address. The results showed significantly higher response rates on the shorter pages, indicating that, contrary to earlier work, not all response-rate theories hold online.
Nevertheless, in spite of these recent research studies, a higher response rate is preferable because the missing data may not be random. There is no satisfactory statistical solution for dealing with missing data that may not be random. Assuming an extreme bias in the responders is one suggested method of dealing with low survey response rates. A high response rate from a small, random sample is preferable to a low response rate from a large sample.
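One way to make the "assume an extreme bias" suggestion concrete is a worst-case bounds calculation. The sketch below is an illustrative example for a yes/no survey item (the counts are hypothetical, and this is one common sensitivity-analysis approach rather than a method prescribed above): treat every nonresponder first as a "no" and then as a "yes" to bracket the true proportion.

```python
def extreme_bounds(yes_responses, respondents, sample_size):
    """Bracket a yes/no proportion under extreme nonresponse assumptions.

    Lower bound: every nonresponder would have answered 'no'.
    Upper bound: every nonresponder would have answered 'yes'.
    """
    nonresponders = sample_size - respondents
    lower = yes_responses / sample_size
    upper = (yes_responses + nonresponders) / sample_size
    return lower, upper

# Hypothetical counts for the 1,000-survey mail example: 257 responded,
# of whom, say, 120 answered 'yes'.
lo, hi = extreme_bounds(yes_responses=120, respondents=257, sample_size=1000)
print(f"true proportion lies between {lo:.1%} and {hi:.1%}")  # 12.0% and 86.3%
```

The width of the resulting interval shows why a low response rate is hard to salvage: the fewer the responders, the less the observed data alone can constrain the true value.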