Question and Questionnaire Design - Stanford University

discourage people from doing so. As a result, data quality does not improve when such options are explicitly included in questions.

In order to distinguish "real" opinions from "non-attitudes," follow-up questions that measure attitude strength may be used. Many empirical investigations have confirmed that attitudes vary in strength, and the task respondents presumably face when confronting a "don't know" response option is to decide whether their attitude is sufficiently weak to be best described by that option. But because the appropriate cut point along the strength dimension is both hard to specify and unlikely to be specified uniformly across respondents, it seems preferable to encourage people to report their attitude and then describe where it falls along the strength continuum (see Krosnick, Boninger, Chuang, Berent, & Carnot, 1993, and Wegener, Downing, Krosnick, & Petty, 1995, for a discussion of the nature and measurement of the various dimensions of strength).

9.7. Social Desirability Response Bias

For many survey questions, respondents have no incentive to lie, so there is no reason to believe they intentionally misreport. On questions about socially desirable (or undesirable) matters, however, there are grounds for expecting such misreporting. Theoretical accounts from sociology (Goffman, 1959) and psychology (Schlenker & Weigold, 1989) assert that in pursuing goals in social interaction, people attempt to influence how others see them. Being viewed more favorably by others is likely to increase rewards and reduce punishments, which may motivate people not only to convey more favorable images of themselves than is warranted, but possibly even to deceive themselves as well (see Paulhus, 1984, 1986, 1991).

The most commonly cited evidence for misreporting in surveys comes from record-check studies, in which respondents' answers are compared against entries in official records. Using records as the validation standard, many studies found that more people falsely reported in the socially desirable direction than in the socially undesirable one (Parry & Crossley, 1950; Locander, Sudman, & Bradburn, 1976). For example, many more people said they voted when polling place records showed they did not vote than said they did not vote when records showed they did (Katosh & Traugott, 1981).

Errors in official records, as well as mistakes made in matching respondents to records, mean that the disparity between records and self-reports is not necessarily due to social desirability bias (see, for example, Presser, Traugott, & Traugott, 1990). However, several other approaches to studying the matter have also found evidence consistent with social desirability bias. One such approach, the "bogus pipeline" technique, involves telling people that the researcher can otherwise determine the correct answer to a question they will be asked, so they might as well answer it accurately (see, e.g., Roese & Jamieson, 1993). People are more willing to report illicit substance use under these conditions than in conventional circumstances (Evans, Hansen, & Mittlemark, 1977; Murray & Perry, 1987). Likewise, Caucasians
