Question and Questionnaire Design - Stanford University

Jon A. Krosnick and Stanley Presser

Although respondent learning can be advantageous, it may be disadvantageous in the case of screening items — those with follow-up questions that are asked only if the original item was answered a particular way (usually "yes"). After learning that answering such questions in a certain way can lengthen the interview, respondents may falsely answer later screening items in order to avoid the contingent questions. Several experiments have yielded evidence suggesting this happens (Jensen, Watanabe, & Richters, 1999; Lucas et al., 1999; Duan, Alegria, Canino, McGuire, & Takeuchi, 2007; Kreuter, McCulloch, & Presser, 2009). Although it is possible that the reduction in "yes" answers to later screening items in these experiments was due to improved reporting (because respondents better understand later questions), the weight of the evidence suggests this was not the case.[12] Thus, measurement for multiple screening items is likely to be improved by grouping them together and asking contingent items only after all the screening questions have been administered.[13] (A schematic sketch of this grouped design appears after the notes below.)

Later items in a questionnaire may also suffer from fatigue effects if respondents become tired. This possibility has been examined in a variety of experiments assessing the impact on data quality of earlier versus later item placement. Consistent with expectations about fatigue and satisficing, several studies have found higher missing data levels, greater agreement, less detailed answers, or less differentiation among items when they appear later in a questionnaire compared to the same items placed earlier (Johnson, Sieveking, & Clanton, 1974; Kraut, Wolfson, & Rothenberg, 1975; Herzog & Bachman, 1981; Backor, Golde, & Nie, 2007). Most of the studies reporting such effects involved self-administered questionnaires. Two experiments that found little, if any, difference by item position involved interviewer-administered surveys (Clancy & Wachsler, 1971; Burchell & Marsh, 1992). The possibility that fatigue effects might be slower to set in during interviewer-administered surveys than in self-administered surveys needs to be tested directly in future research.

9.9.2. Semantic Order Effects

Throughout a questionnaire, items should flow coherently, which usually requires that items on related topics be grouped together.[14] Coherent grouping can facilitate respondents' cognitive processing, e.g., by specifying the meaning of a question more clearly or making retrieval from memory easier. Consistent with this logic, Knowles (1988; see also Knowles & Byers, 1996) found that serial order affected item

[12] In a similar vein, Peytchev, Couper, McCabe, and Crawford (2006) found that visible skip instructions in the scrolling version of a web survey led more respondents to choose a response that avoided subsequent questions for an item on alcohol use (though not for one on tobacco use), compared to a page version with invisible skips. For findings on related issues, see Gfroerer, Lessler, and Parsley (1997).
[13] Paper-and-pencil administration constitutes an exception to this rule, as the skip patterns entailed by the recommendation are apt to produce significant error in that mode.
[14] Although context can affect judgments about whether or not items are related, this effect is likely to be restricted to judgments about items on the same or similar topics.
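The grouped-screener recommendation can be made concrete with a small sketch. The item names and branching rules below are hypothetical, not taken from the chapter; the code only contrasts the two orderings discussed in the text: an interleaved design, in which each screener is immediately followed by its contingent questions, and the recommended grouped design, in which every screener is asked before any contingent item.

```python
# Hypothetical sketch: contrasting an interleaved screener design with the
# grouped design recommended in the text. Item names are invented for
# illustration; neither ordering is taken from the chapter itself.

SCREENERS = {
    "smoked_past_year": ["cigarettes_per_day", "tried_to_quit"],
    "drank_past_year": ["drinks_per_week", "binge_episodes"],
    "exercised_past_week": ["minutes_per_session", "usual_activity"],
}


def interleaved_order():
    """Each screener is followed at once by its contingent items, so a
    respondent can learn that answering "no" shortens the interview."""
    order = []
    for screener, followups in SCREENERS.items():
        order.append(screener)
        order.extend(f"{item} [only if {screener} = yes]" for item in followups)
    return order


def grouped_order():
    """All screeners come first; contingent items are administered only
    after the last screener, so screening answers cannot shorten that
    section of the interview."""
    order = list(SCREENERS)  # all screening items, in their original order
    for screener, followups in SCREENERS.items():
        order.extend(f"{item} [only if {screener} = yes]" for item in followups)
    return order


if __name__ == "__main__":
    print("Interleaved design:")
    for question in interleaved_order():
        print("  ", question)
    print("Grouped design (recommended):")
    for question in grouped_order():
        print("  ", question)
```

Running the sketch prints the two question orders side by side. Note 13's caveat applies to the grouped layout in paper-and-pencil administration, since routing respondents past the block of screeners requires more complex skip instructions in that mode.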
