Question and Questionnaire Design - Stanford University

Jon A. Krosnick and Stanley Presser

The answers shed light on the correspondence between the respondents' conception of work and the definition of work intended by the Current Population Survey. For further details of vignettes, see Martin (2004).

Respondent debriefings refer to the entire class of direct or indirect queries about survey items. Open questions on the order of "Why do you say that?" posed after a respondent has answered an item provide information about what the respondent had in mind when answering the item and thereby can reveal how the item was interpreted. Other debriefing questions focus directly on aspects of the question (e.g., "What did you think I meant by …?") or on aspects of choosing an answer (e.g., "How difficult was it to answer the question about …?"). These inquiries may be posed immediately after the focal item (Schuman, 1966) or after the entire interview, in which case the questions need to be repeated (Belson, 1981). For a more detailed overview of respondent debriefings, see Martin (2004).

Cognitive interviewing combines many elements of respondent debriefings and produces qualitative data. Respondents are often asked to do two things: (1) think out loud when generating an answer to each question and (2) answer probes about the questions (e.g., "How would you restate the question in your own words?"). This approach can be valuable for revealing respondent interpretations of a question and identifying misunderstandings that can be prevented by rewording the question. Some researchers have thought that such interviews also reveal the cognitive processes that people implement during actual survey interviews. But in fact, thinking aloud may disrupt such cognitive processes, and much of the cognitive processing that yields answers is likely to happen outside of respondent consciousness and would therefore not be revealed by this method (Willis, 2004). For detailed discussions of the method, see Willis (2005) and Beatty and Willis (2007).

9.10.3. Comparisons across Methods

The multiplicity of testing methods raises questions about their uniqueness: the extent to which different methods produce different diagnoses. Studies that compare two or more methods applied to a common questionnaire often show a mixed picture, with significant overlap in the problems identified but considerable disagreement as well. The interpretation of these results, however, is complicated by the fact that most of the studies rely on a single trial of each method. Thus, differences between methods could be due to unreliability, the tendency of the same method to yield different results across trials.

As might be expected, given its relatively objective nature, behavior coding has been found to be highly reliable (Presser & Blair, 1994). Conventional pretests, expert reviews, and cognitive interviews, by contrast, have been shown to be less reliable (Presser & Blair, 1994; DeMaio & Landreth, 2004). The computer methods (QUAID and SQP) may be the most reliable, though we know of no research demonstrating the point. Likewise, the structure of the remaining methods (QAS, response latency, vignettes, and respondent debriefings) suggests their reliability would be between that
