
Question and Questionnaire Design - Stanford University

guidance about most specific wording choices or question orderings. In addition, particular populations or measures may pose exceptions to the rules. As a result, questionnaire construction, although informed by science, remains a craft, and pretesting (itself a mix of science and craft) can provide valuable assistance in the process.

Some evaluation methods require administration of the questionnaire to respondents, whereas others do not. Methods not requiring data collection, which are therefore relatively inexpensive to conduct, rely either on human judgment (in some cases by experts, in others by nonexperts) or on computerized judgments. These methods include expert review, forms appraisal, artificial intelligence programs, and statistical modeling. Methods that involve data collection, which are more expensive to carry out, vary along two dimensions: whether they explicitly engage the respondent in the evaluation task — what Converse and Presser (1986) call participating, as opposed to undisclosed, pretests — and whether they are conducted in conditions similar to those of the main survey. These methods include cognitive interviews, behavior coding, vignettes, and debriefings of interviewers and/or respondents. For a more detailed review of pretesting methods, see Presser et al. (2004).

9.10.1. Methods Without Data Collection

Probably the least structured evaluation method is expert review, in which one or more experts critiques the questionnaire. The experts are typically survey methodologists, but they can be supplemented with specialists in the subject matter(s) of the questionnaire. Reviews are done individually or as part of a group discussion.

As many of the judgments made by experts stem from rules, attempts have been made to draw on these rules to fashion an evaluation task that nonexperts can do. Probably the best known of these schemes is the Questionnaire Appraisal System (QAS), a checklist of 26 potential problems (Willis & Lessler, 1999; see also Lessler & Forsyth, 1996). In an experimental comparison, Rothgeb, Willis, and Forsyth (2001) found that the QAS identified nearly every one of 83 items as producing a problem, whereas experts identified only about half the items as problematic — suggesting the possibility of numerous QAS false positives. In a smaller-scale analysis of 8 income items, by contrast, van der Zouwen and Smit (2004) reported substantial agreement between QAS and expert review.

Evaluations may also be computerized. The Question Understanding Aid (QUAID) — computer software based partly on computational linguistics — is designed to identify questions that suffer from five kinds of problems: unfamiliar

20. Prior to pretesting, researchers will often benefit from self-administering their questionnaires (role-playing the respondent), which provides an opportunity for them to discover the difficulties they have answering their own questions.
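As a rough illustration of how a checklist-style computerized appraisal along these lines might operate, the sketch below flags potential problems in a question's wording. The specific checks, word list, and thresholds are illustrative assumptions for this sketch, not the actual QAS or QUAID criteria.

```python
# Toy sketch of a computerized question-appraisal pass, loosely in the
# spirit of checklist tools such as QAS or QUAID. The heuristics below
# are illustrative assumptions, not the tools' actual criteria.

# Hypothetical list of vague quantifiers that often cause comprehension problems.
VAGUE_TERMS = {"several", "often", "regularly", "recently", "generally"}


def appraise(question: str) -> list[str]:
    """Return a list of flagged potential problems for one question."""
    problems = []
    words = question.lower().replace("?", "").split()
    if len(words) > 25:  # arbitrary length threshold for "too long"
        problems.append("question may be too long")
    # A conjunction inside a single question can signal a double-barreled item.
    if " and " in question.lower() and question.count("?") <= 1:
        problems.append("possible double-barreled question")
    for w in words:
        if w in VAGUE_TERMS:
            problems.append(f"vague quantifier: '{w}'")
    return problems


flags = appraise("Do you often exercise and eat healthy food?")
```

Like the QAS in the comparison above, crude automated checks of this kind tend to over-flag, so their output is best treated as a list of candidates for expert or empirical follow-up rather than as confirmed problems.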
