Question and Questionnaire Design - Stanford University

of conventional pretests, expert reviews and cognitive interviews, on the one hand, and computerized methods, on the other. But, again, we know of no good estimates of these reliabilities.

Inferences from studies that compare testing methods are also affected by the relatively small number of items used in the studies and by the fact that the items are not selected randomly from a well-defined population. Nonetheless, we can generalize to some extent about differences between the methods. The only methods that tend to diagnose interviewer (as opposed to respondent) problems are behavior coding (which explicitly includes a code for interviewer departures from verbatim question delivery) and conventional pretests (which rely on interviewer reports). Among respondent problems, the methods seem to yield many more comprehension difficulties (about the task respondents think the question poses) than performance difficulties (about how respondents do the task), and, somewhat surprisingly, this appears most true for cognitive interviews (Presser & Blair, 1994). Conventional testing, behavior coding, QAS, and response latency are also less apt than the other approaches to provide information about how to repair the problems they identify.

Although there is no doubt that all of the methods uncover problems with questions, we know only a little about the degree to which these problems are significant, i.e., affect the survey results. And the few studies that address this issue (by reference to reliability or validity benchmarks) are generally restricted to a single method, thereby providing no information on the extent to which the methods differ in diagnosing problems that produce important consequences. This is an important area for future research.

Given the present state of knowledge, we believe that questionnaires will often benefit from a multimethod approach to testing. Moreover, when significant changes are made to a questionnaire to repair problems identified by pretesting, it is usually advisable to mount another test to determine whether the revisions have succeeded in their aim and not caused new problems. When time and money permit, this multimethod, multi-iteration approach to pretesting can be usefully enhanced by split sample experiments that compare the performance of different versions of a question or questionnaire (Forsyth, Rothgeb, & Willis, 2004; Schaeffer & Dykema, 2004).

9.11. Conclusion

Researchers who compose questionnaires should find useful guidance in the specific recommendations for the wording and organization of survey questionnaires that we have offered in this chapter. They should also benefit from two more general recommendations. First, questionnaire designers should review questions from earlier surveys before writing their own. This is partly a matter of efficiency (there is little sense in reinventing the wheel) and partly a matter of expertise: the design of questions and questionnaires is an art as well as a science and some previous
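The split sample logic can be made concrete with a small sketch. In the Python below, respondents are randomly assigned to one of two wordings of the same question and the resulting "yes" rates are compared with a two-proportion z-test; the function names and all counts are our own invented illustration, not part of any survey package, and a real analysis would of course use actual pretest data.

```python
# Illustrative split-ballot pretest analysis (all names and numbers invented).
import math
import random

def assign_version(respondent_ids, seed=0):
    """Randomly split respondents into two ballot groups, A and B."""
    rng = random.Random(seed)
    groups = {"A": [], "B": []}
    for rid in respondent_ids:
        groups[rng.choice(["A", "B"])].append(rid)
    return groups

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pretest counts: wording A yields 120/200 "yes", wording B 90/200.
z = two_proportion_z(120, 200, 90, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest the two wordings behave differently
```

Random assignment is what licenses the comparison: because respondents differ only by chance across the two ballots, a reliably different response distribution can be attributed to the question wording itself.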
