CHAPTER 12: Data Analysis and Interpretation: Part II. Tests of Statistical Significance and the Analysis Story

… of 76 to detect a medium treatment effect, and a sample size of 464 to detect a small treatment effect. It thus takes over 15 times more participants to detect a small effect than it does to detect a large effect!

Using repeated measures experiments can also affect the power of the statistical analyses researchers use. As described in Chapter 7, repeated measures experiments are generally more sensitive than independent groups experiments because their estimates of error variation are generally smaller. The smaller error variation leads to an increased ability to detect small treatment effects in an experiment. And that is just what the power of a statistical analysis is: the ability to detect a treatment effect when it is present.

When introducing NHST we noted that the probability of making a so-called Type I error is equal to alpha (.05 in this case). Logically, to make this kind of error, the null hypothesis must be capable of being false. Yet critics argue that the null hypothesis defined as zero difference is "always false" (e.g., Cohen, 1995, p. 1000) or, somewhat more conservatively, is "rarely true" (Hunter, 1997, p. 5). If an effect is always, or nearly always, present (i.e., there is more than a zero difference between means), then we can't possibly (or at least hardly ever) make a mistake by claiming that an effect is there when it is not. Following this line of reasoning, the only error we are capable of making is a Type II error (see Hunter, 1997; Schmidt & Hunter, 1997), that is, saying a real effect is not there. The probability of this type of error, largely due to low statistical power in many psychological studies, is typically much greater than .05 (e.g., Cohen, 1990; Hunter, 1997; Schmidt & Hunter, 1997).
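The relationships described above, among sample size, effect size, design, and power, can be made concrete with a short Monte Carlo simulation. The sketch below (assuming Python with NumPy and SciPy, not anything from the chapter itself) estimates power by running many simulated experiments and counting how often a t test reaches significance; the sample sizes and the correlation of .7 between repeated measurements are illustrative choices, not values from the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def independent_power(d, n_per_group, alpha=0.05, reps=2000):
    """Monte Carlo power of an independent-groups t test for effect size d."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(d, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

def paired_power(d, n, rho=0.7, alpha=0.05, reps=2000):
    """Monte Carlo power of a repeated measures (paired) t test.
    The correlation rho between the two measurements shrinks error
    variation, which is what makes the design more sensitive."""
    hits = 0
    cov = [[1.0, rho], [rho, 1.0]]
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, d], cov, size=n)
        if stats.ttest_rel(xy[:, 0], xy[:, 1]).pvalue < alpha:
            hits += 1
    return hits / reps

# Large effects need far fewer participants than small ones
# (Cohen's benchmarks: d = .2 small, .5 medium, .8 large):
print(independent_power(d=0.8, n_per_group=26))   # ≈ .80
print(independent_power(d=0.2, n_per_group=26))   # ≈ .10, badly underpowered
print(independent_power(d=0.2, n_per_group=393))  # ≈ .80

# At the same n, the repeated measures design is more powerful:
print(independent_power(d=0.5, n_per_group=30))
print(paired_power(d=0.5, n=30))
```

The simulation reproduces the chapter's two points qualitatively: a small effect demands many times more participants than a large one to reach the same power, and the paired test is markedly more powerful than the independent-groups test with the same number of participants per condition.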
Let us suggest that Type I errors do occur if the null hypothesis is taken literally, that is, if there really is a literally zero difference between the population means, or if we believe that in some situations it is worth testing an effect against a hypothesis of no difference (see Abelson, 1997; Mulaik et al., 1997). As researchers we must be alert to the fact that in some situations it may be important not to conclude an effect is present when it is not, at least not to more than a trivial degree (see Box 12.2).

BOX 12.2
DO WE EVER ACCEPT THE NULL HYPOTHESIS?

Despite what we have said thus far, there may be some instances in which researchers will choose to accept the null hypothesis (rather than simply fail to reject it). Yeaton and Sechrest (1986, pp. 836–837) argue persuasively that findings of no difference are especially critical in applied research. Consider some questions they cite to illustrate their point: Are children who are placed in daycare centers as intellectually, socially, and emotionally advanced as children who remain in the home? Is a new, cheaper drug with fewer side effects as effective as the existing standard in preventing heart attacks? These important questions clearly illustrate situations in which accepting the null hypothesis (no effect) involves more than a theoretical issue: life-and-death consequences rest on making the correct decision. Frick (1995) argues that never accepting the null hypothesis is neither desirable nor practical for psychology. There may be occasions when we want to be able to state with confidence that there is no (meaningful) difference (see also Shadish, Cook, & Campbell, 2002).
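The question raised in Box 12.2, how one might state with confidence that there is no meaningful difference, has a standard answer that the chapter does not name: equivalence testing. A minimal sketch of the two one-sided tests (TOST) procedure follows, assuming Python with NumPy and SciPy; the 5-point equivalence margin and the simulated drug-trial scores are invented for illustration.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two group means.
    Equivalence within (-margin, +margin) is declared only if BOTH
    one-sided null hypotheses (true diff <= -margin; true diff >= +margin)
    are rejected, i.e., the observed difference is significantly *inside*
    the region of trivial effects chosen in advance."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled-variance standard error, as in an ordinary two-sample t test
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff + margin) / se, df)   # H0: true diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: true diff >= +margin
    equivalent = (p_lower < alpha) and (p_upper < alpha)
    return diff, equivalent

rng = np.random.default_rng(1)
standard = rng.normal(100.0, 15.0, 500)  # hypothetical outcomes, standard drug
new_drug = rng.normal(100.0, 15.0, 500)  # true difference is zero here
diff, equivalent = tost_equivalence(new_drug, standard, margin=5.0)
print(f"observed difference {diff:.2f}, equivalent within ±5 points: {equivalent}")
```

The asymmetry with ordinary NHST is the point: a nonsignificant t test on the same data would merely fail to reject the null, whereas TOST can actively support the claim that any difference is smaller than a margin declared trivial in advance.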
