CHAPTER 12: Data Analysis and Interpretation: Part II. Tests of Statistical Significance and the Analysis Story

…intervals provide a range of possible effect sizes in terms of actual mean differences, not a single value such as Cohen's d. Because zero is not within the interval, we know that the outcome would be statistically significant at the .05 level (see Chapter 11). However, as the APA Manual emphasizes, confidence intervals provide information about the precision of estimation and the location of an effect that is not given by NHST alone. Recall from Chapter 11 that the smaller the confidence interval, the more precise our estimate.

Power Analysis

When we know the effect size, we can determine the statistical power of an analysis. Power, as you will recall, is the probability that a statistically significant effect will be obtained. Suppose that a previous study of vocabulary size contrasting younger and older adults produced an effect size of .50, a medium effect according to Cohen's (1988) rule of thumb. We can use power tables created by Cohen to determine the number of participants needed in a test of mean differences to "see" an effect of size .50 with alpha = .05. A power table identifies the power associated with various effect sizes as a function of sample size. It turns out that the sample size (in each group) of a two-group study would have to be about 64 to achieve power of .80 (for a two-tailed test). Looking for a medium effect size, we would need a total of 128 (64 × 2) participants to obtain statistical significance in 8 of 10 tries. Had the researchers been looking for a medium effect, their vocabulary study would have been underpowered. As it turns out, anticipating a large effect size, a sample size of 26 was appropriate to obtain power of .80.

If the result is not statistically significant, then an estimate of power should be reported.
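The textbook reads these values from Cohen's power tables, but the same numbers fall out of the noncentral t distribution. The following is a minimal sketch using SciPy; the function names are illustrative, not from the text, and it assumes equal group sizes and a two-tailed test:

```python
from math import sqrt
from scipy.stats import t, nct

def ttest_ind_power(d, n_per_group, alpha=0.05):
    """Power of a two-tailed independent-groups t test, equal group sizes."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)       # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)     # two-tailed critical value
    # probability the observed t falls beyond a critical value in either tail
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

def n_per_group_for_power(d, target=0.80, alpha=0.05):
    """Smallest equal group size that reaches the target power."""
    n = 2
    while ttest_ind_power(d, n, alpha) < target:
        n += 1
    return n

print(n_per_group_for_power(0.50))           # 64 per group, as in the text
print(round(ttest_ind_power(0.50, 15), 2))   # post hoc power with 15 per group: 0.26
```

Running this reproduces the chapter's figures: about 64 participants per group are needed to detect a medium (d = .50) effect with power .80, and a study with only 15 per group has power of about .26 for the same effect.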
If, for example, using an independent-groups design the outcome had been t(28) = 1.96, p > .05, with an effect size of .50, we can determine the power of the study after the fact. Assuming equal-size groups in the study, we know that there were 15 subjects in each group (df = n1 + n2 − 2, or 28 = 15 + 15 − 2). A power analysis will reveal that power for this study is .26. A statistically significant outcome would be obtained in only about 1 of 4 attempts with this sample size when a medium (.50) effect must be found. In this case, researchers…

STRETCHING EXERCISE
A TEST OF (YOUR UNDERSTANDING OF) THE NULL HYPOTHESIS TEST

As should be apparent by now, understanding, applying, and interpreting results of NHST is no easy task. Even seasoned researchers occasionally make mistakes. To help you avoid mistakes, we provide a true-false test based on the information presented thus far about NHST.

Assume that an independent groups design was used to assess performance of participants in an experimental and control group. There were 12 participants in each condition, and results of NHST with alpha set at .05 revealed t(22) = 4.52, p = .006. True or false? The researcher may reasonably conclude on the basis of this outcome that:

1. The null hypothesis should be rejected.
2. The research hypothesis has been shown to be true.
3. The results are of scientific importance.
4. The probability that the null hypothesis is true is only .006.
5. The probability of finding statistical significance at the .05 level if the study were replicated is greater than if the exact probability had been .02.
