CHAPTER 12: Data Analysis and Interpretation: Part II. Tests of Statistical Significance and the Analysis Story

Whereas d defines an effect in terms of the difference between two means, Cohen's f defines an effect in terms of a measure of dispersal among group means. Both d and f express the effect relative to (i.e., "standardized" on) the within-population standard deviation. Cohen has provided guidelines for interpreting f. Specifically, he suggests that small, medium, and large effect sizes correspond to f values of .10, .25, and .40. The calculation of f is not easily accomplished using the information found in the ANOVA Summary Table (Table 12.3), but it can be obtained without much difficulty once eta squared is known (see Cohen, 1988), as

f = √(η² / (1 − η²))

or, in our example,

f = √(.59 / (1 − .59)) = 1.20

We can thus conclude that memory training accounted for .59 of the total variance in the dependent variable and produced a standardized effect size, f, of 1.20. Based on Cohen's guidelines for interpreting f (.10, .25, .40), it is apparent that memory training had a large effect on recall scores.

Assessing Power for Independent Groups Designs

Once the effect size is known, we can obtain an estimate of power for a specific sample size and the degrees of freedom associated with the numerator (between-groups effect) of the F-ratio. In our example, we set alpha at .05; the experiment was done with n = 5 and df = 3 for the between-groups effect (number of groups minus 1). The effect size, f, associated with our data set is very large (1.20), and there is no good reason to conduct a power analysis for this large effect, which was statistically significant.

However, assume that the ANOVA in our example yielded a nonsignificant F and the effect size was f = .40, still a large effect according to Cohen's guidelines. An important question to answer is "What was the power of our experiment?" How likely were we to see an effect of this size given an alpha of .05, a sample size of n = 5, and df = 3 for our effect? A power analysis reveals that under these conditions power was .26. In other words, the probability of obtaining statistical significance in this situation was only .26. In only approximately one-fourth of the attempts under these conditions would we obtain a significant result. The experiment would be considered underpowered, and it is unreasonable to make much of the fact that NHST did not reveal a significant result. To do so would ignore the very important fact that the effect of our independent variable was, in fact, large.

Although learning about power after the fact can be important, particularly when we obtain a nonsignificant outcome based on NHST, ideally a power analysis should be conducted prior to an experiment in order to reveal the a priori (from the beginning) probability of finding a statistically significant effect. An experimenter who begins an experiment knowing that power is only
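To make the conversion from eta squared to f concrete, here is a minimal Python sketch of the arithmetic above. The textbook does not tie the calculation to any particular software, so the function and variable names are ours; it simply assumes eta squared = .59, as in the memory-training example.

```python
import math

def cohens_f_from_eta_squared(eta_squared):
    """Convert eta squared to Cohen's f: f = sqrt(eta^2 / (1 - eta^2))."""
    return math.sqrt(eta_squared / (1.0 - eta_squared))

eta_squared = 0.59  # proportion of total variance explained by memory training
f = cohens_f_from_eta_squared(eta_squared)
print(round(f, 2))  # prints 1.2, i.e., f is approximately 1.20
```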

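The after-the-fact power figure of .26 can also be checked with standard power software. As one possibility (not the textbook's own procedure), the sketch below uses the FTestAnovaPower class from statsmodels, where effect_size is Cohen's f and nobs is the total number of observations. With four groups of n = 5, f = .40, and alpha = .05, the result should come out near the .26 reported above; small differences can arise from rounding or from a package's noncentrality convention.

```python
from statsmodels.stats.power import FTestAnovaPower

# Parameters from the hypothetical example: 4 groups (so df = 3 for the
# between-groups effect), n = 5 participants per group, alpha = .05, f = .40.
k_groups = 4
n_per_group = 5
alpha = 0.05
effect_size_f = 0.40

power = FTestAnovaPower().power(
    effect_size=effect_size_f,
    nobs=k_groups * n_per_group,  # total N = 20
    alpha=alpha,
    k_groups=k_groups,
)
print(round(power, 2))  # expected to be close to .26
```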