CHAPTER 11: Data Analysis and Interpretation: Part I. Describing Data, Confidence Intervals, Correlation

BOX 11.4
INTERPRETING CONFIDENCE INTERVALS FOR A DIFFERENCE BETWEEN TWO MEANS: LOOKING FOR ZERO

Having calculated a 95% confidence interval for the difference between two means, we can state that the odds are 95/100 that the obtained confidence interval contains the true population mean difference or absolute effect size.

The width of the confidence interval provides information about effect size. By using confidence intervals we obtain information about the probable effect size of our independent variable. Obtained effect sizes vary from study to study as characteristics of samples and procedures differ (see, for example, Grissom & Kim, 2005). The confidence interval “specifies a probable range of magnitude for the effect size” (Abelson, 1997, p. 130). It indicates that the effect size likely could be as small as the value of the lower boundary and as large as the value of the upper boundary. Researchers are sometimes amazed to see just how large an interval is needed to specify an effect size with a high degree of confidence (e.g., Cohen, 1995). Thus, the narrower the width of the confidence interval, the better job we have done at estimating the true effect size of our independent variable. Of course, the size (width) of the confidence interval is directly related to sample size. By increasing sample size we get a better idea of exactly what our effect looks like.

It is important to determine if the confidence interval for a mean difference includes the value of zero. When zero is included in the confidence interval, we must accept the possibility that the two population means do not differ. Thus, we cannot conclude that an effect of the independent variable is present. Remember, confidence intervals give us a probable range for our effect. If zero is among the probable values, then we should admit our uncertainty regarding the presence of an effect (e.g., Abelson, 1997). You will see in Chapter 12 that this situation is similar to that when a nonsignificant result is found using NHST.

two different puzzles. Rather than asking two different groups of people to work on each puzzle, she might ask just one group of people to work on both puzzles. (Procedures for presenting materials in a repeated measures design were described in Chapter 7.) All the participants would then provide a score on both puzzles. As you will see, the difference between their scores serves as the measure of interest in a repeated measures design.

Procedures for assessing effect size in a matched groups or repeated measures design are somewhat more complex than those we reviewed for an independent groups design (see Cohen, 1988; and Rosenthal & Rosnow, 1991, for information pertaining to the calculation of d in these cases). One suggestion is to calculate an effect size measure as if the study were an independent groups design and apply Cohen’s guidelines (i.e., .20, .50, .80) as before (e.g., Zechmeister & Posavac, 2003).

Confidence intervals, too, can be constructed for the population mean difference in a repeated measures design involving two conditions. However, the underlying calculations change for this situation. Specifically, when each subject is in both conditions of the experiment, t is based on difference scores (see Chapter 12). A difference score is obtained by subtracting the two scores provided by each participant.
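To make these calculations concrete, the sketch below (not part of the original text) works through both ideas with hypothetical puzzle scores for a single group of participants measured in both conditions: an effect size computed as if the study were an independent groups design, and a 95% confidence interval built from difference scores, followed by the “looking for zero” check from Box 11.4. The data values, the variable names (puzzle_a, puzzle_b), and the use of a pooled standard deviation for d are illustrative assumptions; the example uses Python with numpy and scipy.

```python
# Illustrative sketch of the two calculations described above, using made-up
# repeated measures data (every participant works on both puzzles).
import numpy as np
from scipy import stats

puzzle_a = np.array([12.1, 9.8, 14.3, 11.0, 13.5, 10.2, 12.9, 11.7])
puzzle_b = np.array([10.4, 9.1, 12.0, 10.8, 11.9, 9.5, 11.2, 10.6])
n = len(puzzle_a)

# (1) Effect size computed "as if" these were two independent groups,
#     to be compared against Cohen's guidelines (.20, .50, .80).
pooled_sd = np.sqrt((puzzle_a.var(ddof=1) + puzzle_b.var(ddof=1)) / 2)
d = (puzzle_a.mean() - puzzle_b.mean()) / pooled_sd
print(f"Cohen's d (treated as independent groups): {d:.2f}")

# (2) 95% confidence interval based on difference scores
#     (each participant's score on puzzle A minus their score on puzzle B).
diff = puzzle_a - puzzle_b
mean_diff = diff.mean()
se_diff = diff.std(ddof=1) / np.sqrt(n)   # standard error of the mean difference
t_crit = stats.t.ppf(0.975, df=n - 1)     # two-tailed 95% critical value

lower = mean_diff - t_crit * se_diff
upper = mean_diff + t_crit * se_diff
print(f"95% CI for the population mean difference: [{lower:.2f}, {upper:.2f}]")

# Box 11.4: if zero falls inside the interval, we cannot rule out the
# possibility that the two population means do not differ.
if lower <= 0 <= upper:
    print("Zero is in the interval; the presence of an effect remains uncertain.")
else:
    print("Zero is outside the interval; a nonzero effect is plausible.")
```

The interval in part (2) is the same one that underlies the paired (repeated measures) t test introduced in Chapter 12: both rest on the mean and standard deviation of the difference scores with n − 1 degrees of freedom.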
