CHAPTER 12: Data Analysis and Interpretation: Part II. Tests of Statistical Significance and the Analysis Story

• verbal description of a statistically significant interaction effect (when present), referring the reader to differences between cell means across levels of the independent variables;
• verbal description of a statistically significant main effect (when present), referring the reader to differences among cell means collapsed across levels of the independent variables;
• comparisons of two means, when appropriate, to clarify sources of systematic variation among means contributing to a main effect;
• the conclusion that you wish the reader to draw from the results of this analysis.

Additional tips for writing a Results section according to APA style requirements can be found in Chapter 13.

SUMMARY

Statistical tests based on null hypothesis significance testing (NHST) are commonly used to perform confirmatory data analysis in psychology. NHST is used to determine whether differences produced by independent variables in an experiment are greater than what would be expected solely on the basis of error variation (chance). The null hypothesis is that the independent variable did not have an effect. A statistically significant outcome is one that has a small probability of occurring if the null hypothesis were true. Two types of errors may arise when doing NHST. A Type I error occurs when a researcher rejects the null hypothesis when it is true; the probability of a Type I error is equivalent to alpha, the level of significance, usually .05. A Type II error occurs when a false null hypothesis is not rejected. Type II errors can occur when a study does not have enough power to correctly reject a false null hypothesis. The primary way researchers increase power is by increasing sample size. By using power tables, researchers may estimate, before a study is conducted, the power needed to reject a false null hypothesis and, after a study is completed, the likelihood of detecting the effect that was found. The exact probability associated with the result of a statistical test should be reported.

The appropriate statistical test for comparing two means is the t-test. When the difference between two means is tested, an effect size measure, such as Cohen's d, should also be reported. The APA Publication Manual strongly recommends that confidence intervals be reported along with the results of NHST. When reporting the results of NHST, it is important to keep in mind that statistical significance (or nonoverlapping confidence intervals) is not the same as scientific or practical significance. Moreover, neither NHST, confidence intervals, nor effect sizes tell us about the soundness of a study's methodology. That is, none of these measures alone may be used to state that the alternative hypothesis (that the independent variable did have an effect) is correct. Only after we have carefully examined the methodology used to obtain the data for an analysis will we want to venture a claim about what influenced behavior.

Analysis of variance (ANOVA) is the appropriate statistical test when comparing three or more means. The logic of ANOVA is based on identifying both error variation and sources of systematic variation in the data. An F-test is constructed that represents error variation plus systematic variation (if any) divided by error variation.
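The F-ratio just summarized can be illustrated with a short computational sketch. The example below is not part of the chapter; it assumes Python with NumPy and SciPy and three small hypothetical groups of scores, and uses scipy.stats.f_oneway to compute a one-way ANOVA F-test.

```python
# A minimal sketch of a one-way ANOVA, assuming SciPy and three
# hypothetical groups of scores (the data are made up for illustration).
import numpy as np
from scipy import stats

group_a = np.array([12, 15, 14, 16, 13])
group_b = np.array([18, 20, 19, 21, 17])
group_c = np.array([14, 13, 15, 16, 12])

# F = (systematic variation + error variation) / error variation,
# i.e., the between-groups mean square divided by the within-groups mean square.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

k = 3                                              # number of groups
n_total = len(group_a) + len(group_b) + len(group_c)
print(f"F({k - 1}, {n_total - k}) = {f_stat:.2f}, p = {p_value:.3f}")
```

A statistically significant F indicates only that systematic variation among the means is present; comparisons of two means, as noted above, would then be needed to locate its source.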

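Similarly, the two-mean comparison described in the summary (a t-test accompanied by Cohen's d and a confidence interval) can be sketched as follows. This is an illustrative example only, assuming SciPy, NumPy, and hypothetical scores; Cohen's d is computed as the mean difference divided by the pooled standard deviation, and the 95% confidence interval for the mean difference uses the t distribution.

```python
# A minimal sketch comparing two group means, assuming SciPy and two
# hypothetical arrays of scores.
import numpy as np
from scipy import stats

group1 = np.array([23, 25, 28, 30, 27, 26, 24, 29])
group2 = np.array([20, 22, 21, 25, 23, 19, 24, 22])

# Independent-samples t-test (NHST): report the exact p-value.
t_stat, p_value = stats.ttest_ind(group1, group2)

# Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1))
                    / (n1 + n2 - 2))
cohens_d = (group1.mean() - group2.mean()) / pooled_sd

# 95% confidence interval for the difference between the two means.
diff = group1.mean() - group2.mean()
se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se_diff, diff + t_crit * se_diff

print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
print(f"95% CI for the mean difference: [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the exact p-value together with the effect size and confidence interval follows the recommendations summarized above, although, as the summary cautions, none of these statistics speaks to the soundness of the study's methodology.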

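Finally, where the summary points to power tables for estimating power and sample size, comparable estimates can be obtained in software. The sketch below is an assumption: it uses the statsmodels library (not mentioned in the chapter) and a conventional medium effect size of d = 0.5, alpha = .05, and target power = .80.

```python
# A minimal sketch of a power analysis, assuming statsmodels as a
# stand-in for the power tables described in the chapter.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
# with alpha = .05 (Type I error rate) and power = .80 (1 - Type II error rate).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64

# Power achieved with only 30 participants per group for the same effect size;
# this falls well short of the conventional .80 target.
achieved_power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05,
                                ratio=1.0, alternative='two-sided')
print(f"Power with 30 per group: {achieved_power:.2f}")
```

As with a power table, specifying any three of effect size, sample size, alpha, and power determines the fourth.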
