
210 PART III: Experimental Methods

means do not differ. In the video-game experiment, the overlap is small, and the sample means for each condition do not fall within the intervals for the other group. We want to decide whether the populations differ, but all we can really say is that we do not have sufficient evidence to decide one way or the other. In this situation we must postpone judgment until the next experiment is done.

The logic and computational procedures for confidence intervals and for the t-test are found in Chapter 11. The F-test (in its various forms) is discussed in Chapter 12.

What Data Analysis Can't Tell Us

We've already alluded to one thing that our data analysis can't tell us. Even if our experiment is internally valid and the results are statistically significant, we cannot say for sure that our independent variable had an effect (or did not have an effect). We must learn to live with probability statements. The results of our data analysis also can't tell us whether the results of our study have practical value or even if they are meaningful. It is easy to do experiments that ask trivial research questions (see Sternberg, 1997, and Chapter 1). It is also easy (maybe too easy!) to do a bad experiment. Bad experiments, that is, ones that lack internal validity, can easily produce statistically significant outcomes and nonoverlapping confidence intervals; however, the outcome will be uninterpretable.

When an outcome is statistically significant, we conclude that the independent variable produced an effect on behavior. Yet, as we have seen, our analysis does not provide us with certainty regarding our conclusion, even though we reached the conclusion "beyond a reasonable doubt." Also, when an outcome is not statistically significant, we cannot conclude with certainty that the independent variable did not have an effect. All we can conclude is that there is not sufficient evidence in the experiment to claim that the independent variable produces an effect.
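The overlap rule described above can be sketched numerically. This is a minimal illustration, not the textbook's procedure: the data are invented (they are not from the video-game study), and the interval uses a normal approximation with z ≈ 1.96 rather than the t-based confidence interval developed in Chapter 11.

```python
# Sketch of the confidence-interval overlap check. Hypothetical data,
# normal-approximation intervals (Chapter 11 would use the t distribution).
from statistics import mean, stdev, NormalDist

def ci95(sample):
    # 95% CI for the mean: mean +/- z * s / sqrt(n), z ~ 1.96.
    m, s, n = mean(sample), stdev(sample), len(sample)
    z = NormalDist().inv_cdf(0.975)
    half = z * s / n ** 0.5
    return (m - half, m + half)

# Invented reaction times (ms) for two conditions.
video_game = [412, 398, 405, 420, 388, 401, 415, 394]
control    = [455, 470, 448, 462, 481, 459, 466, 473]

ci_a, ci_b = ci95(video_game), ci95(control)

# The pattern described in the text: each sample mean falls outside
# the other condition's interval, so the overlap is small (here, none).
a_outside = not (ci_b[0] <= mean(video_game) <= ci_b[1])
b_outside = not (ci_a[0] <= mean(control) <= ci_a[1])
```

With clearly separated data like these, both means lie outside the other group's interval; with heavily overlapping intervals, the only honest conclusion is the one the text describes, namely that the evidence is insufficient to decide either way.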
Determining that an independent variable has not had an effect can be even more crucial in applied research. For example, is a generic drug as effective as its brand-name counterpart? To answer this research question, researchers often seek to find no difference between the two drugs. The standards for experiments attempting to answer questions regarding no difference between conditions are higher than those for experiments seeking to confirm that an independent variable does have an effect. We describe these standards in Chapter 12.

Because researchers rely on probabilities to make decisions about the effects of independent variables, there is always some chance of making an error. There are two types of errors that can occur when researchers use inferential statistics. When we claim that an outcome is statistically significant and the null hypothesis (no difference) is really true, we are making a Type I error. A Type I error is like a false alarm: saying that there is a fire when there is not. When we conclude that we have insufficient evidence to reject the null hypothesis
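The false-alarm idea can be made concrete with a small simulation. This is a hedged sketch with invented population parameters, and it uses a normal-approximation two-sample test rather than the t-test the book develops: when the null hypothesis is true by construction (both "conditions" are drawn from the same population), a .05 significance criterion still declares a difference in roughly 5% of experiments, and each of those declarations is a Type I error.

```python
# Simulating Type I errors: both groups come from the SAME population,
# so every "significant" result is a false alarm.
import random
from statistics import mean, stdev, NormalDist

random.seed(42)

def two_sample_p(a, b):
    # Normal-approximation two-sample test (for illustration only;
    # small samples would call for the t distribution).
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = abs(mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(z))

trials, false_alarms = 2000, 0
for _ in range(trials):
    # Null hypothesis true by construction: identical populations.
    a = [random.gauss(100, 15) for _ in range(30)]
    b = [random.gauss(100, 15) for _ in range(30)]
    if two_sample_p(a, b) < 0.05:
        false_alarms += 1  # Type I error: "effect" found where none exists

rate = false_alarms / trials  # roughly .05, by construction of the criterion
```

The simulated false-alarm rate hovers near the .05 criterion itself, which is exactly what the significance level means: it is the probability of a Type I error when the null hypothesis is true.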
