PART V: Analyzing and Reporting Research

BOX 12.1  HEADS OR TAILS? TOSSING COINS AND NULL HYPOTHESES

Perhaps you can appreciate the process of statistical inference by considering the following dilemma. A friend, with a sly smile, offers to toss a coin with you to see who pays for the meal you just enjoyed at a restaurant. Your friend just happens to have a coin ready to toss. Now it would be convenient if you could directly test whether your friend's coin is biased (by asking to look at it). Not willing to appear untrusting, however, the best you can do is test your friend's coin indirectly by assuming it is not biased and seeing whether you consistently get outcomes that differ from the expected 50:50 split of heads and tails. If the coin does not exhibit the ordinary 50:50 split (after many trials of flipping the coin), you might surmise that your friend is trying, by slightly underhanded means, to get you to pay for the meal. Similarly, we would like to make a direct test of statistical significance for an obtained outcome in our experiments. The best we can do, however, is to compare our obtained outcome with the expected outcome of no difference between frequencies of heads and tails. The key to understanding null hypothesis testing is to recognize that we can use the laws of probability to estimate the likelihood of an outcome only when we assume that chance factors are the sole cause of that outcome. This is no different from flipping your friend's coin a number of times to make your conclusion. You know that, based on chance alone, 50% of the time the coin should come up heads, and 50% of the time it should come up tails. After many coin tosses, anything different from this probable outcome would lead you to conclude that something other than chance is at work; that is, your friend's coin is biased.

Statistical significance (i.e., p < .05) does not tell us about the probability of replicating the results. For example, a result just below the .05 probability level (and thus statistically significant) has only about a 50:50 chance of being statistically significant (i.e., p < .05) if replicated exactly (Greenwald et al., 1996). On the other hand, knowing the exact probability of the results does convey information about what would happen if a replication were done. The smaller the exact probability of an initial finding, the greater the probability that an exact replication will produce a statistically significant (p < .05) finding (e.g., Posavac, 2002). Consequently, and as recommended by the American Psychological Association (APA), always report the exact probability of the results when carrying out NHST.

Strictly speaking, there are only two conclusions possible when you do an inferential statistics test: either you reject the null hypothesis or you fail to reject the null hypothesis. Note that we did not say that one alternative is to accept the null hypothesis. Let us explain.

When we conduct an experiment and observe that the effect of the independent variable is not statistically significant, we do not reject the null hypothesis. However, neither do we necessarily accept the null hypothesis of no difference. There may have been some factor in our experiment that prevented us from observing an effect of the independent variable (e.g., ambiguous instructions to subjects, poor operationalization of the independent variable). As we will show later, too small a sample often is a major reason why a null hypothesis is not rejected. Although we recognize the logical impossibility of proving that a null hypothesis is true, we also must have some method of deciding which
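
To make the coin-toss logic of Box 12.1, and the advice to report exact probabilities, concrete, here is a minimal Python sketch. It is not from the text, and the coin counts in it are hypothetical; it simply computes the exact two-tailed probability of an observed split of heads and tails under the null hypothesis that the coin is fair.

```python
from math import comb

def exact_two_sided_p(heads: int, tosses: int) -> float:
    """Exact two-tailed p-value under the null hypothesis of a fair coin.

    Sums the binomial probabilities of every possible outcome that deviates
    from the expected 50:50 split at least as much as the observed outcome.
    """
    expected = tosses / 2
    observed_deviation = abs(heads - expected)
    p_value = 0.0
    for k in range(tosses + 1):
        if abs(k - expected) >= observed_deviation:
            # P(exactly k heads) when chance alone operates: C(n, k) * 0.5**n
            p_value += comb(tosses, k) * 0.5 ** tosses
    return p_value

# Hypothetical example: the friend's coin comes up heads 16 times in 20 tosses.
p = exact_two_sided_p(heads=16, tosses=20)
print(f"Exact probability under the fair-coin null hypothesis: p = {p:.4f}")
# Here p is about .012, well below .05, so you might conclude that something
# other than chance is at work. Following the APA recommendation, you would
# report the exact value (p = .012) rather than only "p < .05".
```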
