

Box 24.15
Type I and Type II errors

Decision             H₀ true             H₀ false
Support H₀           Correct             Type II error (β)
Do not support H₀    Type I error (α)    Correct

One has to be careful not to describe an associative hypothesis (e.g. gender) as a causal hypothesis, as gender may not actually be having a causal effect.

In hypothesis testing one has to avoid Type I and Type II errors. A Type I error occurs when one does not support the null hypothesis when it is in fact true. This is a particular problem as the sample size increases, as the chances of finding a significant association increase, irrespective of whether a true association exists (Rose and Sullivan 1993: 168), requiring the researcher, therefore, to set a more stringent alpha (α) level (e.g. 0.01 or 0.001) for statistical significance to be achieved. A Type II error occurs when one supports the null hypothesis when it is in fact not true (often the case if the levels of significance are set too stringently), requiring the researcher to relax the alpha (α) level of significance required (e.g. to 0.1 or 0.2). Type I and Type II errors can be represented as in Box 24.15.
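The sample-size point above can be seen in a small simulation. The following Python sketch is illustrative only, not from the chapter: the true correlation of 0.03 and the sample sizes are assumed values, chosen so that the same trivially small association is far from significance in a modest sample but comfortably 'significant' in a very large one.

```python
# Illustrative sketch: a trivially small true association (r ~ 0.03,
# an assumed value) becomes 'statistically significant' once the sample
# is large enough, which is why a stricter alpha may be warranted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_r = 0.03  # trivially small, but non-zero, association

for n in (100, 1_000, 100_000):
    x = rng.standard_normal(n)
    # y shares only a tiny amount of variance with x
    y = true_r * x + np.sqrt(1 - true_r ** 2) * rng.standard_normal(n)
    r, p = stats.pearsonr(x, y)
    print(f"n={n:>7}  r={r:+.3f}  p={p:.4f}")

# Typically, p falls below 0.05 only at the largest n, even though the
# coefficient itself stays around 0.03 throughout.
```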

Effect size

One has to be cautious in using statistical significance. Statistical significance is not the same as educational significance. For example, I might find a statistically significant correlation between the amount of time spent on mathematics and the amount of time spent in watching television. This may be completely unimportant. Similarly I might find that there is no statistically significant difference between males and females in their liking of physics. However, close inspection might reveal that there is a difference. Say, for example, that more males than females like physics, but that the difference does not reach the ‘cut-off’ point of the 0.05 level of significance; maybe it is 0.065. To say that there is no difference, or simply to support the null hypothesis here, might be inadvisable. There are two issues here: first, the cut-off level of significance is comparatively arbitrary, although high; second, one should not ignore coefficients that fall below the conventional cut-off points. This leads us into a discussion of effect size as an alternative to significance levels.
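To make the physics example concrete, here is a hedged sketch with invented data: the group means, spread and sample sizes are all assumptions, chosen only so that the t-test just misses the 0.05 cut-off. Even though p exceeds 0.05, Cohen's d (one common effect-size measure, discussed below) suggests a noticeable difference.

```python
# Invented data for the physics example above: all values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
males = rng.normal(3.6, 1.0, 40)    # hypothetical 'liking of physics' ratings
females = rng.normal(3.2, 1.0, 40)

t, p = stats.ttest_ind(males, females)

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((males.var(ddof=1) + females.var(ddof=1)) / 2)
d = (males.mean() - females.mean()) / pooled_sd

# p may land just above 0.05 while d is around 0.4, a noticeable effect
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```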

Statistical significance on its own has come to be seen as an unacceptable index of effect (Thompson 1994, 1996, 1998, 2001, 2002; Fitz-Gibbon 1997: 43; Rozeboom 1997: 335; Thompson and Snyder 1997; Wilkinson and the Task Force on Statistical Inference, APA Board of Scientific Affairs 1999; Olejnik and Algina 2000; Capraro and Capraro 2002; Wright 2003; Kline 2004) because it depends on both sample size and the coefficient (e.g. of correlation). Statistical significance can be attained either by having a large coefficient together with a small sample or by having a small coefficient together with a large sample. The problem is that one is not able to deduce which is the determining effect from a study using statistical significance (Coe 2000: 9). It is important to be able to tell whether it is the sample size or the coefficient that is making the difference. The effect size can do this.
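A short sketch (the numbers are illustrative assumptions, not from the text) makes the point: a Pearson correlation of about 0.62 in a sample of 10 and one of about 0.062 in a sample of 1,000 sit at roughly the same p-value, so the p-value alone cannot tell the reader which situation obtains, whereas reporting the coefficient itself (the effect size) can.

```python
# Two routes to the same p-value: the r and n values are illustrative.
import numpy as np
from scipy import stats

def p_for_r(r, n):
    # Two-sided p-value for a Pearson correlation r in a sample of size n,
    # via the t distribution with n - 2 degrees of freedom
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(p_for_r(0.62, 10))      # large coefficient, small sample: p ~ 0.05
print(p_for_r(0.062, 1000))   # small coefficient, large sample: p ~ 0.05
```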

What is required, either to accompany or to replace statistical significance, is information about effect size (American Psychological Association 1994: 18, 2001; Wilkinson and the Task Force on Statistical Inference, APA Board of Scientific Affairs 1999; Kline 2004). Indeed effect size is seen as much more important than significance, and many international journals either have abandoned statistical significance reporting in favour of effect size, or have insisted that statistical significance be accompanied by indications of effect size (Olejnik and Algina 2000; Capraro and Capraro 2002; Thompson 2002). Statistical significance is seen as arbitrary in its cut-off points and unhelpful, a ‘corrupt form of the scientific method’ (Carver 1978), an obstacle rather than a facilitator in educational research. It commands slavish adherence rather than addressing the subtle, sensitive and helpful notion of effect size (see Fitz-Gibbon 1997: 118). Indeed common sense should tell the researcher that a
