
EFFECT SIZE

differential measure of effect size is more useful than the blunt edge of statistical significance.

An effect size is simply a way of quantifying the difference between two groups. For example, if one group has had an ‘experimental treatment’ and the other has not (the ‘control’), then the Effect Size is a measure of the effectiveness of the treatment. (Coe 2000: 1)

It tells the reader ‘how big the effect is, something that the p value [statistical significance] does not do’ (Wright 2003: 125). An effect size (Thompson 2002: 25) ‘characterizes the degree to which sample results diverge from the null hypothesis’; it operates through the use of standard deviations. Wood (1995: 393) suggests that effect size can be calculated by dividing the significance level by the sample size. Glass et al. (1981: 29, 102) calculate the effect size as:

(mean of experimental group − mean of control group) / (standard deviation of the control group)
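The Glass et al. calculation can be sketched in a few lines of Python; the function name and the two score lists below are illustrative, not taken from the source:

```python
from statistics import mean, stdev

def glass_delta(experimental, control):
    """Effect size after Glass et al. (1981): the difference between
    the group means, divided by the control group's standard deviation."""
    return (mean(experimental) - mean(control)) / stdev(control)

# Hypothetical test scores for two small groups
experimental = [14, 15, 13, 16, 15]
control = [12, 11, 13, 12, 12]
print(round(glass_delta(experimental, control), 2))
```

Note that `statistics.stdev` computes the sample standard deviation (dividing by N − 1), which is the usual choice when the groups are samples from a wider population.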

Coe (2000: 7), while acknowledging that there is a debate on whether to use the standard deviation of the experimental or control group as the denominator, suggests that that of the control group is preferable as it provides ‘the best estimate of standard deviation, since it consists of a representative group of the population who have not been affected by the experimental intervention’. However, Coe (2000) also suggests that it is perhaps preferable to use a ‘pooled’ estimate of standard deviation, as this is more accurate than that provided by the control group alone. To calculate the pooled deviation he suggests that the formula should be:

SD_pooled = √[((N_E − 1)SD_E² + (N_C − 1)SD_C²) / (N_E + N_C − 2)]
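Coe’s pooled estimate translates directly into code; this is a sketch with illustrative names, weighting each group’s variance by its degrees of freedom (N − 1) exactly as in the formula:

```python
from math import sqrt
from statistics import stdev

def pooled_sd(experimental, control):
    """Pooled standard deviation: each group's variance is weighted
    by its degrees of freedom (N - 1), then the weighted variances
    are averaged and square-rooted, as in Coe's formula."""
    n_e, n_c = len(experimental), len(control)
    sd_e, sd_c = stdev(experimental), stdev(control)
    return sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2)
                / (n_e + n_c - 2))
```

With equal group sizes the result always lies between the two groups’ own standard deviations.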

where N_E = number in the experimental group, N_C = number in the control group, SD_E = standard deviation of the experimental group and SD_C = standard deviation of the control group.

The formula for the effect size then becomes (Muijs 2004: 136):

(mean of experimental group − mean of control group) / (pooled standard deviation)

where the pooled standard deviation = (standard deviation of group 1 + standard deviation of group 2).
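Putting the pieces together, the effect size with the pooled denominator (Cohen’s d) can be sketched as follows; the code uses the degrees-of-freedom weighted pooling given by Coe’s formula above, and the names are illustrative:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(experimental, control):
    """Cohen's d: the difference between the group means divided by
    the pooled standard deviation (degrees-of-freedom weighted)."""
    n_e, n_c = len(experimental), len(control)
    sd_pooled = sqrt(((n_e - 1) * stdev(experimental)**2 +
                      (n_c - 1) * stdev(control)**2) / (n_e + n_c - 2))
    return (mean(experimental) - mean(control)) / sd_pooled
```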

There are several different calculations of effect size, for example (Richardson 1996; Capraro and Capraro 2002: 771): r², adjusted R², η², ω², Cramer’s V, Kendall’s W, Cohen’s d, and Eta. Different kinds of statistical treatments use different effect size calculations. For example, the formula given by Muijs (2004) here yields the statistic termed Cohen’s d. Further details of this, together with a facility which calculates it automatically, can be found at http://www.uccs.edu/~lbecker/psy590/escalc3.htm.

An effect size can lie between 0 and 1 (some formulae yield an effect size that is larger than 1 – see Coe 2000). In using Cohen’s d:

0–0.20 = weak effect
0.21–0.50 = modest effect
0.51–1.00 = moderate effect
>1.00 = strong effect
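The bands above can be encoded as a small helper function; this is a sketch, with labels taken from the text and the function name invented for illustration:

```python
def interpret_d(d):
    """Label a Cohen's d value using the bands given in the text.
    The absolute value is used, since d can be negative when the
    control group outscores the experimental group."""
    d = abs(d)
    if d <= 0.20:
        return 'weak effect'
    if d <= 0.50:
        return 'modest effect'
    if d <= 1.00:
        return 'moderate effect'
    return 'strong effect'
```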

In correlational data the coefficient of correlation is used as the effect size in conjunction with details of the direction of the association (i.e. a positive or negative correlation). The coefficient of correlation (effect size) is interpreted thus:

< ±0.1 weak
< ±0.3 modest
< ±0.5 moderate
< ±0.8 strong
≥ ±0.8 very strong
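These correlation bands can likewise be sketched as a lookup; the function name is illustrative, and the sign is stripped because, as the text notes, the direction of the association is reported separately:

```python
def interpret_r(r):
    """Label a correlation coefficient using the bands in the text,
    ignoring sign (direction is reported separately)."""
    size = abs(r)
    if size < 0.1:
        return 'weak'
    if size < 0.3:
        return 'modest'
    if size < 0.5:
        return 'moderate'
    if size < 0.8:
        return 'strong'
    return 'very strong'
```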

We provide more detail on interpreting correlation coefficients later in this chapter. However, Thompson (2001, 2002) argues forcibly against simplistic interpretations of effect size as ‘small’, ‘medium’ and ‘large’, as to do this commits the same folly of fixed benchmarks as that of statistical significance. He writes that ‘if people

Chapter 24
