
EVIDENCE-BASED EDUCATIONAL RESEARCH AND META-ANALYSIS

2 Coding the study characteristics (e.g. date, publication status, design characteristics, quality of design, status of researcher).
3 Measuring the effect sizes (e.g. locating the experimental group as a z-score in the control group distribution) so that outcomes can be measured on a common scale, controlling for ‘lumpy data’ (non-independent data from a large data set); see the sketch after this list.
4 Correlating effect sizes with context variables (e.g. to identify differences between well-controlled and poorly-controlled studies).
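A minimal sketch of what step 3 describes, in Python. It assumes normally distributed control-group scores, and the effect size of 0.5 is an arbitrary illustrative value, not drawn from any study discussed here:

```python
from scipy.stats import norm

# An effect size of 0.5 locates the mean of the experimental group
# at z = 0.5 in the control group's distribution. Under a normality
# assumption, norm.cdf converts that z-score into the proportion of
# control-group members scoring below the average experimental member.
effect_size = 0.5  # arbitrary illustrative value
proportion_below = norm.cdf(effect_size)
print(f"Average experimental member exceeds {proportion_below:.0%} of controls")
# -> roughly 69%
```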

Effect sizes (e.g. Cohen’s d and eta squared) are the preferred statistics over statistical significance in meta-analyses, and we discuss this in Part Five. Effect size is a measure of the degree to which a phenomenon is present or the degree to which a null hypothesis is not supported. Wood (1995: 393) suggests that effect size can be calculated by dividing the significance level by the sample size. Glass et al. (1981: 29, 102) calculate the effect size as:

$$\text{effect size} = \frac{\text{mean of experimental group} - \text{mean of control group}}{\text{standard deviation of the control group}}$$
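The formula translates directly into code. A minimal sketch in Python; the scores are invented purely for illustration:

```python
import statistics

def glass_effect_size(experimental, control):
    """Effect size as defined by Glass et al. (1981): the difference in
    means scaled by the standard deviation of the control group."""
    mean_diff = statistics.mean(experimental) - statistics.mean(control)
    return mean_diff / statistics.stdev(control)

# Invented scores for illustration only
experimental_scores = [72, 75, 78, 80, 84, 86]
control_scores = [65, 68, 70, 73, 75, 79]
print(glass_effect_size(experimental_scores, control_scores))
```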

Hedges (1981) and Hunter et al. (1982) suggest alternative equations to take account of differential weightings due to sample size variations. The two most frequently used indices of effect sizes are standardized mean differences and correlations (Hunter et al. 1982: 373), though non-parametric statistics, e.g. the median, can be used. Lipsey (1992: 93–100) sets out a series of statistical tests for working on effect sizes, effect size means and homogeneity. It is clear from this that Glass and others assume that meta-analysis can be undertaken only for a particular kind of research – the experimental type – rather than for all types of research; this might limit its applicability.
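As a concrete illustration of the sample-size weighting and homogeneity testing mentioned above, here is a sketch in Python. The small-sample correction and inverse-variance weights follow forms commonly attributed to Hedges, and the variance approximation is one widespread convention; it is not presented as the exact equations given by Hedges (1981) or Lipsey (1992):

```python
import math

def hedges_g(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Standardized mean difference with a small-sample correction."""
    df = n_e + n_c - 2
    pooled_sd = math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / df)
    d = (mean_e - mean_c) / pooled_sd
    correction = 1 - 3 / (4 * df - 1)  # shrinks d slightly in small samples
    return correction * d

def variance_of_g(g, n_e, n_c):
    """A common large-sample approximation to the variance of g."""
    return (n_e + n_c) / (n_e * n_c) + g**2 / (2 * (n_e + n_c))

def weighted_mean_and_q(effects, variances):
    """Inverse-variance weighted mean effect size and homogeneity Q.

    Q is referred to a chi-square distribution with k - 1 degrees of
    freedom; a large Q suggests the studies do not share one effect."""
    weights = [1 / v for v in variances]
    mean_effect = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    q = sum(w * (g - mean_effect) ** 2 for w, g in zip(weights, effects))
    return mean_effect, q

# Invented study summaries for illustration only: (g, n_e, n_c)
studies = [(0.40, 30, 30), (0.25, 50, 55), (0.60, 20, 22)]
effects = [g for g, _, _ in studies]
variances = [variance_of_g(g, ne, nc) for g, ne, nc in studies]
print(weighted_mean_and_q(effects, variances))
```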

Glass et al. (1981) suggest that meta-analysis is particularly useful when it uses unpublished dissertations, as these often contain weaker correlations than those reported in published research, and hence act as a brake on misleading, more spectacular generalizations. Meta-analysis, it is claimed (Cooper and Rosenthal 1980), is a means of avoiding Type II errors (failing to find effects that really exist), synthesizing research findings more rigorously and systematically, and generating hypotheses for future research. However, Hedges and Olkin (1980) and Cook et al. (1992: 297) show that Type II errors become more likely as the number of studies included in the sample increases. Further, Rosenthal (1991) has indicated a method for avoiding Type I errors (finding an effect that, in fact, does not exist) that is based on establishing how many unpublished studies that average a null result would need to be undertaken to offset the group of published statistically significant studies. For one example he shows a ratio of 277:1 of unpublished to published research, thereby indicating the limited bias in published research.
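Rosenthal’s calculation can be sketched as follows, using the widely cited ‘file drawer’ formula: combine the one-tailed p-values of the k published studies as z-scores, then ask how many unpublished studies averaging a null result would pull the combined result back to p = .05. The p-values below are invented for illustration:

```python
from scipy.stats import norm

def fail_safe_n(one_tailed_p_values, alpha=0.05):
    """Number of unpublished null-result studies needed to reduce the
    combined significance of the published studies to the alpha level."""
    z_scores = [norm.isf(p) for p in one_tailed_p_values]
    k = len(z_scores)
    z_alpha = norm.isf(alpha)  # about 1.645 for one-tailed .05
    return (sum(z_scores) ** 2) / (z_alpha ** 2) - k

# Invented p-values for illustration only
print(fail_safe_n([0.01, 0.02, 0.03, 0.04, 0.001]))
```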

Meta-analysis is not without its critics (e.g. Wolf 1986; Elliott 2001; Thomas and Pring 2004). Wolf (1986: 14–17) suggests six main areas:

- It is difficult to draw logical conclusions from studies that use different interventions, measurements, definitions of variables, and participants.
- Results from poorly designed studies take their place alongside results from higher quality studies.
- Published research is favoured over unpublished research.
- Multiple results from a single study are used, making the overall meta-analysis appear more reliable than it is, since the results are not independent.
- Interaction effects are overlooked in favour of main effects.
- Meta-analysis may have ‘mischievous consequences’ (Wolf 1986: 16) because its apparent objectivity and precision may disguise procedural invalidity in the studies.

Wolf (1986) provides a robust response to these criticisms, both theoretically and empirically. Wolf (1986: 55–6) also suggests a ten-step sequence for carrying out meta-analyses rigorously:

1 Make clear the criteria for inclusion and exclusion of studies.

