

2 Search for unpublished studies.
3 Develop coding categories that cover the widest range of studies identified.
4 Look for interaction effects and examine multiple independent and dependent variables separately.
5 Test for heterogeneity of results and the effects of outliers, graphing distributions of results.
6 Check for inter-rater coding reliability.
7 Use indicators of effect size rather than statistical significance.
8 Calculate unadjusted (raw) and weighted tests and effect sizes in order to examine the influence of sample size on the results found (points 5 and 8 are illustrated in the sketch after this list).
9 Combine qualitative and quantitative reviewing methods.
10 Report the limitations of the meta-analyses conducted.
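
Points 5, 7 and 8 lend themselves to a worked illustration. The sketch below, in Python with invented study values, computes an unadjusted and an inverse-variance weighted mean effect size, together with Cochran's Q statistic as one common test of heterogeneity; it illustrates the arithmetic only, not any particular package.

```python
# Hypothetical study results as (effect size d, variance of d);
# the numbers are invented purely for illustration.
studies = [(0.40, 0.04), (0.15, 0.01), (0.62, 0.09), (0.30, 0.02)]

weights = [1.0 / v for _, v in studies]  # inverse-variance weights

# Point 8: unadjusted (raw) versus weighted mean effect size, to show
# the influence of study size (via variance) on the pooled result.
raw_mean = sum(d for d, _ in studies) / len(studies)
weighted_mean = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Point 5: Cochran's Q, compared against a chi-square distribution
# with k - 1 degrees of freedom, tests heterogeneity of results.
q = sum(w * (d - weighted_mean) ** 2 for (d, _), w in zip(studies, weights))
df = len(studies) - 1

print(f"raw mean d      = {raw_mean:.3f}")
print(f"weighted mean d = {weighted_mean:.3f}")
print(f"Q = {q:.2f} on {df} df")
```

The gap between the raw and the weighted means is itself informative: a large difference signals that small and large studies are telling different stories.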

One can add to this the need to specify the research questions being asked, the conceptual frameworks being used, the review protocols being followed, the search and retrieval strategies being used, and the ways in which the syntheses of the findings from several studies are brought together (Thomas and Pring 2004: 54–5).

Gorard (2001: 72–3) suggests a four-step model for conducting meta-analysis:

1 Collect all the appropriate studies for inclusion.
2 Weight each study ‘according to its size and quality’.
3 List the outcome measures used.
4 Select a method for aggregation, based on the nature of the data collected (e.g. counting those studies in which an effect appeared and those in which an effect did not appear, or calculating the average effect size across the studies); both options are sketched in the fragment after this list.
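
To illustrate step 4, the fragment below contrasts the two aggregation methods Gorard mentions, vote counting and averaging the effect size; the effect sizes and the threshold for ‘an effect appeared’ are assumptions made for the example.

```python
# Hypothetical effect sizes, one per study (invented values).
effect_sizes = [0.45, 0.02, 0.38, -0.05, 0.51, 0.12]

# Method 1: vote counting. Tally studies in which an effect appeared
# (here, arbitrarily, d >= 0.2) against those in which it did not.
showed_effect = sum(1 for d in effect_sizes if d >= 0.2)
no_effect = len(effect_sizes) - showed_effect

# Method 2: average effect size across the studies.
mean_d = sum(effect_sizes) / len(effect_sizes)

print(f"effect / no effect: {showed_effect} / {no_effect}")
print(f"mean effect size:   {mean_d:.3f}")
```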

Evans and Benefield (2001: 533–7) set out six principles for undertaking systematic reviews of evidence:

1 A clear specification of the research question which is being addressed.
2 A systematic, comprehensive and exhaustive search for relevant studies.
3 The specification and application of clear criteria for the inclusion and exclusion of studies, including data extraction criteria: published; unpublished; citation details; language; keywords; funding support; type of study (e.g. process or outcome-focused, prospective or retrospective); nature of the intervention; sample characteristics; planning and processes of the study; outcome evaluation (one way of structuring such a record is sketched after this list).
4 Evaluations of the quality of the methodology used in each study (e.g. the kind of experiment and sample; reporting of outcome measures).
5 The specification of strategies for reducing bias in selecting and reviewing studies.
6 Transparency in the methodology adopted for reviewing the studies.
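
The data extraction criteria in the third principle amount to a coding sheet for each study. One way of making those fields explicit is sketched below; the record structure and field names are illustrative assumptions, not a schema given by Evans and Benefield.

```python
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    """One data extraction record per study (hypothetical schema)."""
    citation: str
    published: bool
    language: str
    keywords: list[str] = field(default_factory=list)
    funding_support: str = ""
    study_type: str = ""            # e.g. "outcome-focused, prospective"
    intervention: str = ""
    sample_characteristics: str = ""
    outcome_evaluation: str = ""

# An invented example entry.
record = StudyRecord(
    citation="Smith (2003)",
    published=True,
    language="English",
    keywords=["reading", "intervention"],
    study_type="outcome-focused, prospective",
)
print(record.citation, "->", record.study_type)
```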

Gorard (2001) acknowledges that subjectivity can enter into meta-analysis. Since so much depends upon the quality of the results that are to be synthesized, there is the danger that adherents may simply multiply the inadequacies of the database and the limits of the sample (e.g. trying to compare the incomparable). Hunter et al. (1982) suggest that sampling error and the influence of other factors have to be addressed, and that these should account for less than 75 per cent of the variance in observed effect sizes if the results are to be acceptable and able to be coded into categories.
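
A minimal sketch of that 75 per cent check, assuming the Hunter-Schmidt ‘bare bones’ estimate of the variance attributable to sampling error in correlations; all input values are invented.

```python
# Hypothetical per-study correlations and sample sizes (invented).
rs = [0.55, 0.10, 0.45, -0.05, 0.30]
ns = [60, 120, 45, 200, 80]
total_n = sum(ns)

# Sample-size-weighted mean correlation and observed variance.
r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n

# 'Bare bones' estimate of the variance expected from sampling error
# alone, for k studies (Hunter-Schmidt).
k = len(rs)
var_err = (1 - r_bar ** 2) ** 2 * k / total_n

share = var_err / var_obs
print(f"sampling error accounts for {share:.0%} of the observed variance")
# Under the rule above, a share below 75 per cent leaves enough real
# variation for the results to be coded into categories.
```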

The issue is clear here: coding categories have to declare their level of precision, their reliability (e.g. inter-coder reliability – the equivalent of inter-rater reliability, see Chapter 6) and validity (McGaw 1997: 376–7).
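
Inter-coder reliability of this kind is commonly summarized with a chance-corrected index such as Cohen's kappa. A minimal sketch, assuming two coders assigning the same ten studies to invented categories:

```python
from collections import Counter

# Hypothetical category codes assigned by two coders to the same
# ten studies (invented data).
coder_a = ["process", "outcome", "outcome", "process", "outcome",
           "process", "process", "outcome", "process", "outcome"]
coder_b = ["process", "outcome", "process", "process", "outcome",
           "process", "outcome", "outcome", "process", "outcome"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: the product of each coder's marginal proportions,
# summed over the categories.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")
```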

To the charge that selection bias will be as strong in meta-analysis – which embraces both published and unpublished research – as in solely published research, Glass et al. (1981: 226–9) argue that it is necessary to counter gross claims made in published research with more cautious claims found in unpublished research.

Because the quantitative mode of (many) studies demands only a few common variables
