What Works for Children with Literacy Difficulties? - Digital ...

How should this mass of comparative detail on impact measures be interpreted?

The first thing to be said is that, given the uneven quality of the description, analysis and reporting of the studies, interpretation needs to be cautious and tentative. It is not the case that some schemes have been proven effective, and others ineffective, without qualification. High RGs and effect sizes do show that the relevant approaches have worked for some children in some circumstances, and may work for others if implemented with similar care in similar circumstances. Low RGs and effect sizes show only that the relevant approaches have not worked for some children in some circumstances, and have no implications for the future: they might still work for other children in different circumstances.

That said, from inspection of the data and from the wider literature, it has been deduced that:

- RGs of exactly 1.0 represent standard progress, or 'holding one's own'. Anything above this represents better than standard progress (but see the next point), while anything less means that the children are slipping (further) behind;

- RGs below 1.4, and effect sizes below 0.25, represent an impact that does not seem educationally significant. Pupils in these schemes did not just stay where they were, and did make some progress in absolute terms; but it was slow, and they made little or no relative progress compared to control groups receiving no special intervention. Thus schemes (or conditions within schemes) with impact measures of this order did not seem to produce any impact over and above ordinary teaching, unless it is argued that 'holding their own' was a good result for such children - in other words, that without the intervention they would have fallen even further behind. Schemes in this group may be considered to have been 'less effective';

- all RGs above 1.4, and almost all effect sizes above 0.25, represent impact that is at least satisfactory, and in some cases excellent. Schemes in this group may be considered to have been 'more effective'.
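The two measures behind these thresholds can be made concrete with a minimal sketch, assuming their conventional definitions in this literature: a ratio gain (RG) of reading-age gain in months divided by chronological months between pre- and post-test, and an effect size computed as Cohen's d on gain scores. All figures in the example are invented for illustration, not drawn from any of the studies discussed here.

```python
# Illustrative sketch of the two impact measures, under the assumed definitions:
#   ratio gain (RG) = gain in reading age (months) / time between tests (months)
#   effect size (d) = (mean gain, intervention - mean gain, control) / pooled SD
import statistics

def ratio_gain(reading_age_gain_months: float, elapsed_months: float) -> float:
    """RG of 1.0 = 'holding one's own'; above 1.4 is read as educationally significant."""
    return reading_age_gain_months / elapsed_months

def effect_size(intervention_gains: list, control_gains: list) -> float:
    """Cohen's d on gain scores, using the pooled standard deviation."""
    n1, n2 = len(intervention_gains), len(control_gains)
    s1, s2 = statistics.stdev(intervention_gains), statistics.stdev(control_gains)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(intervention_gains)
            - statistics.mean(control_gains)) / pooled_sd

# Invented example: 9 months' reading-age gain over 5 months of intervention.
rg = ratio_gain(9, 5)  # 1.8, i.e. in the 'more effective' band (above 1.4)

gains_intervention = [8, 10, 9, 11, 7]  # hypothetical gain scores, in months
gains_control = [5, 4, 6, 5, 5]
d = effect_size(gains_intervention, gains_control)  # well above 0.25
```

The thresholds in the list above (RG 1.0 and 1.4, effect size 0.25) can then be applied directly to `rg` and `d` to sort a scheme into the 'less effective' or 'more effective' group.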

Given this broad distinction, there are a few discrepancies between the RG and effect size lists for reading (not for spelling). Within Reading Intervention (original), the reading-only condition (AI1) had low RGs for accuracy and comprehension, and the phonology-only condition (AI2) had a low RG for accuracy, while the effect sizes seemed satisfactory or even high. Since the statistical analyses in the original report showed that neither AI produced greater gains than the control condition, that finding and the RGs are taken here to be the more accurate. Similarly, in BRP in Worcestershire the phase 1 effect size was low but the RG was satisfactory, and again the RG is taken to be more accurate.

The RG list for reading contains few values below 1.0 ('normal progress'), and all but a few of those RGs arose from control groups. This finding is, however, circular: children receiving ordinary teaching mostly made the progress to be expected of children receiving ordinary teaching. What is more interesting is that one control group (in Paired Reading) had RGs above 1.4, in fact above 2.0, and these children were therefore making better than expected progress despite, apparently, receiving no extra intervention. Perhaps Paired Reading affected a high proportion of the schools in the area in which it took place, and therefore the experimental schools were observed by others, and influenced non-participating schools to raise their game too. If this is true, it would be an argument for implementing initiatives at a fairly high density (though it would play even more havoc with evaluation statistics).

