What Works for Children with Literacy Difficulties? - Digital ...

Secondly, most of the tests provided only reading/spelling age data and not standardised scores. Though apparently easier to interpret, reading and spelling ages are statistically unsatisfactory - for example, establishing whether a gain in test scores is statistically significant is more difficult for reading and spelling ages than for standardised scores. Reading and spelling age data do allow the calculation of the ratio gain - but this is in itself not a very useful statistic, especially for low-attaining groups. Pupils in such groups might not be expected to make a month's gain in reading or spelling age in one calendar month, for perfectly valid reasons. Standardised scores allow much more direct comparisons of amount of gain. Ratio gains have nevertheless been used in much of this analysis because for most of the studies they were the only impact measure which could be calculated.
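The ratio gain described above reduces to simple arithmetic: reading-age gain in months divided by the chronological months elapsed between pre- and post-test. A minimal sketch in Python (the function name and the figures in the example are illustrative, not taken from the report):

```python
def ratio_gain(pre_age_months: float, post_age_months: float,
               duration_months: float) -> float:
    """Reading-age (or spelling-age) gain per month of elapsed time."""
    return (post_age_months - pre_age_months) / duration_months

# e.g. a pupil whose reading age rises from 7y 6m (90 months) to
# 8y 0m (96 months) over a 3-month intervention:
print(ratio_gain(90, 96, 3))  # 2.0 -> two months' gain per calendar month
```

A value of 1.0 would mean a month's gain per calendar month - exactly the benchmark the text warns may be unrealistic for low-attaining groups.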

Thirdly, for many of the tests used it was impossible to calculate effect sizes, which are statistically much more satisfactory than ratio gains. If a standardised test is used, an effect size can be calculated even in the absence of an explicit control group; but if a non-standardised test is used then an effect size can be calculated only if control group data, including the standard deviation, are reported.
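The two cases above can be sketched as follows. The exact conventions are assumptions on my part, not spelled out in the report: for a standardised test the mean gain is divided by the test's norm standard deviation (conventionally 15 for standardised scores), while with a control group Cohen's d is computed using a pooled standard deviation.

```python
import statistics

def effect_size_standardised(pre_scores, post_scores, norm_sd=15.0):
    """Mean gain divided by the norm SD of the standardised test.
    Works without a control group, as the text notes."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return statistics.mean(gains) / norm_sd

def cohens_d(experimental, control):
    """Cohen's d: difference in group means over the pooled SD.
    Needs control-group data, including its standard deviation."""
    n1, n2 = len(experimental), len(control)
    s1, s2 = statistics.stdev(experimental), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(experimental) - statistics.mean(control)) / pooled_sd
```

This makes the report's point concrete: `cohens_d` cannot be computed at all unless the control group's scores (and hence its standard deviation) were reported, whereas `effect_size_standardised` needs only the experimental group's pre- and post-test standardised scores.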

More generally, it is noteworthy that the evaluations included here number just 25. As pointed out in chapter 1, they represent more studies and approaches than that, because for Paired Reading and Parental Involvement in particular one study stands for many, and because several studies contained control groups and/or alternative interventions. But the total is still not impressive, and it is to be hoped that anyone currently devising an intervention will automatically consider the necessity for, and commission, an evaluation. The Government is setting an example here, with its evaluations of Summer Literacy Schools and of the National Literacy Strategy.

4.3 Recommendations

Whenever an educational innovation is devised and tried out, an evaluation should be commissioned.

All evaluations should be based on the gathering of quantitative attainment data, and the data should come from the use of standardised tests, and not non-standardised instruments such as reading- and spelling-age tests.

Properly defined control groups should be set up, through random assignment or at least by matching.

All evaluations should report as a minimum the following information:

• the date when the evaluation was carried out (in addition to the date of reporting)
• the exact age-range of the children involved
• salient characteristics of the children involved, for example whether they had special educational needs
• the numbers of children in the experimental and control groups and in any alternative intervention groups

