
Aggregate versus Disaggregate Data in Measuring School Quality


Results validate this hypothesis. However, aggregate data are more likely to suffer from errors in measurement than disaggregate data. As stated in the introduction, this is due to student mobility and, more generally, the fact that averages are not taken over the same group of students.
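The role of student mobility can be made concrete with a small simulation. The sketch below is not the paper's design; the number of schools, cohort size, mobility rate, and variances are illustrative assumptions. It compares an aggregate school gain (the difference of two averages taken over partly different groups of students) with a disaggregate gain averaged over matched students, and shows that the former recovers the true school effect less precisely.

import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students, mobility = 500, 40, 0.3          # illustrative assumptions

true_effect = rng.normal(0.0, 1.0, n_schools)            # true school effects
agg_err, dis_err = [], []

for s in range(n_schools):
    ability = rng.normal(0.0, 1.0, n_students)           # year-1 cohort abilities
    y1 = ability + rng.normal(0.0, 1.0, n_students)      # year-1 scores
    y2 = ability + true_effect[s] + rng.normal(0.0, 1.0, n_students)  # year-2 scores
    stayed = rng.random(n_students) > mobility            # students present both years
    new_ability = rng.normal(0.0, 1.0, (~stayed).sum())   # movers who replace leavers
    y2_new = new_ability + true_effect[s] + rng.normal(0.0, 1.0, new_ability.size)
    # Aggregate gain: difference of averages taken over different groups of students
    agg_gain = np.concatenate([y2[stayed], y2_new]).mean() - y1.mean()
    # Disaggregate gain: average gain over the matched (stayer) students only
    dis_gain = (y2[stayed] - y1[stayed]).mean()
    agg_err.append(agg_gain - true_effect[s])
    dis_err.append(dis_gain - true_effect[s])

print("RMSE of aggregate gains:   ", np.sqrt(np.mean(np.square(agg_err))))
print("RMSE of disaggregate gains:", np.sqrt(np.mean(np.square(dis_err))))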

Table 3 shows the results for samples with a mean school size of 20 and a variance of 250, which implies that 70% of schools will have sizes between 10 and 50 students. This is done to consider the case when policy makers require evaluations at the grade rather than the school level. Results are as before; the aggregate estimator is better than OLS and only slightly worse than the disaggregate estimator.
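Assuming school sizes are drawn from an approximately normal distribution (an assumption made for this check, not a statement from the paper), the 70% figure can be verified directly:

from scipy.stats import norm

mean_size, var_size = 20, 250
sd_size = var_size ** 0.5                              # about 15.8 students
p = norm.cdf(50, loc=mean_size, scale=sd_size) - norm.cdf(10, loc=mean_size, scale=sd_size)
print(f"P(10 < size < 50) = {p:.2f}")                  # roughly 0.71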

As school size increases, the variation in averaged residuals due to students (σₑ²/n) becomes insignificant and the averages come closer to their true means. This implies that aggregation becomes less of a concern for estimating school effects and heteroskedasticity becomes almost negligible. The problem of small or large schools being consistently rewarded almost disappears. In fact, Table 4 shows results for a mean school size of 300 and a variance of 100,000. Differences among ranking measures have narrowed for all estimators, and OLS, the only estimator that does not rely on estimating variance components, performs at its best.
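A short calculation, with an assumed student-level residual variance of one, illustrates how quickly the variance of a school's averaged residual, σₑ²/n, shrinks as enrollment grows:

# Assumed value; the paper's simulations use their own variance components.
sigma_e2 = 1.0
for n in (10, 20, 50, 300, 1000):
    print(f"n = {n:4d}:  Var(average residual) = sigma_e^2 / n = {sigma_e2 / n:.4f}")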

4. Conclusions

Researchers argue that value-added multilevel models provide the most accurate measures of school quality. But most states continue to use aggregate data (usually not in a value-added framework) to rank and reward schools. Research criticizing aggregate models by comparing them with disaggregate models has used ordinary least squares rather than maximum likelihood estimators. This article shows that the criticisms of aggregate models have been overstated.

