Testing More, Teaching Less - American Federation of Teachers

Table 1B. Assessment Inventory, Midwestern School District: Minutes per Test, Number of Annual Administrations, and Total Annual Testing Minutes

| Assessment | Grade | Subject | Minutes per Test | Annual Administrations | Total Annual Minutes |
|---|---|---|---|---|---|
| **Local Assessments** | | | | | |
| Kindergarten Test | K | Developmental indicators | 30 | 3x | 90 |
| DIBELS | K-3 | Reading diagnostic | 10 | 3x | 30 |
| | 4-5 | Reading diagnostic | 10 | 1x | 10 |
| SRI (Scholastic) | 6-9 | Reading | 30 | 3x | 90 |
| TerraNova | K-1 | Reading, math | 120 | 1x | 120 |
| | 2 | Reading, math | 130 | 1x | 130 |
| Curriculum-Based Assessments (CBA) | 2-3 | Math | 50 | 3x | 150 |
| | 6-12 | Math | 50 | 2x | 100 |
| | 6-12 | Reading | 75 | 2x | 150 |
| | 6-12 | Science | 110 | 2x | 220 |
| | 6-12 | Social studies | 110 | 3x | 330 |
| PSAT | 10, 11 | Preliminary SAT | 135 | 1x | 135 |
| NOCTI | 12 | Career/technical EOC | 300 | 1x | 300 |
| GRADE (Pearson) | 3-11 | Literacy (federal grant) | 90 | 3x | 270 |
| **Interim/Benchmark Tests** | | | | | |
| Curriculum-Based Assessments (CBA) | 3-5 | Math | 50 | 4x | 200 |
| Classroom Diagnostic (CDT) | 6-11 | Reading | 75 | 5x | 375 |
| | 6-11 | Math, reading, science | 270 | 3x | 810 |
| **State-Mandated Tests** | | | | | |
| Grades 3-8, 11 | 3-8 | Math, reading | 395 | 1x | 395 |
| | 4, 8 | Science | 160 | 1x | 160 |
| | 5, 8 | Writing | 330 | 1x | 330 |
| | 11 | Math, reading, writing, science | 990 | 1x | 990 |
| End-of-Course* | 9-12 | Math, reading, social studies, science | 120 | 2x | 240 |

*Assumes students take two end-of-course assessments per year for four years.

...that while nearly all respondents believed that the purpose of interim assessments was to guide and improve instruction, 90 percent of the same respondents believed they measure progress toward the end-of-year state test, 80 percent believed they were also diagnostic, and 90 percent thought they were formative (Heppen et al., 2011).

Despite the ubiquity of interim/benchmark testing, research on these tests has failed to demonstrate a positive and statistically significant impact on improving student achievement on end-of-year tests, which is the tests' sole purpose.¹ The studies that have found some positive impacts focused more on data use and accessibility.² It is not clear whether the
1. A national study conducted by Mathematica Policy Research (Furgeson et al., 2012) of 22 Charter Management Organizations (CMOs) found that "frequent formative student assessments" (referring to interim/benchmark assessments) had no positive impact on achievement among CMOs. In a study of Formative Assessments of Student Thinking in Reading (FAST-R), teachers in Boston were given test data aligned to the state assessments every three to 12 weeks, and data coaches helped teachers interpret and use the data. A two-year evaluation of 21 elementary schools found small but statistically insignificant effects (Quint et al., 2008). A one-year study of benchmark assessments in 22 Massachusetts middle schools also showed no effect (Henderson et al., 2007). May and Robinson (2007) studied a benchmark assessment program used in high schools to prepare students for the Ohio Graduation Tests and found no statistically significant impact from the benchmarking for students taking the graduation test for the first time.
