New Thinking in Response to Intervention
A Comparison of Computer-Adaptive Tests and Curriculum-Based Measurement Within RTI

Edward S. Shapiro
Center for Promoting Research to Practice
Lehigh University

Practice Brief
Core Progress, STAR Early Literacy, STAR Math, STAR Reading, and Renaissance Learning are trademarks of Renaissance Learning, Inc., and its subsidiaries, registered, common law, or pending registration in the United States and other countries.

Please note: Reports are regularly reviewed and may vary from those shown as enhancements are made.

© 2012 by Renaissance Learning, Inc. All rights reserved. Printed in the United States of America.

This paper was commissioned for production by Renaissance Learning but represents solely the opinion of the author. Correspondence concerning this paper should be addressed to Edward S. Shapiro, Ph.D., Director, Center for Promoting Research to Practice, Lehigh University, L-111 Iacocca Hall, 111 Research Dr., Bethlehem, PA 18015. Email: ed.shapiro@lehigh.edu.

This publication is protected by U.S. and international copyright laws. It is unlawful to duplicate or reproduce any copyrighted material without authorization from the copyright holder. For more information, contact:

RENAISSANCE LEARNING, INC.
P.O. Box 8036
Wisconsin Rapids, WI 54495-8036
(800) 338-4204
www.renlearn.com
answers@renlearn.com
Contents

Introduction
Response to Intervention Defined
Comparison of CAT and CBM
Discussion
Conclusion
References
Acknowledgements

Figures

Figure 1: RTI Pyramid
Figure 2: STAR Screening Reports
Figure 3: CBM Screening Report
Figure 4: CBM Screening Reports
Figure 5: CBM Progress-Monitoring Report
Figure 6: STAR Goal Setting Wizard
Figure 7: STAR Progress-Monitoring Report
Figure 8: Understanding STAR Scaled Score
Figure 9: STAR Diagnostic Report
Figure 10: STAR Instructional Planning Report

Table

Table 1: Components of RTI
Dear Educator,

When I was asked to consider writing this practice brief, I was, at the time, consulting with a number of elementary schools in the planning and implementation of Response to Intervention (RTI) models. One of the first questions tackled by those schools was which measure should be selected for universal screening and progress monitoring. The research literature, as well as my own experience, immediately pointed to Curriculum-Based Measurement (CBM) as a logical recommendation for these schools. When the schools examined the nature of CBMs, especially for reading and early literacy, many asked whether these were the only measures available, and whether there were other options that shared equally strong research support. In consulting the National Center on Response to Intervention Screening and Progress Monitoring Tools Charts, it was evident that there were many other choices.

CBM has served those implementing RTI models very well for many years, especially RTI models focused on reading and early literacy. Indeed, the research support for CBM within RTI models remains very strong. It is likely that CBM will continue to be used extensively in RTI models in the future.

However, when we look at the national investment of the U.S. Department of Education currently underway in the development of next-generation statewide assessment measures, Computer-Adaptive Testing is emerging as the foundation for the construction of these tests. As RTI models evolve, it is likely that educators will look to Computer-Adaptive Testing for universal screening and progress monitoring to support the more immediate instructional decision-making processes called for within those models.

My purpose in writing this practice brief is to show educators that while CBM has served, and continues to serve, us well as a key measure for RTI models, there are alternatives to CBM that can provide the support needed to make informed instructional decisions. The STAR measures, a set of Computer-Adaptive Tests for reading and math, are one option that can help move RTI into next-generation testing using the assessment measures of the future, today.

Edward S. Shapiro, Ph.D.
October 14, 2011
Introduction

Over the past few years, there has been increasing interest in the process known as Response to Intervention (RTI) within the education community. RTI is a multi-tiered system of supports that organizes services to all students in an effort to intervene early with those students whose performance is predictive of subsequent failure. Applied to both academics and behavior, RTI is increasingly viewed as an important and major effort to reform the delivery of instruction, especially at the elementary level.

Key to the RTI process is the use of assessment tools that provide universal screening and progress monitoring. These assessments work together to help educators identify students with trajectories of learning that are not likely to lead to grade-level goals. They also provide formative feedback about the success of interventions used to shift a student's trajectory to meet grade-level goals.
Among the many methods of assessment, Curriculum-Based Measurement (CBM) has been most closely associated with RTI models. CBM has a long and well-established research base that makes it a logical and natural choice. However, equating CBM with RTI is unfortunate, as it is not the only assessment process that meets the requirements of an effective RTI system. The purpose of this paper is to compare two research-based systems of assessment useful within an RTI model: Computer-Adaptive Testing (CAT) and Curriculum-Based Measurement.

The paper begins with a brief examination of RTI, presents the key components of RTI, and details the assessments used within the RTI model. Examples that show both the CAT and CBM approaches are presented, with a final brief discussion contrasting the two systems.
[Figure 1. RTI Pyramid: a triangle with Tier 1 at the base, Tier 2 above it, and Tier 3 at the top. Intensity of intervention increases toward the top, and students move between tiers based on response.]
Response to Intervention Defined

What we do in the early years of formal schooling makes a huge difference in the academic health of a child's development. In the earliest years of school, children must learn to read, do math, and write. We know that students who struggle in first grade, especially in reading, have a very high probability of academic struggles throughout school. In one of the most widely cited studies, Juel (1988) showed that children who fail to master basic reading skills at the end of first grade have an 88% probability of still showing struggles in fourth grade. Many others over the past two decades have replicated these findings (e.g., Abbott, Berninger, & Fayol, 2010; Silberglitt, Burns, Madyun, & Lail, 2006; Snow, Porche, Tabors, & Harris, 2007).

As part of the reauthorization of the Individuals with Disabilities Education Act (IDEA), Congress recognized the importance of the early identification of children who are potentially at risk of academic failure. IDEA allows schools to use a process for identifying students with specific learning disabilities
(SLD) that examines the degree to which students respond to high-quality, research-based interventions. The provision is considered a landmark shift in the conceptualization of SLD. It enables educators to use a process that differentiates students who respond to instruction designed to shift their learning trajectory from those who need much more intensive forms of specialized instruction to sustain academic success (i.e., those in need of special education). The term "Response to Intervention" emerged as the way to describe this process.

Although RTI began as a way to assist educators in determining the presence of a specific learning disability, it quickly emerged as much more. RTI is viewed as a method for reforming the way academic services are delivered to all students.

RTI is often conceptualized as a triangle (Figure 1). The instructional practices at the base of the triangle, known as core instruction or Tier 1, are the foundation on which all future academic success rests. When the Tier 1 instruction is research based and delivered accurately, most students achieve academic success without additional intervention.

Despite strong core instruction, some students show evidence that if their learning trajectory continues at the current rate, they will not meet the expected academic goals. These students need additional instructional supports supplemental to core instruction. Typically called Strategic or Tier 2 support, the instructional interventions at this tier add a dimension of teaching to address the identified needs of students. The objective is to correct a student's trajectory of learning and erase the gap between targeted students and peers. This is done by focusing on three key components: small-group instruction, a specified number of minutes per day or week, and teachers trained to deliver the remedial instruction.

Students who do not respond to Tiers 1 and 2 are placed in an intensive, or Tier 3, level of support, which is supplemental to Tier 1. Students in Tier 3 either were not responsive to Tier 2 interventions or were identified earlier (based on universal screening measures) as in need of immediate intensive interventions. Students identified as needing Tier 3 support are at high risk for future academic failure. They need a level of support that requires focused instruction, very small groups, and extensive intervention. This is best accomplished by school personnel who have expertise working with students requiring more intensive intervention. Students at Tier 3 who do not respond to interventions at rates that would significantly alter their learning trajectory are considered potentially eligible for identification as students with a specific learning disability.
Components of RTI

An RTI process consists of key components, including universal screening, high-quality core instruction, progress monitoring, tiered interventions, collaborative data-based decision making, parent involvement, and administrative support (Table 1). A full description and discussion of these components is beyond the scope of this paper and can be found in excellent practitioner resources such as Burns and Gibbons (2008), Quinn (2009), and Wright (2007). This paper focuses on assessment within RTI.
Table 1. Components of RTI

Universal Screening
• All children assessed at beginning, middle, and end of school year on skills identified as highly predictive of future success or failure
• Assesses the overall success of the core instruction (Tier 1)

High-Quality, Standards-Aligned Instruction
• Core instruction delivered to all students using a research-based, empirically supported program that is closely aligned to state standards and/or Common Core State Standards

Progress Monitoring
• Conducted on an ongoing basis over time for students who are in need of tiered supports beyond Tier 1
• Assessment is more frequent than universal screening, usually at least once per week
• Data are sensitive to student improvement over time; sufficient data to establish a reliable trend must be collected

Tiered Interventions
• Supplemental, research-based instructional interventions to core instruction, derived from a problem-solving process and focused on student needs
• Usually delivered in small groups, with larger groups at Tier 2 than Tier 3

Collaborative, Data-Based Decision Making
• Teams of school professionals examine multiple data sources to discuss the appropriate intervention to impact the child
• The number and nature of the team structure is often defined by the local context (i.e., school-level versus grade-level data teams)

Parental Involvement
• Engagement of parents in the process of understanding and supporting the efforts to provide instructional support
• Maintain close and frequent communication with parents about student progress

Administrative Support
• Leadership at central, building, and teacher levels provides key supports to the process
• Administrative support for infrastructure, schedule, materials, ongoing professional development, and building consensus
Assessment Processes within RTI

RTI is only successful when all the components listed in Table 1 are simultaneously in place. However, assessment plays a pivotal role. The entire RTI process relies on the accurate and effective use of assessment for universal screening and progress monitoring. These two processes provide the focus for decisions made by collaborative teams and directions for instructional changes needed to improve student performance.

Universal screening provides periodic windows into student performance by comparing each student against the performance of peers. The measures are obtained at intervals across the school year (usually beginning, middle, and end of year) and used to establish the expected level of growth of typically performing peers. Benchmark assessments should be relatively brief, inexpensive, easily administered, and easily scored. With screening, it is understood that more students will be identified as at risk than truly are. Likewise, there will be some students not identified who are later found to be having difficulties. As such, the benchmark measure must be combined with additional data in making important diagnostic decisions about student performance.
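The screening trade-off described above (some students over-identified, others missed) can be made concrete with a small sketch. The cut score, student scores, and outcomes below are entirely hypothetical; sensitivity and specificity are simply the standard ways of quantifying these two kinds of screening error.

```python
def screening_accuracy(scores, outcomes, cut_score):
    """Flag each student as at risk if score < cut_score, then compare
    the flag against the true outcome (True = later difficulty)."""
    tp = fp = tn = fn = 0
    for score, had_difficulty in zip(scores, outcomes):
        flagged = score < cut_score
        if flagged and had_difficulty:
            tp += 1          # correctly identified
        elif flagged and not had_difficulty:
            fp += 1          # over-identified: flagged but did fine
        elif not flagged and had_difficulty:
            fn += 1          # missed: not flagged but struggled later
        else:
            tn += 1
    sensitivity = tp / (tp + fn)   # share of struggling students caught
    specificity = tn / (tn + fp)   # share of on-track students not flagged
    return sensitivity, specificity

# Hypothetical fall screening scores and later outcomes for 8 students
scores = [12, 45, 30, 60, 18, 52, 25, 70]
outcomes = [True, False, True, False, True, False, False, False]
sens, spec = screening_accuracy(scores, outcomes, cut_score=35)
```

With this cut score every struggling student is caught (sensitivity 1.0) at the cost of one false positive (specificity 0.8), which mirrors the point in the text: screening deliberately errs toward over-identification, so flagged students need follow-up data before diagnostic decisions are made.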
Progress monitoring plays a critical role in evaluating how well a student is responding to intervention. Progress-monitoring measures must be frequent, sensitive to instructional change over a short period of time, predictive of overall success as measured by the benchmark assessment, and able to drive instructional decisions. Progress-monitoring measures must assist educators in determining if interventions are effective and, perhaps more importantly, what to do in cases when they are not.
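One common way to judge whether an intervention is shifting a student's trajectory is to fit a trend line to the weekly monitoring scores and read off the rate of improvement. The sketch below uses an ordinary least-squares slope over hypothetical weekly data; the brief itself does not prescribe a particular trend method, so treat this as one illustrative approach.

```python
def weekly_growth_rate(scores):
    """Return the ordinary least-squares slope (score gain per week)
    of a series of equally spaced weekly scores."""
    n = len(scores)
    weeks = range(n)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Six weeks of hypothetical progress-monitoring scores
scores = [28, 31, 29, 34, 36, 38]
slope = weekly_growth_rate(scores)   # estimated gain per week
```

A team would compare the estimated slope against the rate needed to reach the grade-level goal by the target date; a flat or insufficient slope signals that the intervention should change.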
Comparison of CAT and CBM

Curriculum-Based Measurement

Probably the most widely used and accepted assessment tool within RTI to date is Curriculum-Based Measurement (CBM). CBM measures are rate based, efficient, and easily administered. The reading measure is administered to students individually in approximately 1-2 minutes per child. The math measure is given to students in small groups and takes 5-10 minutes per group. As a rate-based measure, fluency (i.e., accurate responses per unit of time) is the key outcome of student performance.
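As a minimal illustration of the rate-based metric, an oral reading probe is typically scored as words correct per minute (WCPM): words attempted minus errors, scaled to one minute. The probe numbers below are hypothetical.

```python
def words_correct_per_minute(words_attempted, errors, seconds):
    """Fluency as accurate responses per unit of time."""
    return (words_attempted - errors) / (seconds / 60.0)

# Hypothetical one-minute probe: 72 words attempted, 4 errors
wcpm = words_correct_per_minute(words_attempted=72, errors=4, seconds=60)
```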
Common Misconceptions

A common misconception in RTI is that the skills taught in intervention must be assessed separately. On the contrary, there are several ways to monitor a student's progress in a skill area, including computer-adaptive measures that assess a broad range of skills with each test. In other words, it is possible to get a good indication of student progress in phonics with a test that assesses phonemic awareness, phonics, vocabulary, and comprehension at one time.
First developed by Deno and his colleagues (e.g., Deno, 1985; Deno, Marston, & Tindal, 1985), CBM was designed to serve as an index of overall growth and skill development across curriculum objectives. The measures were viewed as single indices to signal change in student learning in the area being assessed. For example, a measure of oral reading fluency (words read correctly within one minute) was found to be a very strong indicator of overall acquisition of reading skills (Deno, Mirkin, & Chiang, 1982). It reflects a student's overall performance in the skills embedded within learning to read: phonemic awareness, phonics, fluency, vocabulary development, and comprehension. A student's score on an Oral Reading Fluency (ORF) measure is then used to index how well reading instruction is resulting in improved reading performance. Substantial and repeated research has shown that measures of oral reading fluency are highly predictive of and related to all skills embedded in reading performance, even well into middle school (Denton et al., 2011; Petscher & Kim, 2011), although the correlation with comprehension declines significantly after fourth grade (Shapiro, Solari, & Petscher, 2008). In addition, CBM measures in reading have been found to be highly predictive of outcomes on state assessments (e.g., McGlinchey & Hixson, 2004; Shapiro, Keller, Edwards, Lutz, & Hintze, 2006; Silberglitt, Burns, Madyun, & Lail, 2006).
It is important to understand that while CBMs index overall growth, they are not demonstrations of specific skills. Similarly, while CBM reading indexes whether the instructional interventions are having the desired impact, one would not directly teach children to read out loud faster as a way to increase performance.

In other words, while fluency is what we measure, increasing fluency in reading is not necessarily the skill that needs to be targeted. Many educators using CBM measures make this critical mistake and believe that because we measure a student's fluency to gauge overall reading performance, a student's fluency must then be the target for improvement. Although fluency may be an appropriate concern for improving a student's reading skills, one would not make that determination on the CBM metric alone. CBM measures tell the educator that a student is struggling in reading, but do not point specifically at the needed skills for intervention.
Common Misconceptions

It is a common misconception that CBMs assess specific skills. For example, it is often suggested that a Nonsense Word Fluency (NWF) probe assesses phonics. This is not the case. NWF assesses a student's overall ability to decode but does not identify which phonics skills a student may lack. Is the student struggling with initial consonant sounds? Is the student struggling with medial vowels? A student's NWF score doesn't answer these questions. One would never design an intervention to teach students to read nonsense words aloud faster. Therefore, determining interventions based on the NWF score requires additional diagnostic information.
Computer-Adaptive Testing

Recently, Computer-Adaptive Testing (CAT) has emerged as an important option for assessment within RTI models. Based on the Item Response Theory (IRT) approach to test construction, CATs adjust the items administered based on student responses and the difficulty of the items. In other words, when a student answers a question correctly, the student is then given a more difficult question. Student responses cue shifts in subsequent items and result in an indication of the skills attained and not attained across an assessment domain. These assessments include large item banks, are not timed, and are based on the accuracy of student responses. Because CAT measures are carefully calibrated, the test quickly pinpoints the skill sets that represent a student's academic achievement level. Some CAT assessments can be given in 10 to 25 minutes and are individually administered via computer.
It is important to understand that Computer-Adaptive Tests adjust based on a student's accuracy in answering items, regardless of the time needed to answer the question. In contrast to CBM measures, which use fluency as the key indicator of student performance, CATs use a student's actual pattern of right and wrong answers to select the next question from a vast item bank that spans multiple domains. As such, CAT measures are much more focused on skills within the various domains and sub-domains of academic areas than CBM measures are.
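The adaptive logic described above can be sketched in a few lines. This is a deliberately simplified illustration with a toy item bank and a fixed, shrinking step size; operational CATs such as STAR use IRT ability estimation and item-information selection criteria, which this paper does not detail.

```python
# Minimal sketch of adaptive item selection (hypothetical item bank;
# not the actual STAR algorithm).

def run_cat(item_bank, answers, start=0.0, step=1.0):
    """Administer items adaptively: after each response, move the
    difficulty target up (correct) or down (incorrect), halving the
    step so the estimate settles. `answers` maps item id -> bool."""
    ability = start
    administered = []
    for _ in range(len(answers)):
        # choose the unused item whose difficulty is closest to the estimate
        item = min((i for i in item_bank if i["id"] not in administered),
                   key=lambda i: abs(i["difficulty"] - ability))
        administered.append(item["id"])
        ability += step if answers[item["id"]] else -step
        step /= 2  # smaller adjustments as evidence accumulates
    return ability, administered
```

A student who keeps answering correctly is steered toward harder items, so the final estimate lands near the top of the bank; a struggling student is steered toward easier items.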
Although CBM has been widely accepted and adopted within RTI models, other assessments also meet the defining characteristics of screening and progress monitoring, including some Computer-Adaptive Tests. These measures can serve as universal screening tools. They report on the relative standing of students compared to their peers at a single point in time and across time. For example, the technical manuals for STAR Reading and STAR Math (Renaissance Learning, 2011) report moderate to strong correlations with many state tests. Additionally, the reliability and concurrent validity of the STAR measures were found to be consistently moderate to strong in the review by the National Center for Response to Intervention (2010b).
In addition, CAT measures can examine change over short intervals, consistent with progress monitoring. Probably most importantly, Computer-Adaptive Tests provide instructional direction in decision making. These measures not only tell educators how a student is doing, but have a distinct advantage over other measures, including CBM, in pointing educators toward next steps.

The remainder of this paper will examine and illustrate the use of one particular set of CAT measures, the STAR assessments developed by Renaissance Learning. The application of these measures within the RTI framework will be contrasted with the CBM measures in reading and mathematics.
Screening with CAT
STAR Early Literacy (STAR-EL), STAR Reading (STAR-R), and STAR Math (STAR-M) by Renaissance Learning are a suite of CAT measures that meet the requirements of RTI. STAR generates screening reports to assist data teams in determining which students appear to be on track and off track toward grade-level goals. STAR's Screening Report can be run with standard RTI categories (benchmark, on watch, intervention, and urgent intervention) or a state's AYP categories. Both represent important points in the distribution that are predictive of future success or failure.

The STAR Screening Reports in STAR-EL, STAR-R, and STAR-M provide educators with clear information about the standing of any single student relative to his or her peers. The reports also list the specific students in the grade who fall into each category, making the data useful for grouping students during data team meetings. These data are the basis upon which teams in an RTI model identify students in need of intervention.
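The four standard categories amount to percentile-rank cutoffs. The sketch below uses the 40th, 25th, and 10th percentile dividing lines shown on the Screening Reports; the function and variable names are illustrative, not part of the STAR software.

```python
# Hypothetical sketch of the standard RTI screening categories, using the
# percentile-rank cutoffs (40, 25, 10) shown on the Screening Reports.

def rti_category(percentile_rank):
    """Map a percentile rank to one of the four standard RTI categories."""
    if percentile_rank >= 40:
        return "At/Above Benchmark"
    if percentile_rank >= 25:
        return "On Watch"
    if percentile_rank >= 10:
        return "Intervention"
    return "Urgent Intervention"

def screening_summary(students):
    """Count students per category, as a grade-level Screening Report does.
    `students` maps student name -> percentile rank."""
    counts = {}
    for pr in students.values():
        cat = rti_category(pr)
        counts[cat] = counts.get(cat, 0) + 1
    return counts
```

A data team would read the summary the same way it reads the report: the lower two categories are the candidates for tiered intervention.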
STAR Screening Reports can also be used to examine changes over time in grade-level performance. For example, Figure 2 shows the outcomes of fall and winter STAR Reading benchmark assessments for grade 5. In Scenario A, more students have moved into the benchmark area (green) and fewer remain at the highest level of risk (red). Scenario B shows the opposite outcome, with overall scores moving in the opposite direction of what would be desired. This type of report gives educators insight into the effectiveness of intervention and also the effectiveness of Tier 1 core instruction.
Fall screening (September 2011; 211 students tested):
  At/Above Benchmark (at/above 479 SS, 40 PR): 125 students (59%)
  On Watch (below 479 SS, 40 PR): 36 students (17%)
  Intervention (below 414 SS, 25 PR): 41 students (19%)
  Urgent Intervention (below 326 SS, 10 PR): 9 students (4%)

Winter screening, Scenario A (January 2012; 211 students tested):
  At/Above Benchmark (at/above 520 SS, 40 PR): 146 students (69%)
  On Watch (below 520 SS, 40 PR): 25 students (12%)
  Intervention (below 452 SS, 25 PR): 36 students (17%)
  Urgent Intervention (below 356 SS, 10 PR): 4 students (2%)
Scenario A shows a positive winter outcome: some students have moved out of intervention and into at/above benchmark between the fall and winter screening periods.

Winter screening, Scenario B (January 2012; 211 students tested):
  At/Above Benchmark (at/above 520 SS, 40 PR): 85 students (40%)
  On Watch (below 520 SS, 40 PR): 70 students (33%)
  Intervention (below 452 SS, 25 PR): 47 students (22%)
  Urgent Intervention (below 356 SS, 10 PR): 9 students (4%)
Scenario B shows a negative winter outcome: fewer students are at benchmark and the "on watch" category has expanded.

Key questions to ask based on this and other information: Are you satisfied with the number of students at the highest level of performance? Next, consider the level or score that indicates proficiency. Which students just above proficiency are you "worried about," and what support within or beyond core instruction is warranted? What support is needed for students just below? Do all students represented by your lowest level need urgent intervention?

Figure 2. Screening Reports for STAR Reading in fall and winter, comparing positive and negative scenarios.
Screening with CBM
For fall benchmarking, data obtained from a CBM early literacy measure, Nonsense Word Fluency (NWF), are shown for grade 1 in Figure 3. The data are divided into three categories: average or above average = benchmark (green); below average = some risk (yellow); and well below average = at risk (red). These categories correspond to scores at or above the 25th percentile (benchmark), between the 10th and 24th percentile (below average), and below the 10th percentile (much below average). Each CBM measure reports a rate-based metric, such as the number of words read correctly per minute for R-CBM. With the CBM model, different measures are administered depending on the grade.
[Figure 3. Fall benchmark Nonsense Word Fluency scores (sounds correct) for grade 1 students, sorted by score and divided into Average or Above (above the 25th percentile), Below Average (10th to 25th percentile), and Much Below Average (below the 10th percentile). A companion winter chart plots Words Read Correct per minute with the same three categories.]
Progress Monitoring with CBM
Once a team decides a student needs supplemental instructional support, appropriate interventions are selected. To determine whether the instructional support is successfully impacting the student's learning, progress monitoring is conducted frequently while the student receives the tiered support. Both STAR and CBM measures provide a similar process for progress monitoring.
As an example of progress monitoring with CBM, Figure 5 illustrates weekly monitoring of a 1st grade student using Nonsense Word Fluency. Because the student was found to be in need of supplemental instructional support at the beginning of grade 1, the team placed the student into an intervention focused on improving knowledge of letter-sound combinations. Weekly progress monitoring on NWF was conducted, since NWF is the measure that reflects overall growth in phonics for students in grade 1. The goal set by the team was for the student, starting from a baseline of 34 correct sounds per minute, to reach 63 correct sounds per minute by the end of the school year, a level that reflects achievement at the 50th percentile among the national sample of CBM users of the NWF measure. Over the course of 34 weeks, this was an expected rate of improvement of 0.85 (Rate of Improvement = (63 - 34)/34 weeks = 0.85 sounds per minute per week).
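The goal-setting arithmetic above can be reproduced directly; the baseline of 34 sounds correct per minute is the value implied by the rate-of-improvement formula.

```python
# Reproduces the goal-setting arithmetic above: a baseline of 34 sounds
# correct per minute, a year-end goal of 63, and 34 weeks of intervention.

def expected_roi(baseline, goal, weeks):
    """Expected rate of improvement: the gain needed divided by the weeks available."""
    return (goal - baseline) / weeks

def aimline_value(baseline, roi, week):
    """Score the aimline predicts for a given week of intervention."""
    return baseline + roi * week

roi = expected_roi(baseline=34, goal=63, weeks=34)
print(round(roi, 2))  # 0.85 sounds correct per minute per week
```

The aimline in Figure 5 is simply this linear projection plotted from the baseline to the goal.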
Examination of the report in Figure 5 showed that the student made little initial progress across the first 5 weeks of intervention. After consulting with the data decision team, the intervention was shifted to focus more on blending and decoding, indicated on the figure by a vertical red line. Following implementation of the intervention change, the student's rate of progress was greater than expected, reaching 1.1 sounds correct per minute per week. The team examined the data periodically, and the successful response to the intervention was evident. Similar progress monitoring processes are used for R-CBM as well as CBM Math Computation and Math Concepts/Applications.
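One common way a data team quantifies the rate of progress just described is an ordinary least-squares trend line over the weekly scores, compared against the expected rate. The weekly scores below are invented for illustration; they are not the student's actual data.

```python
# Least-squares trend line over weekly progress-monitoring scores,
# compared with the expected rate of improvement (hypothetical data).

def trend_slope(scores):
    """Ordinary least-squares slope of scores over weeks (0, 1, 2, ...)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def responding(scores, expected_roi):
    """True if the student's actual growth rate meets or beats the goal rate."""
    return trend_slope(scores) >= expected_roi

weekly = [35, 36, 38, 39, 41, 42]  # hypothetical scores after the change
print(round(trend_slope(weekly), 2))
```

If the trend line's slope falls below the aimline's slope for several consecutive checks, teams typically revisit the intervention, as this student's team did at week 5.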
Figure 5. Progress monitoring with CBM for Nonsense Word Fluency: weekly sounds correct per minute from late September through May, with a trend line, an aimline to the goal, and a vertical line marking the intervention change.
Progress Monitoring with CAT
An important aspect of using STAR for progress monitoring is the nature of Computer-Adaptive Testing. Because the measure adjusts the difficulty of the items presented to students depending on the accuracy of their responses, the items answered correctly reflect a broad range of skills acquired by students.

For students in need of intervention, STAR measures are repeated frequently (as often as weekly), producing data that reflect whether students are responding to the intervention. When students fail to respond adequately, educators need guidance as to where the learning is "breaking down." STAR measures produce data reports similar to those produced in RTI models using CBM; however, because STAR is a Computer-Adaptive Test, the data go beyond telling educators how effective their intervention is and suggest next steps for instruction.
Before progress monitoring with STAR, educators use the Goal Setting Wizard to determine the expected rate of improvement. Figure 6 illustrates goal setting for Juanita Vargas, a first grade student with a scaled score of 503, which represents the 18th percentile and places her in the category of needing intervention (for more about scaled scores, see Understanding the STAR Scaled Score below). The team chose the moderate rate of improvement suggested by STAR's Goal Setting Wizard. If Juanita grows at this rate, she will reach the 27th percentile by January.
STAR's suggested rates of improvement come from a growth model, a very large database of student growth patterns. With growth models, educators can make informed decisions about their expectations for growth. STAR Reading's growth model includes over 4.5 million students, the growth model for STAR Math contains almost 1.1 million students, and STAR Early Literacy's model includes almost 200,000 students. Many state tests also use growth models.
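The projection behind such a goal is straight-line arithmetic: starting score plus weekly growth times the number of weeks. The 14-week window below is an assumption chosen for illustration because it is consistent with a start near 503 and a goal near 581 at 5.6 points per week; the Goal Setting Wizard's actual computation is not documented here.

```python
# Hypothetical sketch of a goal-line projection. The 14-week window is an
# assumed value, not taken from the STAR software.

def goal_scaled_score(start_score, weekly_growth, weeks):
    """Project an end-of-period goal from a starting score and a growth rate."""
    return start_score + weekly_growth * weeks

print(goal_scaled_score(503, 5.6, 14))  # about 581, close to the goal in Figure 7
```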
Figure 6. The Goal Setting Wizard suggests a personalized rate of improvement for each student using a research-based growth model.
To monitor Juanita's progress, STAR-EL was administered weekly, as seen in Figure 7. At first, Juanita did not show a growth rate equal to her goal. The team decided to change the intervention after 5 weeks.

The change in intervention resulted in substantial improvement in Juanita's performance that actually surpassed the level of expected growth
initially set for her. Based on her winter universal screening, the team will examine Juanita's data and decide whether she still needs the tiered intervention support. STAR-R and STAR-M provide similar opportunities to conduct progress monitoring for students who are receiving supplemental tiered interventions.
STAR monitors progress using scaled scores. STAR scaled scores represent a set of skills students have a high probability of knowing. As a result, educators can run reports that suggest the skills students scoring at 470 versus 400 should have mastered, thus identifying potential targets for intervention. This is a fundamental element of all CAT measures: student performance is placed on a developmental scale such that all students who score at the same level, regardless of grade, are essentially at the same point in the progression of skills resulting in that score.
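Because scaled scores sit on one developmental scale, a score can be mapped to skills students at that level have likely mastered and skills they are ready to learn. The skill bands and cutoff scores below are invented purely to illustrate that idea; they are not STAR's actual values.

```python
# Hypothetical skill bands on a developmental scale. Both the cutoffs and
# the skill names are invented for illustration.

SKILL_BANDS = [
    (400, "matching initial letter sounds"),
    (470, "blending word parts and phonemes"),
    (540, "reading and understanding complete sentences"),
]

def skills_profile(scaled_score):
    """Split the skill bands into likely-mastered vs. ready-to-learn."""
    mastered = [skill for cutoff, skill in SKILL_BANDS if scaled_score >= cutoff]
    ready = [skill for cutoff, skill in SKILL_BANDS if scaled_score < cutoff]
    return mastered, ready
```

Two students with the same score, regardless of grade, would get the same profile, which is the point of placing everyone on one scale.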
Figure 7. Progress monitoring on STAR-EL for a student in need of tiered support. The Student Progress Monitoring Report for Juanita Vargas (grade 1) plots weekly STAR Early Literacy scaled scores from September through January against a goal line, with the intervention change marked. Her current goal: 581 SS by 1/22/2010, at an expected growth rate of 5.6 SS per week.
Understanding the STAR Scaled Score
To fully understand the STAR measures and their potential use in RTI, one must understand the STAR scaled score. A scaled score is an indication of a student's placement on a test scale. When we weigh ourselves, we rely on a scale of "pounds" to differentiate our weight, regardless of our age. Whether we weigh an infant or a middle-aged adult, we can easily communicate the relative differences between their weights because particular weights define normal for a person's age. Although we weigh infants and adults on the same scale (i.e., pounds), our expectation for where they fall on the scale is different depending on their ages.

STAR Reading and STAR Math use a scale of 1,400 scaled score points. STAR Early Literacy's scale ranges from 300 to 900, spanning pre-K through grade 3. Because the STAR scale spans the entire school life of a child, the STAR scaled score tells us where in the learning process the student falls. Although all students are placed on the same 0 to 1,400 scale, our expectation for where a child should fall on the scale is related to the child's grade in school.
As illustrated in Figure 8, the expected STAR Reading scaled score varies by grade and represents a smooth, gradually increasing level of performance across grades. For example, students at the 50th percentile in spring of 2nd grade have a scaled score of 334. In 3rd grade, the scaled score for the 50th percentile in spring is 436. In 4th grade, it is 515, and so forth.
When STAR is used for universal screening, the scaled score tells the educator how far a student is behind relative to his or her peers. For example, looking at Figure 8, a 4th grade student who achieves a scaled score of 400 in spring is just below the 25th percentile compared to other fourth graders. Assuming that we want 4th graders to achieve at least at the 40th percentile in the spring, or a score of 470, educators can see there is a gap between where the student should be and where he or she is currently functioning, putting the student at some risk for academic failure.
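The lookup just described can be written as a simple banding function. The cutoffs (319, 403, and 470) are the grade 4 spring scaled scores at the 10th, 25th, and 40th percentiles taken from Figure 8; the function names are illustrative.

```python
# Cutoffs from the grade 4 spring column of Figure 8: the 10th, 25th, and
# 40th percentile scaled scores are 319, 403, and 470.

def grade4_spring_band(scaled_score):
    """Rough percentile band for a grade 4 spring STAR Reading score."""
    if scaled_score >= 470:
        return "at/above 40th percentile"
    if scaled_score >= 403:
        return "25th-39th percentile"
    if scaled_score >= 319:
        return "10th-24th percentile"
    return "below 10th percentile"

def gap_to_target(scaled_score, target=470):
    """Scaled-score points needed to reach the 40th-percentile target."""
    return max(0, target - scaled_score)

print(grade4_spring_band(400), gap_to_target(400))  # 10th-24th percentile 70
```

The 400-scoring student in the example above lands just below the 25th percentile with a 70-point gap to the 40th-percentile target.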
Grade  Percentile   Fall (September)      Winter (January)      Spring (May)
                    Scaled     Est.       Scaled     Est.       Scaled     Est.
                    Score      ORF^a      Score      ORF^a      Score      ORF^a
  1        10          59        5           70       14           81       22
  1        20          64        9           76       18           92       27
  1        25          66       11           78       19          102       30
  1        40          72       15           88       25          150       41
  1        50          78       19           99       29          181       49
  2        10          84       24          106       31          174       45
  2        20         100       30          161       42          227       58
  2        25         110       32          181       47          247       63
  2        40         166       43          232       60          299       78
  2        50         197       51          263       68          334       87
  3        10         184       49          222       55          260       62
  3        20         236       57          274       66          315       74
  3        25         257       62          294       70          337       79
  3        40         310       73          352       82          394       95
  3        50         344       80          384       92          436      105
  4        10         266       61          291       67          319       73
  4        20         321       73          351       81          377       88
  4        25         344       79          372       87          403       94
  4        40         402       94          441      102          470      108
  4        50         445      103          475      110          515      119

Figure 8. Example of STAR Reading scaled scores. STAR Math and STAR Early Literacy scales also span across grades.
^a Est. ORF: Estimated Oral Reading Fluency is only reported for grades 1-4.
Instructional Planning with STAR
The STAR assessments offer information about skill development by signaling where students fall on the Core Progress learning progression linked to the assessment. A learning progression maps the prerequisite skills students must master on the way to more advanced skills. Curriculum-Based Measures point educators to the need to conduct such diagnostic work, whereas STAR provides the specifics of instructional planning as part of the routine assessment process.
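A learning progression can be modeled as a prerequisite graph: a student's instructional targets are the skills whose prerequisites are mastered but which are not yet mastered themselves. The skill names and orderings below are invented for illustration; Core Progress's actual progression is far more detailed.

```python
# Toy model of a learning progression: each skill lists its prerequisites.
# Skill names and orderings are invented for illustration.

PROGRESSION = {
    "letter sounds": [],
    "blending phonemes": ["letter sounds"],
    "decoding words": ["blending phonemes"],
    "reading sentences": ["decoding words"],
}

def ready_to_learn(mastered):
    """Skills whose prerequisites are all mastered but which are not yet
    mastered themselves: candidate targets for instruction."""
    return [skill for skill, prereqs in PROGRESSION.items()
            if skill not in mastered and all(p in mastered for p in prereqs)]

print(ready_to_learn({"letter sounds"}))  # ['blending phonemes']
```

This is the logic behind "skills students are ready to learn next" on the Instructional Planning Reports described below, reduced to its simplest form.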
Figure 9 shows the winter report from STAR-EL for a hypothetical 1st grade student, Juanita Vargas. Juanita's scaled score of 583 places her below benchmark, in the range identified as a late emergent reader. As a result, the RTI data team identified her as someone for whom supplemental intervention was needed. The report shows the nature of skill development embedded within each of the literacy domains assessed by the STAR measure. For example, Juanita's report shows that in phonemic awareness, blending word parts and phonemes would be the right targets for skill development. Within comprehension, reading and understanding words and complete sentences are focal points for intervention development. The report also shows that she has generally good readiness skills, so these skills need not be emphasized within the intervention.

Figure 9. STAR Early Literacy Diagnostic Report for a student in grade 1.
STAR also provides reports for instructional planning. The Instructional Planning Reports in STAR are based on the Core Progress learning progression and suggest the skills students are ready to learn next. An examination of the Instructional Planning Report in Figure 10 shows the extent of instructional detail offered by the STAR-M measure. In this case, the student, Brandon Bollig, has a score of 588, which falls in the "on watch" area. The Instructional Planning Report identifies specific skills within domains of 4th grade math on which instruction should be focused. For example, in Algebra, the STAR assessment recommended an emphasis on determining the operation needed for a given situation as well as determining a multiplication or division sentence for a given situation. STAR predicts that if Brandon's learning is not accelerated, he will remain in the on watch area at the end of the year.

Figure 10. STAR Math Instructional Planning Report for a student in grade 4.
Discussion
The purpose of this paper was to examine the premise that the CBM system equates to implementing RTI. As shown throughout the paper, this is a myth. The nature of the measurement system does not define the RTI model. RTI is about sound decision making and targeted instruction based on good data; this can be accomplished with several assessment systems, including CBM and Computer-Adaptive Tests.
Both STAR and CBM measures can work within RTI models to provide universal screening and progress monitoring. Certainly, each system has advantages and disadvantages. One advantage of STAR is the link to instruction embedded in the assessments; as a result, there is more guidance about what could be done to improve student performance. Conversely, CBM requires educators to determine the instructional implications of a student's performance on their own. An advantage of CBM over STAR is the length of the assessment: whereas CBM measures usually take anywhere from 1 to 8 minutes depending on the domain being assessed, STAR measures usually take between 10 and 25 minutes. This difference in time is not as large, however, when one adds the extra time needed with CBM to work out the instructional implications of the student's performance.
Another difference between the measures is that CBM reading measures (Early Literacy and R-CBM) require individual administration by trained school personnel. CBM math measures can be administered to small groups of students. STAR measures are computer administered. Although both measures involve educational personnel in the administration process, CBM measures tend to be more person-intensive than STAR measures. Certainly, with STAR measures, school personnel must be sure that students remain fully engaged with the computer during the assessment process. However, there is significantly less burden on school personnel in terms of administration time, especially for schoolwide screening three times a year.
A particularly important difference between CBM and STAR measures is the nature of the measure itself. CBM in reading uses measures of fluency (correct per unit time) rather than accuracy (number correct) to reflect overall performance. In reading, a student’s overall performance is reflected in the rate at which he or she performs on the CBM. CBM math tends to use total correct (accuracy), but it can also serve as a fluency measure because one examines total accuracy per unit of time. In contrast, STAR measures reflect a student’s accurate response to items representing various skills. Because STAR is computer adaptive, each test adjusts based on the accuracy of the student’s responses to questions representing multiple skills. As a result, the STAR measures can give us substantial information about the specific skills that students possess. The emphasis on skills provides an opportunity for instructional planning that can be derived directly from the STAR measures. This approach is very different from assessing individual skills according to the number of tasks completed correctly in a certain amount of time.
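The scoring distinction described above can be sketched in a few lines of Python. This is purely an illustrative sketch: the function names, values, and the simple step-up/step-down adjustment are hypothetical, and real computer-adaptive tests such as STAR select items using item response theory models rather than a fixed step.

```python
# Illustrative sketch of the score types discussed above.
# All names and values are hypothetical, not actual CBM or STAR rules.

def fluency_score(correct_responses: int, minutes: float) -> float:
    """CBM-style fluency: correct responses per unit time
    (e.g., words read correctly per minute)."""
    return correct_responses / minutes

def accuracy_score(correct_responses: int, total_items: int) -> float:
    """Accuracy: proportion of items answered correctly,
    regardless of how long the student took."""
    return correct_responses / total_items

def next_difficulty(current: float, answered_correctly: bool,
                    step: float = 0.5) -> float:
    """Greatly simplified adaptive adjustment: present a harder item
    after a correct answer, an easier one after an error."""
    return current + step if answered_correctly else current - step

# A student reading 95 words correctly in a 1-minute probe:
print(fluency_score(95, 1.0))       # 95.0

# The same count scored as accuracy on a 100-item task:
print(accuracy_score(95, 100))      # 0.95

# An adaptive test drifting toward the student's level:
difficulty = 0.0
for correct in [True, True, False, True]:
    difficulty = next_difficulty(difficulty, correct)
print(difficulty)                   # 1.0
```

The contrast is visible in the inputs: the fluency score requires timing the student, whereas the adaptive sequence needs only right/wrong responses to items of known difficulty, which is what allows a skills-based interpretation of the result.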
Conclusion

At the end of the day, users of STAR measures can be confident that every aspect of RTI is easily met by the various reports generated through the STAR assessments. Users of STAR need to become comfortable and fluent with the STAR scaled score, which, on the surface, can seem a bit abstract compared with CBM’s direct measures of student performance (i.e., the number of correct sounds, the number of words read per minute, the number of items correct on a math test). However, the STAR scaled score represents where a student’s skills fall on a longitudinal scale spanning the entire school spectrum from kindergarten through grade 12. Placement of students on a longitudinal scale is not part of the CBM measurement system.
Those considering STAR measures should be confident that the data produced by STAR assessments are accurate, reliable, and valuable for informing decisions that are part of the RTI process. The STAR measures provide a level of instructional planning information that exceeds what is produced by CBM alone, a level of support educators will certainly find useful within the RTI model. Compared with CBM, STAR offers the added and important advantage of direct links to instructional targets.
The objective of effective RTI systems is to provide instructional interventions to students identified as at risk of not meeting grade-level expectations. Assessment tools are critical to RTI. Both STAR and CBM systems measure student performance for the key components of universal screening and progress monitoring to reflect a student’s response to intervention. Clearly, RTI does not equal CBM. STAR measures offer an important and potentially valuable contribution to RTI.
Acknowledgements

Author

Edward S. Shapiro, Ph.D., is professor of school psychology and director of the Center for Promoting Research to Practice in the College of Education at Lehigh University. He is the 2006 winner of the American Psychological Association’s Division of School Psychology Senior Scientist Award. Professor Shapiro has authored 14 books and is best known for his work in curriculum-based assessment and interventions for academic skills problems. Among his many projects, Shapiro recently completed a federal project focused on the development of a multi-tiered RTI model in two districts in Pennsylvania, and he currently directs a U.S. Department of Education grant to train school psychologists as facilitators of RTI processes. He also collaborates with the Pennsylvania Department of Education in developing and facilitating the implementation of the state’s RTI methodology.

Reviewers

Matthew K. Burns, Ph.D., is a professor of educational psychology, coordinator of the School Psychology program, and co-director of the Minnesota Center for Reading Research at the University of Minnesota. Dr. Burns has published over 150 articles and book chapters in national publications and has co-authored or co-edited several books. He is also the editor of School Psychology Review and past editor of Assessment for Effective Intervention. Specific areas in which Dr. Burns has conducted research include response to intervention, assessing the instructional level, academic interventions, and facilitating problem-solving teams.

Pat Quinn is known nationally as “The RTI Guy” and is the author of a widely circulated newsletter dedicated to helping teachers implement Response to Intervention (over 10,000 subscribers). His online training, “Response to Intervention Made Easy,” has been used by thousands of teachers around the country. Mr. Quinn’s latest book is titled Ultimate RTI.

Mike Vanderwood, Ph.D., is currently an associate professor of school psychology at the University of California–Riverside. He conducts research related to multi-tiered systems and English Language Learners. Professor Vanderwood has been involved in education reform for over 15 years and has helped implement multi-tiered systems in general and special education throughout the country. Most of his current research focuses on assessing and improving the quality of assessment and intervention tools used in a multi-tiered approach.

Jim Ysseldyke, Ph.D., is Emma Birkmaier Professor of Educational Leadership in the Department of Educational Psychology at the University of Minnesota. Professor Ysseldyke has been educating school psychologists and researchers for more than 35 years. He has served the University of Minnesota as director of the Minnesota Institute for Research on Learning Disabilities, director of the National School Psychology Network, director of the National Center on Educational Outcomes, director of the School Psychology Program, and associate dean for research. Ysseldyke’s research and writing have focused on enhancing the competence of individual students and enhancing the capacity of systems to meet students’ needs. He is an author of major textbooks and more than 300 journal articles. Ysseldyke is conducting a set of investigations on the use of technology-enhanced progress-monitoring systems to track the performance and progress of students in urban environments. He chaired the task forces that produced the three Blueprints on the Future of Training and Practice in School Psychology, and he is former editor of Exceptional Children, the flagship journal of the Council for Exceptional Children. Ysseldyke has received awards for his research from the School Psychology Division of the American Psychological Association, the American Educational Research Association, and the Council for Exceptional Children. The University of Minnesota presented him a distinguished teaching award, and he received a distinguished alumni award from the University of Illinois.