CMT-CAPT Skills Checklist Technical Manual. - NAAC

Appendix C) approved the standard setting plan proposed by Measurement Incorporated (MI), the contractor for the CMT and the second generation Skills Checklist. Twenty-two general and special education teachers, curriculum coordinators, and school administrators participated in the three-day standard setting activity January 25, 26, and 27, 2006, along with staff from the department and two consultants to the project. A five-member team of psychometricians and staff from Measurement Incorporated conducted the standard setting activities.

In the absence of student data or completed checklists, the standards could be set using one of two approaches. A method incorporating elements of both approaches was employed to take advantage of the more positive aspects of each. Therefore, a two-round process was used. In Round 1, panelists studied the checklists and the Performance Level Descriptors (PLDs) (see Appendix I) and then constructed sample profiles that matched the borderline points for Levels 1-2 (Basic-Proficient) and 2-3 (Proficient-Independent). In Round 2, panelists reviewed one another's Round 1 sample profiles and classified them into one of the three levels.

For Round 1, the panelists were divided into four groups: Grades 3-4, Grades 5-6, Grades 7-8-High School Language Arts, and Grades 7-8-High School Mathematics. After orientation and some practice, these four groups worked in two-person teams to create sample student profiles that illustrated the performances of students just below or just above an imaginary cut score for Proficient. They then repeated the exercise for students just below or just above an imaginary cut score for Independent. By the end of Round 1, panelists had generated enough hypothetical student profiles to form small distributions around each cut score.

During Round 2, panelists reviewed and discussed the profiles created during Round 1. Several holistic rating methods were considered, including the judgmental policy capture (JPC) method (Jaeger, 1995) and the dominant profile judgment (DPJ) method (Plake, Hambleton, & Jaeger, 1997). A generalized holistic method (cf. Cizek, Bunch, & Koons, 2004) satisfied the requirements of the present situation. The method makes no special assumptions about the model (it can be either compensatory or conjunctive, although Measurement Incorporated had used it exclusively with a compensatory model in other standard-setting activities) or about data analytic techniques (MI used simple means or medians and midpoints, as in the contrasting-groups method described in Cizek, Bunch, & Koons, 2004). The method treats the rating of profiles exactly as the JPC or DPJ methods would, but it provides a more straightforward data analytic procedure than either. Therefore, the generalized holistic method was used.

Measurement Incorporated created a form for the panelists that included all the profiles generated during Round 1. To complete the form, the panelists considered each profile along with the Performance Level Descriptor for each level. After studying the Performance Level Descriptor and the profile, the panelists made an entry in the final column of the form. After Round 2, MI facilitators tallied the ratings provided by the panelists and reported the results. In the discussion about follow-up
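The data analysis described above (means or medians and midpoints, as in the contrasting-groups method) can be illustrated with a minimal sketch. This is not MI's actual procedure or data; the scores below are hypothetical, and the midpoint rule is only one plausible reading of the contrasting-groups approach, in which the cut score falls halfway between the central tendencies of two adjacent rating groups.

```python
# Illustrative sketch of a contrasting-groups-style cut score computation.
# Assumption: each profile rated by panelists has a total checklist score,
# and the cut score is the midpoint between the medians (or means) of the
# score distributions of two adjacent performance-level groups.
from statistics import median

def midpoint_cut(lower_group, upper_group, center=median):
    """Return the midpoint between the central tendency of two adjacent
    groups of profile scores (e.g., rated Basic vs. rated Proficient)."""
    return (center(lower_group) + center(upper_group)) / 2

# Hypothetical total scores for profiles panelists classified into levels
basic_scores = [12, 15, 14, 16, 13]       # rated Level 1 (Basic)
proficient_scores = [22, 25, 21, 24, 23]  # rated Level 2 (Proficient)

cut_basic_proficient = midpoint_cut(basic_scores, proficient_scores)
print(cut_basic_proficient)  # midpoint of medians 14 and 23 -> 18.5
```

Passing `center=mean` instead of the default `median` would give the mean-based variant the text also mentions; both are simple summaries of the two overlapping rating distributions.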