chosen by AEC/BETA and were external to the state. The major deliverable for this alignment study was a clear and readable report delineating the results of the study. Throughout the alignment institute, reviewers concentrated on the four criteria central to the Webb alignment method:

• Categorical Concurrence – The criterion of categorical concurrence between standards and assessment is met if the same or consistent categories of content appear in both the standards and the assessment documents;

• Depth-of-Knowledge (DOK) Consistency – Depth-of-knowledge consistency between standards and assessment indicates alignment if what is elicited from students on the assessment is as demanding cognitively as what students are expected to know and do as stated in the standards;

• Range-of-Knowledge (ROK) Correspondence – The range-of-knowledge correspondence criterion is used to judge whether the span of knowledge expected of students by a standard is the same as, or corresponds to, the span of knowledge that students need in order to correctly answer the assessment items/activities; and

• Balance of Representation – The balance-of-representation criterion is used to indicate the degree to which one objective is given more emphasis on the assessment than another.

Results

The results of this alignment study of the CMT/CAPT Skills Checklist determined that "for the alternate assessment, the content standards and the assessment items are very well aligned with respect to all four alignment criteria – categorical concurrence, depth-of-knowledge, range-of-knowledge, and balance of representation" (Alignment Analysis of Connecticut's Standards and Assessments: Executive Summary, Assessment and Evaluation Concepts Inc., April 2006, p. 18). A copy of this report is included in Appendix J.

STANDARD SETTING

Standard setting methodology

The standards for the CMT/CAPT Skills Checklist were set in January 2006, prior to testing, because of a deadline imposed on the CSDE by one of the testing contractors. The Checklists were to be administered for the first time in March 2006. Unlike the standard CMT and CAPT, which have five achievement levels (below basic, basic, proficient, goal, and advanced), the Skills Checklist has only three achievement levels (basic, proficient, independent). Thus, it was necessary to establish performance standards (cut scores) for the achievement levels on the Checklist prior to their operational use. At its January meeting, the Technical Advisory Committee (see Appendix C) approved the standard setting plan proposed by Measurement Incorporated (MI), the contractor for the CMT and the second generation Skills Checklist. Twenty-two general and special education teachers, curriculum coordinators, and school administrators participated in the three-day standard setting activity on January 25–27, 2006, along with staff from the department and two consultants to the project. A five-member team of psychometricians and staff from Measurement Incorporated conducted the standard setting activities.

In the absence of student data or completed checklists, the standards could be set using one of two approaches. A method combining elements of both was employed to take advantage of the more positive aspects of each; therefore, a two-round process was used. In Round 1, panelists studied the checklists and the Performance Level Descriptors (PLDs) (see Appendix I) and then constructed sample profiles that matched the borderline points for Levels 1-2 (Basic-Proficient) and 2-3 (Proficient-Independent). In Round 2, panelists reviewed one another's Round 1 sample profiles and classified them into one of the three levels.

For Round 1, the panelists were divided into four groups: Grades 3-4, Grades 5-6, Grades 7-8-High School Language Arts, and Grades 7-8-High School Mathematics. After orientation and some practice, these four groups worked in two-person teams to create sample student profiles that illustrated the performances of students just below or just above an imaginary cut score for Proficient. They then repeated the exercise for students just below or just above an imaginary cut score for Independent. By the end of Round 1, panelists had generated enough hypothetical student profiles to form small distributions around each cut score.

During Round 2, panelists reviewed and discussed the profiles created during Round 1. Several holistic rating methods were considered, including the judgmental policy capture (JPC) method (Jaeger, 1995) and the dominant profile judgment (DPJ) method (Plake, Hambleton, & Jaeger, 1997). A generalized holistic method (cf. Cizek, Bunch, & Koons, 2004) seemed to satisfy the requirements of the present situation. The method makes no special assumptions about the model (it can be either compensatory or conjunctive, although Measurement Incorporated had used it exclusively in a compensatory model for other standard-setting activities) or about data analytic techniques (MI used simple means or medians and midpoints, as in the contrasting-groups method, as described in Cizek, Bunch, & Koons, 2004). The method treats the rating of profiles exactly as the JPC or DPJ methods would but provides a more straightforward data analytic procedure than either. Therefore, the generalized holistic method was used.

Measurement Incorporated created a form to be used with the panelists. Each form included all the profiles generated during Round 1. To complete the form, the panelists considered each profile alongside the Performance Level Descriptor for each level. After studying the Performance Level Descriptor and the profile, the panelists made an entry in the final column of the form. After Round 2, MI facilitators tallied the ratings provided by the panelists and reported the results. In the discussion about follow-up
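The report does not reproduce MI's computations, but the contrasting-groups-style step it describes (simple means or medians and midpoints of the rated profiles) can be sketched as follows. The profile scores, level labels, and function name here are hypothetical illustrations, not MI's actual data or software.

```python
from statistics import mean, median

def cut_scores(profiles, levels=("Basic", "Proficient", "Independent"), center=mean):
    """Contrasting-groups-style cut scores: for each adjacent pair of
    achievement levels, take the midpoint between the central (mean or
    median) total score of the profiles classified into each level."""
    by_level = {lvl: [score for score, rating in profiles if rating == lvl]
                for lvl in levels}
    cuts = {}
    for lower, upper in zip(levels, levels[1:]):
        lo, hi = center(by_level[lower]), center(by_level[upper])
        cuts[f"{lower}/{upper}"] = (lo + hi) / 2  # midpoint between group centers
    return cuts

# Hypothetical Round 2 tallies: (total checklist score, panelist rating)
ratings = [
    (12, "Basic"), (15, "Basic"), (18, "Basic"),
    (22, "Proficient"), (25, "Proficient"), (27, "Proficient"),
    (33, "Independent"), (36, "Independent"), (40, "Independent"),
]
print(cut_scores(ratings))                  # midpoints of group means
print(cut_scores(ratings, center=median))   # midpoints of group medians
```

Passing `center=median` rather than `center=mean` mirrors the report's note that either simple means or medians could feed the midpoint calculation.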