
Proficiency is often underestimated (Collier, 1988; Cummins, 1989; Valdés, 1998), and because students may appear to be more proficient with English than they are, they may be expected to take tests in English long before they are fully proficient with the kind of academic language needed to perform well. All of these factors should be considered in designing appropriate assessment and instruction.

Standardization itself rules out any contextualization of assessment, meaning that linguistic differences among students cannot be accounted for adequately. The practice of assessing English language learners with the same mechanisms as their English-only counterparts may seriously compromise the validity of results and lead to misleading interpretations and unfair decisions affecting their futures (August & Hakuta, 1997; García, 1991; LaCelle-Peterson & Rivera, 1994; Miramontes, Nadeau, & Commins, 1997; Valdés & Figueroa, 1994).

English-language assessment prompts that make extensive use of complex or idiomatic language penalize English language learners, who may access important concepts in their first language but not yet in English, or who may access them more slowly in English (Abedi, 2001; Abedi, Leon, & Mirocha, 2001; Figueroa & García, 1994; García, 1991; Heubert & Hauser, 1999). Misinterpretations of the directions or text of an assessment task can lead to flawed conceptualization of the problem to be solved and consequent failure to devise a correct solution (Durán, 1989). In such cases, teachers or others who grade or score an English language learner's performance may falsely underestimate that student's level of understanding or skill.

A move to standards-based performance assessments and other forms of "authentic" assessment (Valencia, Hiebert, & Afflerbach, 1994; Wiggins, 1993) does not ensure assessment validity for ELLs. These kinds of assessments are subject to the same sources of error, particularly given their increased language demand in comparison to multiple-choice or short-answer tests (August & Hakuta, 1997; Shepard, 1993). Farr and Trumbull (1997) point out that "good instruction and assessment should look different in different environments, depending on the students served" (p. 2).

Assessments designed for native English speakers will simply not meet standards of validity for English language learners. According to the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999):

For all test takers, any test that employs language is, in part, a measure of their language skills. . . . Language dominance is not necessarily an indicator of language competence in taking a test, and some accommodation may be necessary even when administering the test in the more familiar language. Therefore, it is important to consider language background in developing, selecting, and administering tests and in interpreting test performance. (p. 91)

Specific research studies on the validity of performance-based assessments for ELLs have focused primarily on accommodations (e.g., increasing the time allotted for completion, allowing the use of dictionaries, or modifying the language of test prompts) rather than on major changes in the instruments themselves (Abedi, 1997; Abedi, Lord, Hofstetter, & Baker, 2000; Kiplinger, Haug, & Abedi, 2000; Kopriva, 1997; Olson & Goldstein, 1997; Sweet, 1997). Some have made efforts to make the scoring of ELLs' responses fairer (Wong Fillmore & Lara, 1996). However, little research literature exists on performance assessments for English language learners.

© 2008 Dr. Catherine Collier. All Rights Reserved.
