Chapter 2: Performance at International Benchmarks

…particular benchmark, and, for multipoint items, the analysis differentiated between partial-credit and full-credit responses.

There were 126 items in the assessment, about half (64) assessing “literary experience” and half (62) assessing “acquire and use information”. Please see Appendix A for the distribution of items by reading purpose and process category.

About half the PIRLS 2006 items required students to construct their own answers to the questions (with no help from those administering the assessment). The constructed-response questions took three different forms (an illustrative sketch of the credit structure follows the list):

▶ For 1-point items, responses were scored as acceptable if they included all elements required by the questions and were determined to be accurate based on ideas and information in the text.

▶ For 2-point items, responses that were given full credit demonstrated complete comprehension by providing appropriate inferences and interpretations consistent with the text and adequate textually-based support if required. Responses were given partial credit (1 point) if they included only some of the information or demonstrated only a literal understanding when an inference or interpretation was required.

▶ For 3-point items, responses were given full credit if they demonstrated extensive comprehension by presenting relatively complex, abstract ideas or by providing substantial textual support for inferences and interpretations. Responses were considered satisfactory and given 2 points if they contained all the required elements but did not provide complex or abstract ideas, were more literal than interpretive, or were weak in textually-based support. Minimal responses (1 point) contained some but not all of the required elements.
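As a purely illustrative aside, the credit structure described in the list above could be encoded roughly as follows. This sketch is not part of the PIRLS materials; the function and variable names are hypothetical.

```python
# Illustrative encoding of the credit levels described above; the point
# values follow the text, but the names and structure are hypothetical.

MAX_POINTS = {"one_point": 1, "two_point": 2, "three_point": 3}

def is_full_credit(item_type: str, score: int) -> bool:
    """Return True if the awarded score equals the item's maximum."""
    return score == MAX_POINTS[item_type]

# A 2-point item scored 1 reflects partial credit, e.g. a literal answer
# where an inference or interpretation was required.
print(is_full_credit("two_point", 1))    # False -> partial credit
print(is_full_credit("three_point", 3))  # True  -> full credit
```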

For students to demonstrate achievement in the reading comprehension process being assessed by multipoint items, usually the response needed to receive full credit. That is, a more literal response to an item requiring interpretation, integration, or evaluation of ideas in the text did provide text-

To ensure reliable scoring, PIRLS developed scoring guides for each constructed-response item and conducted training in how to apply the guides. To monitor reliability within countries, across countries, and between the 2001 and 2006 assessments, systematic subsamples of students’ responses were scored independently by more than one reader (see Appendix A).
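As a hedged illustration of what such reliability monitoring can look like in practice, one common summary is the percentage of responses on which two independent readers assign exactly the same score. The sketch below shows that calculation in generic form; it is not the actual PIRLS procedure or data, and all names and scores are hypothetical.

```python
# Generic sketch of percent exact agreement between two independent readers
# on a double-scored subsample of constructed responses (illustrative only).

def percent_exact_agreement(reader_a: list[int], reader_b: list[int]) -> float:
    """Share of responses to which both readers assigned identical scores."""
    assert len(reader_a) == len(reader_b), "readers must score the same responses"
    matches = sum(a == b for a, b in zip(reader_a, reader_b))
    return 100.0 * matches / len(reader_a)

# Example with made-up scores for five double-scored responses:
print(percent_exact_agreement([2, 1, 0, 2, 1], [2, 1, 1, 2, 1]))  # 80.0
```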
