Adult Literacy in America - National Center for Education Statistics ...


performance of a sample of examinees can be summarized on a series of subscales even when different respondents have been administered different items. Conventional scoring methods are not suited for assessments like the national survey. Statistics based on the number of correct responses, such as proportion of correct responses, are inappropriate for examinees who receive different sets of items. Moreover, item-by-item reporting ignores similarities of subgroup comparisons that are common across items. Finally, using average percent correct to estimate means of proficiencies of examinees within subpopulations does not provide any other information about the distribution of skills among the examinees.
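The incomparability of percent-correct scores across different item sets can be seen in a small numeric sketch. All item sets, difficulty values, and response patterns below are invented for illustration; they are not drawn from the survey:

```python
# Hypothetical illustration: two examinees with the same percent correct
# on different item sets are not comparable, because the sets differ in
# difficulty. (All numbers here are invented for illustration.)

# Each examinee saw a different block of 5 items; "difficulty" is a
# made-up value on an arbitrary scale (higher = harder).
easy_block = {"difficulty": [-1.5, -1.0, -0.8, -0.5, -0.2],
              "correct":    [1, 1, 1, 1, 0]}
hard_block = {"difficulty": [0.4, 0.8, 1.1, 1.5, 2.0],
              "correct":    [1, 1, 1, 1, 0]}

def percent_correct(block):
    """Conventional score: share of administered items answered correctly."""
    return 100.0 * sum(block["correct"]) / len(block["correct"])

# Both examinees score 80 percent correct ...
assert percent_correct(easy_block) == percent_correct(hard_block) == 80.0

# ... yet one answered easy items and the other hard items, so the two
# identical figures do not reflect the same level of skill.
```

This is the situation IRT scaling, described next, is designed to resolve: it places examinees and items on one common scale instead of comparing raw percentages.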

The limitations of conventional scoring methods can be overcome by the use of item response theory (IRT) scaling. When several items require similar skills, the response patterns should have some uniformity. Such uniformity can be used to characterize both examinees and items in terms of a common scale attached to the skills, even when all examinees do not take identical sets of items. Comparisons of items and examinees can then be made in reference to a scale, rather than to percent correct. IRT scaling also allows distributions of groups of examinees to be compared.

Scaling was carried out separately for each of the three domains of literacy (prose, document, and quantitative). The NAEP reading scale, used in the young adult survey, was dropped because of its lack of relevance to the current NAEP reading scale. The scaling model used for the national survey is the three-parameter logistic (3PL) model from item response theory.2 It is a mathematical model for estimating the probability that a particular person will respond correctly to a particular item from a single domain of items. This probability is given as a function of a parameter characterizing the proficiency of that person, and three parameters characterizing the properties of that item.
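A minimal sketch of the 3PL response function may make this concrete. The conventional parameterization is P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b))); the parameter values and the D = 1.7 scaling constant below are standard textbook conventions, not figures taken from this passage:

```python
import math

def p_correct(theta, a, b, c, D=1.7):
    """Three-parameter logistic (3PL) probability of a correct response.

    theta : examinee proficiency on the common scale
    a     : item discrimination (how sharply probability rises with theta)
    b     : item difficulty (the theta at which the curve rises fastest)
    c     : lower asymptote, the "guessing" floor for very low theta
    D     : scaling constant; 1.7 makes the logistic approximate the
            normal ogive (a conventional choice, assumed here)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# A highly proficient examinee almost certainly answers an easier item
# correctly, while a very weak examinee's probability falls toward the
# guessing floor c rather than toward zero.
assert p_correct(theta=3.0, a=1.0, b=-1.0, c=0.2) > 0.95
assert abs(p_correct(theta=-4.0, a=1.0, b=1.0, c=0.2) - 0.2) < 0.01
```

Note that when theta equals the item difficulty b, the probability is exactly midway between the guessing floor c and 1, which is why b is read as the item's difficulty on the proficiency scale.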

Overview of Linking the National Adult Literacy Survey (NALS) Scales to the Young Adult Literacy Survey (YALS) Scales

Prose, document, and quantitative literacy results for the National Adult Literacy Survey are reported on scales that were established in the Young Adult Literacy Survey. For each scale, a number of new items unique to the national survey were added to the item pool that was administered in the original young adult survey. The NALS scales are linked to the YALS scales based upon the commonality of the two assessments, namely, the original young adult survey
2 A. Birnbaum. (1968). "Some Latent Trait Models." In F.M. Lord and M.R. Novick, Statistical Theories of Mental Test Scores. Reading, MA: Addison-Wesley. F.M. Lord. (1980). Applications of Item Response Theory to Practical Testing Problems. Hillsdale, NJ: Erlbaum.

Appendix A
