
Context Effects on Multiple-Choice Test Performance

Lawrence S. Buck*

Planning Research Corporation, System Services

Introduction

It has long been a tenet of test construction theory and practice that test items measuring the same content or behavioral objectives should be grouped within a test. For example, Tinkelman (1971) stated:

    If items measuring different content objectives or different behavioral objectives are included in the same test, consideration should be given to grouping the items by type. Usually the continuity of thought that such grouping allows on the part of the examinee is found to enhance the quality of his/her performance.

Other rationales for grouping similar items include such viewpoints as: test anxiety may be reduced by grouping items on a test, examinees will concentrate better if they do not jump from subject to subject, and examinees might glean information from certain questions in a set of questions that will facilitate the answering of other questions in the set (Gohmann & Spector, 1989).

A majority of the studies addressing item positioning have centered on the effects of ordering questions by difficulty level rather than by content (for a representative sample, see Hodson, 1984; Sax & Cromack, 1966; Leary & Dorans, 1985; and Plake, 1980). Numerous other studies, primarily in the educational arena, have addressed the effects of randomizing items in tests rather than presenting the items in the order that the information is covered in the classroom or in the textbook(s) (for a representative sample, see Gohmann & Spector, 1989; Taub & Bell, 1975; and Bresnock, Graves, & White, 1989).

The primary focus of this study is the effect on part and total test performance of randomizing the items on multiple-choice tests normally constructed with the items grouped by content areas or domains. A secondary objective was to evaluate the effects on the individual item statistics. The items in the tests in question are normally presented from easiest to most difficult within each domain.
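To make the two presentation orders concrete, the Python sketch below (not part of the original study materials) contrasts the standard form, grouped by domain with the easiest items first within each domain, against a fully randomized form of the same item pool. The Item fields, the difficulty metric, and the domain counts (borrowed from the BM-0110 row of Table I) are illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Item:
    domain: int        # content domain, 1-6 (hypothetical field)
    difficulty: float  # assumed difficulty index; higher = harder

# Hypothetical item pool mirroring the BM-0110 domain breakdown in Table I.
DOMAIN_COUNTS = {1: 18, 2: 30, 3: 14, 4: 12, 5: 30, 6: 16}
pool = [Item(domain=d, difficulty=random.random())
        for d, n in DOMAIN_COUNTS.items() for _ in range(n)]

def grouped_order(items):
    """Standard form: items grouped by domain, easiest first within each domain."""
    return sorted(items, key=lambda it: (it.domain, it.difficulty))

def randomized_order(items):
    """Experimental form: the same items in a single random sequence."""
    shuffled = items[:]
    random.shuffle(shuffled)
    return shuffled

standard_form = grouped_order(pool)
experimental_form = randomized_order(pool)
assert len(standard_form) == len(experimental_form) == 120
```

Both forms contain identical items; only the sequence differs, which is the manipulation whose effect on part and total test performance the study examines.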

Two tests were selected for this study, Rigging and Weight Testing (BM-0110) and Outside Electrical (EM-4613). These tests are part of a testing program which develops, administers, and maintains Journeyman Navy Enlisted Classification (JNEC) exams for the Navy's Intermediate Maintenance Activity (IMA) community. The tests are part of the qualification process for special classification codes. Both the BM-0110 and EM-4613 examinations consist of 120 four-choice, multiple-choice test questions spread across six domains, as indicated in Table I below.

Table I

Test Item-Domain Breakdown

                             Domains
Test       # of Items     1    2    3    4    5    6
BM-0110       120        18   30   14   12   30   16
EM-4613       120        10    6   14   55   22   13

*The author wishes to thank Norma Molina-laggard for her able assistance with the data analyses.

