
, frequency and importance of supervisory tasks from a special administration of the Leader Requirements Survey, collection and content analysis of critical incidents, and interviews with MOS incumbents.

The resulting job domain included supervisory, common, and MOS-specific tasks and behaviors. Army policy designates certain tasks as being part of the job for corporals and sergeants; tasks at lower skill levels were included in the domain because of the Army's policy that soldiers are responsible for such tasks, and tasks at higher skill levels were included if there was evidence that soldiers in fact performed such tasks.

Instrument Development*

Information collected using the critical incident methodology was used to construct a series of rating scales for each MOS, as well as scales that were not specific to any one MOS but rather reflected Army-wide behaviors. These scales were used to measure behaviors on all three components of the job domain -- supervisory, common, and MOS-specific -- by means of ratings collected from soldiers' supervisors. The 7-point rating scales were behaviorally anchored; that is, short descriptions of behaviors that characterize the low, middle, and high points of each scale were provided. Army-wide supervisory behaviors (e.g., Monitoring, Organizing Missions and Operations) were addressed by 12 of the scales, 9 scales were Army-wide and non-supervisory (or common; e.g., Following Regulations and Orders, Physical Fitness), and for each MOS there were between 7 and 14 MOS-specific dimensions.
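To make the format concrete, the following minimal sketch shows how one such 7-point behaviorally anchored scale could be represented. The dimension name is taken from the examples above, but the anchor wording is a hypothetical illustration, not the actual instrument content.

    from dataclasses import dataclass

    @dataclass
    class BehaviorallyAnchoredScale:
        """A 7-point rating scale with short behavioral anchors at the
        low, middle, and high scale points."""
        dimension: str
        anchors: dict[int, str]

        def rate(self, rating: int) -> int:
            # Supervisors assign a whole number on the 7-point scale.
            if not 1 <= rating <= 7:
                raise ValueError("rating must be between 1 and 7")
            return rating

    # Hypothetical anchors; the real anchors were derived from critical incidents.
    monitoring = BehaviorallyAnchoredScale(
        dimension="Monitoring",
        anchors={
            1: "Seldom checks subordinates' work or mission progress",
            4: "Checks progress at major milestones",
            7: "Tracks progress continuously and corrects problems early",
        },
    )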

For the task-based information, judgments were obtained from subject matter experts (SMEs) on several task parameters, including performance difficulty, performance variability, and criticality. The task list for each MOS was clustered into functional areas, and a second panel of SMEs selected proportional systematic samples from the task population. These task samples were subjected to formal reviews by the proponent.
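The paper does not spell out the sampling mechanics, but a proportional systematic sample of this kind can be sketched as follows: each functional area receives a share of the sample proportional to its share of the task population, and tasks within an area are then drawn at a fixed interval from a random start. The function below is an illustrative assumption, not the procedure the SME panels actually used.

    import random

    def proportional_systematic_sample(tasks_by_area, total_n):
        """Draw a systematic sample within each functional area, sized
        in proportion to that area's share of the task population."""
        population = sum(len(tasks) for tasks in tasks_by_area.values())
        sample = []
        for area, tasks in tasks_by_area.items():
            n = round(total_n * len(tasks) / population)
            if n == 0:
                continue
            interval = len(tasks) / n
            start = random.uniform(0, interval)
            for i in range(n):
                # Step through the area's task list at a fixed interval.
                idx = min(int(start + i * interval), len(tasks) - 1)
                sample.append(tasks[idx])
        return sample

For example, with 40 maintenance tasks and 20 communications tasks, a sample of 15 would yield roughly 10 and 5 tasks, respectively.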

At this point, the task-based instrument development process diverged into four separate approaches: job knowledge (written) tests, hands-on job sample tests, role-play simulations, and written situational judgment tests.

Multiple-choice job knowledge test items were constructed for all of the MOS-specific and common tasks selected for each MOS. These tests are characterized by their orientation toward task performance and by the extensive use of graphics and job-relevant contextual information. For each MOS, a one-hour test of both common and MOS-specific tasks was prepared, comprising approximately 120 items. Two scores were constructed, for common tasks and MOS-specific tasks.
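The two-score construction amounts to keying each item to one part of the job domain and counting correct responses separately. The sketch below uses invented item identifiers and answer keys purely for illustration.

    def score_job_knowledge_test(responses, key, item_domain):
        """Compute separate number-correct scores for the common-task
        and MOS-specific items of a multiple-choice test."""
        scores = {"common": 0, "mos_specific": 0}
        for item, answer in responses.items():
            if answer == key[item]:
                scores[item_domain[item]] += 1
        return scores

    # Hypothetical three-item test.
    key = {"q1": "B", "q2": "D", "q3": "A"}
    item_domain = {"q1": "common", "q2": "mos_specific", "q3": "common"}
    print(score_job_knowledge_test({"q1": "B", "q2": "C", "q3": "A"},
                                   key, item_domain))
    # -> {'common': 2, 'mos_specific': 0}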

*Details of instrument development are presented in J. P. Campbell (Ed.), Building the Career Force, First Year Report (in preparation). Rating scale development and Situational Judgment Test development were directed by W. C. Borman and M. Hanson of Personnel Decisions Research Institute, Inc. Role-play development was directed by E. D. Pulakos of Human Resources Research Organization and D. Whetzel of the American Institutes for Research. Development of hands-on and job knowledge tests was directed by C. H. Campbell and R. C. Campbell of Human Resources Research Organization, and D. C. Felker of the American Institutes for Research.

