
Technical Report - International Military Testing Association


and performed the duties of the Primary MOS. The mean co-worker rating given by 3 EM for each of the men in the particular validation sample serves as the criterion. Since a summary of buddy ratings in military research by Hollander (1954) emphasized the relevant values of peer ratings, this information was instrumental in the USAEEC research decision to apply the job performance criterion.
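As a minimal sketch of the criterion computation described above (the report specifies only that each man's criterion is the mean of the co-worker ratings he receives; the data and function name here are hypothetical):

```python
# Sketch: form a job-performance criterion as the mean co-worker
# (buddy) rating each ratee receives. Data are hypothetical.

def mean_peer_rating(ratings_by_ratee):
    """Average the peer ratings given to each man; the mean
    serves as his criterion score."""
    return {ratee: sum(rs) / len(rs)
            for ratee, rs in ratings_by_ratee.items()}

# Three raters per ratee, on an eleven-point (0-10) scale.
sample = {
    "A": [7, 8, 9],
    "B": [4, 5, 6],
}
criterion = mean_peer_rating(sample)
print(criterion)  # {'A': 8.0, 'B': 5.0}
```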

An element of anxiety may surround the peer rating method in some quarters. It has been found wanting because of ineffective application in most instances. For example, young recruits could not generally be expected to rate leadership in the military setting because they do not have adequate knowledge of the role and no experience in making such a judgment. The fault lies not in the rating process but in the ineffective preparation for its use. The most apt comparison may be that of trying to shoot a bull's eye with a defective weapon. A stand was made in our program to develop a rating form which would minimize rating format influences and insufficient knowledge of co-workers about each other. The result is that peer ratings of a relatively structured style are postulated with a sense of confidence toward obtaining a realistic estimate of job performance on an eleven-point scale.

The administration of ratings was prefaced by special instructions to induce the raters' acceptance of the task in a more informed and responsible way. A research psychologist conducted the rating session, while the test control officer (TCO) at each installation was requested to schedule all of the available raters who were qualified to rate EM in the specified MOS. Groups of about 20 to 40 men were assigned to meet in suitable places for the rating sessions, which were usually completed in about 20 minutes.

Three phases of analysis compose the substance of the test validation procedure: (1) analysis of the total evaluation test; (2) analysis of the valid portion of the evaluation test; and (3) providing recommended numbers of items by evaluation test outline. These phases are organized to assist in securing the desired evaluation test specifications through test revision.

Initially, in the first phase the relationships between item statistics and test statistics are thoroughly delineated. Results of analysis in tabular form show the total test, technical test, and Broad Subject-Matter Areas (BSMA's) by number of items with the respective means, standard deviations, KR-20 reliability coefficients, validity coefficients, beta weights, the multiple R, and corrected R after shrinkage.
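Two of the statistics named for the first table can be sketched directly from their standard formulas: KR-20 reliability for dichotomously scored items, and a shrinkage correction for the multiple R. The formulas below are the conventional Kuder-Richardson 20 and Wherry adjustment; the report does not state which shrinkage formula USAEEC used, and all data here are hypothetical.

```python
# Sketch of two statistics from the table: KR-20 reliability and
# a shrinkage-corrected multiple R (Wherry adjustment). Hypothetical data.

def kr20(item_matrix):
    """Kuder-Richardson formula 20.
    item_matrix: one row per examinee, each row a list of 0/1 item scores."""
    n = len(item_matrix)           # number of examinees
    k = len(item_matrix[0])        # number of items
    totals = [sum(row) for row in item_matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n       # item p-value
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

def shrunken_r(r, n, k):
    """Multiple R corrected for shrinkage (Wherry formula),
    with n cases and k predictors."""
    r2_adj = 1 - (1 - r ** 2) * (n - 1) / (n - k - 1)
    return max(r2_adj, 0.0) ** 0.5

# Small worked example: 3 examinees, 2 items.
print(round(kr20([[1, 1], [1, 0], [0, 0]]), 4))   # 0.6667
print(round(shrunken_r(0.6, 100, 5), 4))          # below the uncorrected 0.6
```

The correction always pulls R toward zero, reflecting the capitalization on chance inherent in fitting beta weights to one sample.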

Then follow in another table the correlation coefficients between each of the BSMA's, the BSMA's and the criterion, and the total evaluation test and criterion. In a third summary table the results reported for items include item p-values for the total MOS population, p-values computed

