
2003 IMTA Proceedings - International Military Testing Association



too much agreement among the Soldiers in training. Finally, we narrowed the options down to about seven per situation. Where possible, the set of options selected for a situation spanned a wide range of keyed effectiveness values (each computed as the mean of the NCO ratings).

DEVELOPMENT OF THE TRAIT SCORING

The theoretical rationale for using the SJT to measure both traits and judgment rests on the following model. When an examinee judges the effectiveness of an action, that judgment is determined both by the examinee's personality and by his/her knowledge, training, and experience relevant to the situation. The traditional SJT score taps the examinee's knowledge, training, and experience, whereas the trait scores tap part of the examinee's personality.

As mentioned above, SJTs are heterogeneous. Therefore, we decided to measure traits at the lowest level possible: the individual option. Nineteen individuals with graduate degrees in industrial-organizational psychology were recruited to rate the traitedness of each response option. Each response option was rated by five to seven psychologists. For each trait-option combination, participants rated the degree to which the action and trait were related; inverse relationships were given negative ratings. Each point on the rating scale represented a range of correlations. The mean rating (across psychologists) represented the traitedness of that option for that trait.
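As a minimal sketch of this aggregation step (the trait names and rating values here are illustrative, not taken from the study), the traitedness of one response option can be computed as:

```python
from statistics import mean

# Hypothetical ratings for ONE response option: each trait was rated by
# the five to seven psychologists assigned to that option. Each scale
# point stood for a range of correlations; negative values mark inverse
# relationships between the action and the trait.
ratings_by_trait = {
    "dependability": [2, 3, 2, 2, 3],
    "agreeableness": [-1, -2, -1, 0, -1],
}

# Traitedness of the option on each trait: the mean rating across raters.
traitedness = {trait: mean(rs) for trait, rs in ratings_by_trait.items()}
print(traitedness)  # e.g. {'dependability': 2.4, 'agreeableness': -1.0}
```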

PILOT TEST RESULTS: JUDGMENT SCORES

Eight draft test forms were given to 319 Soldiers in U.S. Army reception battalions. These Soldiers had just entered the Army but had not yet been assigned to training; they were therefore similar to applicants. Each Soldier completed one civilian SJT form (A–D) and one military SJT form (1–4). There were four pairings of forms: A-1, B-2, C-3, and D-4. Within each form-pair, the order was randomized: half of the Soldiers received the military form first; the other half received the civilian form first. Most items had seven response options. The civilian forms had 14–16 items; the military forms had 11–13 items. No attempt was made to place a military item and its parallel civilian item in the same form-pair.

The Soldiers responded by rating the effectiveness of each option on a 7-point scale (higher numbers representing greater effectiveness). The judgment score for an option was computed as shown in Equation 1 below: the absolute difference between the Soldier's rating and the keyed effectiveness value is subtracted from 6, so that higher values represent better scores. The judgment score for an entire test form was simply the mean of the option scores.

optionEffectivenessScore = 6 − |SoldierRating − keyedEffectiveness|    (1)
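A brief sketch of this scoring procedure (function names are mine, not the paper's), with the keyed effectiveness taken as the mean of the NCO ratings as described earlier:

```python
def keyed_effectiveness(nco_ratings):
    """Keyed effectiveness of an option: the mean of the NCO ratings."""
    return sum(nco_ratings) / len(nco_ratings)

def option_judgment_score(soldier_rating, keyed):
    """Equation 1: 6 minus the absolute deviation from the key.
    Ratings are on a 7-point scale, so the deviation is at most 6
    and higher scores represent better judgment."""
    return 6 - abs(soldier_rating - keyed)

def form_judgment_score(soldier_ratings, keys):
    """Judgment score for a whole form: the mean of the option scores."""
    scores = [option_judgment_score(r, k) for r, k in zip(soldier_ratings, keys)]
    return sum(scores) / len(scores)

# Example: an option keyed at 5.0, rated 3 by a Soldier -> score of 4.0
key = keyed_effectiveness([5, 5, 5])
print(option_judgment_score(3, key))  # -> 4.0
```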

The reliability of the judgment scores was estimated via coefficient alpha. Table 1 shows these values for each of the eight forms; the reliability estimates are around .90. Table 1 also shows that the judgment score measures essentially the same thing on the civilian and military forms: the correlations between forms are almost as high as the reliability estimates. The correlation rc estimates the correlation between the constructs measured by the two forms.
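As a sketch of the psychometrics involved (the paper gives no formulas; treating rc as the standard correction for attenuation is my assumption, and the function names are illustrative):

```python
from math import sqrt
from statistics import variance

def coefficient_alpha(item_scores):
    """Cronbach's coefficient alpha.
    item_scores[i][j] = score of examinee i on item j."""
    k = len(item_scores[0])
    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def corrected_correlation(r_observed, alpha_a, alpha_b):
    """Correction for attenuation (assumed meaning of rc): estimated
    correlation between the constructs underlying two forms, given the
    observed between-form correlation and the forms' reliabilities."""
    return r_observed / sqrt(alpha_a * alpha_b)

# e.g. an observed correlation of .81 between two forms with alpha = .90
# each corresponds to an estimated construct correlation of .90
print(corrected_correlation(0.81, 0.90, 0.90))  # -> 0.9
```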

45th Annual Conference of the International Military Testing Association
Pensacola, Florida, 3–6 November 2003
