2003 IMTA Proceedings - International Military Testing Association


• Individual Pace & Intensity

The Army-wide FX scales will incorporate descriptions of the anticipated future conditions listed above. Raters will read each description and rate how effectively they think the Soldier would perform under that condition; they will also rate how confident they are in those ratings. The cluster-specific scales will be based on scenarios much like those generated for the SJT-X in the NCO21 project (Knapp et al., 2002). These scenarios are more detailed and are specific to the job cluster. Again, raters will indicate how well they think the Soldier would perform in each scenario. We anticipate developing 5-6 scenarios for the Close Combat cluster, where the anticipated future conditions are much the same for the three MOS, and 10-11 scenarios for the Surveillance, Intelligence, and Communication cluster, where there is less overlap among the MOS. Some of these scenarios may apply to only one or two MOS in the cluster. This approach will not address independent themes or constructs as cleanly, but it will sample the relevant content for each MOS well.

We will collect separate ratings for the Army-wide and cluster-specific FX scales and conduct statistical analyses to determine whether the ratings show meaningful dimensionality. However, because (a) each scenario is likely to involve multiple “dimensions” of performance and (b) a single dimension of performance is likely to be relevant to more than one scenario, we believe that distinct dimensions are unlikely to emerge. Therefore, for both the Army-wide and cluster FX ratings, we plan to aggregate the ratings into an overall Army-wide rating and an overall cluster rating, respectively.
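The dimensionality check and aggregation described above might be sketched as follows. The ratings matrix, scale range, and the use of eigenvalues of the inter-scenario correlation matrix are illustrative assumptions, not the project's actual procedure or data:

```python
import numpy as np

# Hypothetical ratings matrix: rows = Soldiers, columns = FX scenarios,
# values on a 1-7 effectiveness scale (illustrative data only).
ratings = np.array([
    [5, 6, 5, 4, 6],
    [3, 4, 3, 3, 4],
    [6, 6, 7, 5, 6],
    [4, 3, 4, 4, 3],
], dtype=float)

# Rough dimensionality check: eigenvalues of the inter-scenario
# correlation matrix. A single dominant eigenvalue suggests the
# scenario ratings behave as one overall dimension.
corr = np.corrcoef(ratings, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("Variance share of first component:", eigvals[0] / eigvals.sum())

# If the ratings are effectively unidimensional, aggregate each
# Soldier's scenario ratings into a single overall FX score.
overall = ratings.mean(axis=1)
print("Overall FX scores:", overall)
```

In practice a more thorough analysis (e.g., parallel analysis or confirmatory models) would be used, but the same logic applies: if no distinct dimensions emerge, a simple mean across scenarios is a defensible overall score.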

Rater Training/Process

Effective rater training is key to getting raters to use the rating scales as intended. We have considerable experience in developing rater training that has focused on evaluation errors (e.g., first impression, stereotyping) and response tendency errors (e.g., halo effect, central tendency). This experience has shown that reducing or eliminating rating error is quite difficult.

Our goal with Select21 rater training is to focus raters more clearly on reading and using the scales accurately. For all raters, training will emphasize the importance of making accurate ratings and of thinking about a Soldier’s relative strengths and weaknesses. To this end, we will stress the importance of accurate performance measures to the overall success of the project. In past work we have found that stressing that the ratings are “for research purposes only” helps to overcome problems, such as leniency, that are common in operational performance ratings. We will focus on the importance of reading the anchors, thinking about a Soldier’s relative strengths and weaknesses, and applying that insight to the ratings. The ranking exercise described previously should help with this focus, as will the format of the rating scales. We will also address response tendency and evaluation errors directly. In addition, while raters are working, we will have facilitators move about the room to watch for raters who appear to be falling prey to these errors.
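Some of the response tendency errors named above leave visible statistical signatures that could, in principle, be screened for automatically. A minimal sketch of such a screen follows; the data, thresholds, and function name are hypothetical and purely illustrative:

```python
import numpy as np

# Hypothetical ratings: rows = raters, columns = dimensions rated for
# one Soldier, on a 1-7 scale (illustrative data only).
ratings = np.array([
    [4, 4, 4, 4, 4, 4],   # little spread: possible central tendency
    [7, 7, 6, 7, 7, 7],   # uniformly high: possible halo/leniency
    [5, 3, 6, 2, 4, 5],   # differentiates across dimensions
], dtype=float)

def flag_response_tendencies(r, low_spread=0.5, high_mean=6.0):
    """Flag raters whose ratings show little spread (central tendency
    or halo) or a very high average (leniency). Thresholds are
    illustrative, not validated cutoffs."""
    flags = []
    for i, row in enumerate(r):
        if row.std() < low_spread:
            flags.append((i, "low spread"))
        if row.mean() > high_mean:
            flags.append((i, "high mean"))
    return flags

print(flag_response_tendencies(ratings))
```

A screen like this would complement, not replace, the facilitators circulating in the room: it flags patterns worth a second look rather than proving that an error occurred.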

We expect to collect ratings from most of the raters in a face-to-face setting. We also expect that a fairly large proportion of raters will not be available during the data collection period. Identifying raters and collecting their ratings has been a challenge in past efforts such as this, and we expect to encounter the same situation in this project. It is

45th Annual Conference of the International Military Testing Association
Pensacola, Florida, 3-6 November 2003
