
International Military Testing Association


MILITARY TESTING ASSOCIATION
Annual Conference

FORECASTING TRAINING EFFECTIVENESS (FORTE)

Mark G. Pfeiffer and Richard M. Evans

Naval Training Systems Center and Training Performance Data Center
Orlando, FL

A model was developed to simulate a variety of aviation training device evaluation outcomes. This simulation model is designed to explore sources of error threatening the sensitivity of device evaluations. Selection of evaluation designs is guided by a model that elicits information from experienced flight instructors. This practical knowledge is transformed into data that are used in simulating a training effectiveness evaluation. Effects of variables such as instructor leniency, task difficulty, and student ability are estimated by two different methods. Available in the output is an estimate of transfer ratios based on trials-to-mastery, a diagnosis of deficiencies, an exploration of possible sources of variance, and an estimate of statistical power and required sample size. Finally, all data analyses can be accomplished in less than 2 man-days and prior to the actual field experiment. Estimates of the accuracy, reliability, and validity of the model are high and within an acceptable range.
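The transfer-ratio estimate mentioned in the abstract can be illustrated with a short sketch. The two formulas below are the common trials-to-mastery transfer measures from the transfer-of-training literature; the paper does not state which variant FORTE reports, and all numeric inputs here are hypothetical.

```python
# Illustrative transfer measures computed from trials-to-mastery data.
# These are the standard textbook formulas, assumed here for illustration;
# the paper does not specify FORTE's exact variant.

def percent_transfer(control_trials, experimental_trials):
    """Percent transfer: trial savings for device-trained students,
    relative to the control group's trials to mastery."""
    return 100.0 * (control_trials - experimental_trials) / control_trials

def transfer_effectiveness_ratio(control_trials, experimental_trials,
                                 device_trials):
    """TER: aircraft trials saved per trial spent in the training device."""
    return (control_trials - experimental_trials) / device_trials

# Hypothetical group means (trials to mastery):
print(percent_transfer(12.0, 9.0))                    # -> 25.0
print(transfer_effectiveness_ratio(12.0, 9.0, 6.0))   # -> 0.5
```

In this hypothetical case, device-trained students master the task in 9 trials instead of 12, a 25% savings, and each of the 6 device trials buys back half an aircraft trial.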

Background

Major sources of error variance that can mask the true contribution of a training device to training effectiveness include instructor leniency, student ability, and task difficulty (McDaniel, Scott & Browning, 1983). First, instructors' grades are often unreliable criterion measures. Next, individual abilities among students vary widely. Finally, tasks vary greatly in difficulty level. Some tasks can be mastered by students in one or two trials, while others may require 30 trials. These sources of variance make ratings of students' performance insensitive measures of training device effectiveness. However, their magnitude can be identified with sensitivity analysis prior to actual field experiments.
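To make concrete how these three variance sources can swamp a device effect, the following Monte Carlo sketch mixes task difficulty, student ability, and instructor leniency into simulated trials-to-mastery scores. All parameters and distributions are hypothetical illustrations, not the FORTE model itself.

```python
import random

random.seed(1)

def simulate_trials(n_students, device_benefit=0.0,
                    task_difficulty=10.0, ability_sd=3.0, leniency_sd=2.0):
    """Simulated trials-to-mastery for one group (hypothetical parameters).

    Each student's score mixes the task's true difficulty, individual
    ability, instructor leniency noise, and any device benefit."""
    scores = []
    for _ in range(n_students):
        ability = random.gauss(0.0, ability_sd)    # wide student variation
        leniency = random.gauss(0.0, leniency_sd)  # unreliable grading
        trials = task_difficulty - device_benefit + ability + leniency
        scores.append(max(1.0, trials))            # at least one trial
    return scores

control = simulate_trials(20)
experimental = simulate_trials(20, device_benefit=2.0)  # true 2-trial gain
mean = lambda xs: sum(xs) / len(xs)
print(round(mean(control) - mean(experimental), 2))
```

With only 20 students per group, the observed difference drifts well away from the true 2-trial benefit from run to run, which is exactly the insensitivity problem the paper describes.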

Sensitivity Analysis<br />

Sensitivity analysis is a planning technique (Lipsey, 1983) which focuses on the impact of variance on variables of interest. The device evaluation must be carefully planned if the results are to have practical value and show a true difference between experimental and control groups. During the planning phase for device evaluations an investment in time may help identify the problems that introduce unwanted error variance into the device evaluation. Performance data generated by flight instructors can be used for this purpose.
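One concrete planning-phase computation is the power/sample-size estimate the abstract mentions. The sketch below uses the standard normal-approximation formula for a two-sample, two-sided comparison; it is an assumption about the kind of estimate such a planning step would produce, not FORTE's documented method.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample, two-sided test.

    Normal-approximation planning formula: n = 2 * ((z_a + z_b) / d)^2,
    where d is the standardized effect size (Cohen's d). A rough
    planning estimate, assumed here for illustration."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided criterion
    z_beta = NormalDist().inv_cdf(power)            # desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A small device effect buried in rating noise demands far more students:
print(sample_size_per_group(0.8))   # large effect -> 25 per group
print(sample_size_per_group(0.3))   # small effect -> 175 per group
```

The contrast between the two calls shows why quantifying leniency, ability, and difficulty variance before the field experiment matters: the noisier the criterion, the smaller the standardized effect, and the sample size needed to detect it grows quadratically.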

The basic framework of the present "sensitivity" analysis differs from that described by Lipsey (1983) in that it employs the "insensitive" instructor's rating of students as a performance measure. Lipsey would rather seek a more sensitive measure. While this rating measure may not be a particularly good psychometric measure, it is dictated by operational constraints. Instructors' ratings are used extensively in the transfer of training literature.

"Approved for public release; distribution is unlimited."

