
EVALUATING TRAINING PROGRAM MODIFICATIONS

Deborah Lawson McCormick and Paul L. Jones

Naval Technical Training Command

Evaluating changes in training programs is never a simple task, even under laboratory conditions where threats to validity can be controlled. In operational settings, evaluation may appear to be an insurmountable problem -- one in which good evaluation methodology does not seem feasible. One major problem for evaluators in operational settings is that they are often not consulted until after training modifications have already been initiated. As a result, both experimental control and opportunities for data collection are severely limited.

Even in those rare cases where evaluators are a part of the implementation from its onset, problems exist. For example, an evaluation design which uses equivalent control and experimental groups is often not possible in on-going training programs. In addition, operational settings are inherently dynamic environments; consequently, the effects of deliberate program changes are confounded with the effects of other random factors which constantly impact the program. In these cases, isolating effects directly and unquestionably attributable to factors of the program change is impossible.

This difficulty in establishing definite cause and effect relationships is sometimes used as a reason to forego evaluation. Rather than attempting a seemingly futile task, the tendency is to rely on intuition. The argument goes something like this: "These changes make sense, the students like them, the instructors like them . . . they probably work."

However, increased competition for funding dollars makes the need to verify training improvement and justify additional funds crucial. Increasingly, funding sources are requiring hard data in support of dollars spent. As evaluators, we are being forced to accept that a less than perfect evaluation (that is, one which only suggests, rather than "proves," cause and effect) is better than no evaluation at all.

This paper describes an evaluation model which we feel is flexible enough to prove useful in most evaluation circumstances, from the ideal condition, where evaluation has been planned in conjunction with change implementation, to those evaluation nightmares, where change implementation is complete before the evaluator is consulted. Following a brief description of the model, an application of its use is discussed.

