
TEST DESIGN AND MINIMUM CUTOFF SCORES

Sandra Ann Rudolph, Training Appraisal
Chief of Naval Technical Training

INTRODUCTION

It has become increasingly obvious in the last few years that the United States government cannot continue to operate with little concern for who will pay the bill. The apparent message is to do better with less. This means we must become more efficient in the way we conduct business. For many of us, that business is training. Being efficient means we must use our resources wisely for the purpose intended. In training our resources are numerous (training devices, curriculum, instructors) while our purpose is singular: provide the training necessary for graduates to perform in the fleet. While performance is the key, there is background knowledge the trainee must grasp before that performance is possible.

BACKGROUND

In the training environment of yesterday, where money was no object, training was easier. There was little concern for statistical evaluation, effectiveness, or efficiency. We trained by the seat of our pants; experience wasn't the best teacher, it was the ONLY teacher. Today, lack of attention in these areas could mean loss of training dollars. One of the big areas of concern deals with attrition, the dropping of trainees from a designated training program. While there are many causes for attrition, recent attrition analysis visits to such schools as Air Traffic Control School, Music School, and Boiler Technician/Machinist Mate School indicate that testing programs may be at the very core of many of our problems. The following questions were used to determine how knowledge testing was being used to measure success:

(1) Have critical course objectives been identified, with corresponding emphasis on testing?

(2) Have the knowledge tests been designed to measure the objectives to the learning level required?

(3) How was the minimum cutoff score for the knowledge tests determined? (One widely used method is sketched after this list.)

(4) Have the test design and cutoff score been validated?

(5) Have alternate versions of the tests been developed that are consistent with the valid test design?
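The paper does not say which method the schools should have used to set a minimum cutoff score. One widely used criterion-referenced technique is the Angoff method, in which subject-matter judges estimate, for each item, the probability that a minimally competent trainee would answer it correctly; the cutoff is the sum of those estimates averaged across judges. The following is a minimal sketch, with hypothetical judge ratings:

    # Angoff-method cutoff sketch. The ratings below are hypothetical.
    # Each row: one judge's estimates of the probability that a
    # minimally competent trainee answers each item correctly.
    judge_ratings = [
        [0.90, 0.70, 0.55, 0.80, 0.65],  # judge 1
        [0.85, 0.75, 0.50, 0.70, 0.60],  # judge 2
        [0.95, 0.65, 0.60, 0.75, 0.70],  # judge 3
    ]

    num_items = len(judge_ratings[0])

    # Average the judges' estimates item by item, then sum across items:
    # the result is the expected raw score of a minimally competent
    # trainee, which serves as the minimum cutoff score.
    item_means = [
        sum(judge[i] for judge in judge_ratings) / len(judge_ratings)
        for i in range(num_items)
    ]
    cutoff_raw = sum(item_means)
    cutoff_pct = 100.0 * cutoff_raw / num_items

    print(f"Cutoff: {cutoff_raw:.1f} of {num_items} items ({cutoff_pct:.0f}%)")

With these ratings the cutoff works out to 3.6 of 5 items, or 71 percent; the point is that the score is derived from judgments about the objectives rather than picked by habit.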

It became apparent that testing was a problem. It was discovered that the emphasis and training had been placed on individual test-item development and test-item analysis, not on test development and test analysis. In other words, there was no assurance that the objectives were being tested, nor any evidence of how the cutoff score was determined.
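The item-level statistics referred to here are classically a difficulty index (the proportion of trainees answering an item correctly) and a discrimination index (how well the item separates high scorers from low scorers); the paper does not specify which statistics the schools computed. A minimal sketch of both, on hypothetical response data:

    # Classical item-analysis sketch. The response data are hypothetical.
    # responses[t][i] = 1 if trainee t answered item i correctly, else 0.
    responses = [
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 0, 1],
        [1, 0, 1, 1, 1],
    ]

    num_items = len(responses[0])
    totals = [sum(row) for row in responses]

    # Difficulty index: proportion of trainees answering correctly.
    difficulty = [
        sum(row[i] for row in responses) / len(responses)
        for i in range(num_items)
    ]

    # Discrimination index (upper-lower method): difficulty among the
    # top-scoring half minus difficulty among the bottom-scoring half.
    ranked = [row for _, row in sorted(zip(totals, responses), reverse=True)]
    half = len(ranked) // 2
    upper, lower = ranked[:half], ranked[-half:]
    discrimination = [
        sum(r[i] for r in upper) / half - sum(r[i] for r in lower) / half
        for i in range(num_items)
    ]

    for i in range(num_items):
        print(f"item {i + 1}: difficulty={difficulty[i]:.2f} "
              f"discrimination={discrimination[i]:+.2f}")

Statistics like these evaluate items one at a time; the author's complaint is that they say nothing about whether the test as a whole covers the objectives or where its cutoff should sit.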

To standardize the approach to test design, the following process was established:

