October 2007 Volume 10 Number 4 - Educational Technology ...

The rest of this paper will first present a brief literature review on partial knowledge, testing methods, scoring methods, multiple choice, and CBTs. It will then describe the configuration of a computer-based assessment system, formulate the research hypotheses, and provide the research method, experimental design, and data collection in detail. The statistical analysis and hypothesis-testing results will be presented subsequently. Conclusions are drawn in the final section.

Related literature

This study first defines the related domain knowledge, discusses studies of conventional and partial scoring, compares scoring modes, investigates the principles of designing MC items, and finally summarizes the pros and cons of CBT systems.

Partial knowledge

Reducing the opportunity to guess and measuring partial knowledge improve the psychometric properties of a test. Methods for doing so can be classified by their ability to identify partial knowledge on a given test item (Alexander, Bartlett, Truell, & Ouwenga, 2001). Coombs et al. (1956) stated that the conventional scoring format of the MC examination cannot distinguish between partial knowledge and absence of knowledge. Ben-Simon, Budescu, and Nevo (1997) classify an examinee's knowledge of a given item as full knowledge (identifies all of the incorrect options), partial knowledge (identifies some of the incorrect options), partial misinformation (identifies the correct answer and some incorrect options), full misinformation (identifies only the correct answer), and absence of knowledge (either omits the item or identifies all options). Bush (2001) conducted a study in which examinees could select more than one answer to a question when uncertain of the correct one; negative marking was used to penalize incorrect selections. The aim is to explicitly reward examinees who possess partial knowledge over those who are simply guessing.
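The Ben-Simon et al. taxonomy can be encoded directly, since each category is a set relation between the options an examinee flags as incorrect and the keyed answer. The following Python sketch is an illustrative encoding, not part of the cited study; the function and argument names are hypothetical.

```python
def classify(marked, correct, options):
    """Classify an examinee's knowledge of one item (Ben-Simon,
    Budescu, & Nevo, 1997). `marked` is the set of options flagged
    as incorrect, `correct` the keyed answer, `options` the full set."""
    # Omitting the item or flagging every option reveals nothing.
    if not marked or marked == options:
        return "absence of knowledge"
    incorrect = options - {correct}
    if correct in marked:
        # The keyed answer was wrongly flagged as incorrect.
        return ("full misinformation" if marked == {correct}
                else "partial misinformation")
    # Only genuinely incorrect options were flagged.
    return ("full knowledge" if marked == incorrect
            else "partial knowledge")
```

For example, on a four-option item keyed "a", flagging {"b", "c", "d"} is classified as full knowledge, while flagging only {"b"} is partial knowledge.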

Number-scoring (NS) of multiple choice

Students choose only one response. The number of correctly answered questions is composed of the number of questions to which the student knows the answer and the number of questions to which the student correctly guesses the answer. According to the classification of Ben-Simon et al. (1997), NS can only distinguish between full knowledge and absence of knowledge. A student's score on an NS section with 25 MC questions and three points per correct response is in the range 0–75.
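As a minimal sketch, NS scoring is a simple sum over exact matches with no penalty for wrong or omitted answers. The function name and the three-points-per-item default below merely mirror the 25-question example; they are not from the cited literature.

```python
def ns_score(responses, key, points_per_correct=3):
    """Number-scoring: award points only for exact matches;
    wrong or omitted answers score zero, never negative."""
    return sum(points_per_correct
               for response, answer in zip(responses, key)
               if response == answer)
```

On a 25-item section scored this way, the total necessarily falls in the range 0–75 stated above.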

Elimination testing (ET) of multiple choice

Alternative schemes proposed for administering MC tests increase the complexity of responding and scoring, as well as the amount of available information about student understanding of the material (Coombs et al., 1956; Abu-Sayf, 1979; Alexander et al., 2001). Since partial knowledge is not captured in the conventional NS format of an MC examination, Coombs et al. (1956) describe a procedure that instructs students to mark as many incorrect options as they can identify. One point is awarded for each incorrect choice identified, but k points are deducted (where k equals the number of options minus one) if the correct option is marked as incorrect. Consequently, the score on a question with four options is in the range –3 to +3, and a student's score on an ET section with 25 such questions is in the range –75 to +75. Bradbard and Green (1986) classified ET scores as follows: completely correct (+3), partially correct (+2 or +1), no understanding (0), partially incorrect (–1 or –2), and completely incorrect (–3).
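The per-item ET rule described above can be sketched as follows. This is an illustrative reading of the Coombs et al. procedure, assuming the student's eliminations are given as a set; the function name is hypothetical.

```python
def et_score(eliminated, correct, n_options=4):
    """Elimination-testing score for one item: +1 for each incorrect
    option eliminated, minus (n_options - 1) if the correct option
    is eliminated (Coombs et al., 1956)."""
    k = n_options - 1  # penalty for eliminating the correct option
    score = sum(1 for option in eliminated if option != correct)
    if correct in eliminated:
        score -= k
    return score
```

On a four-option item keyed "a", eliminating {"b", "c", "d"} yields +3 (completely correct), eliminating nothing yields 0, and eliminating only {"a"} yields –3 (completely incorrect), matching the Bradbard and Green classification.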

Subset selection testing (SST) of multiple choice

Rather than identifying incorrect options, the examinee attempts to construct subsets of item options that include the correct answer (Jaradat & Sawaged, 1986). The scoring for an item with four options is as follows: if the correct

