October 2007 Volume 10 Number 4 - Educational Technology ...


Research hypotheses

This study proposes four hypotheses based on the literature. First, Alexander et al. (2001) studied students in a computer technology course who completed either a PPT or a CBT in a proctored computer lab. Test scores were similar, but students in the computer-based group, particularly freshmen, completed the test in less time. Bodman and Robinson (2004) investigated the effect of several modes of test administration on scores and completion times; their results indicate that undergraduates completed computer-based tests faster than paper-based tests, with no difference in scores. Stated formally:

Hypothesis 1. The average score of CBTs is equivalent to that of PPTs.

Second, the statistical analysis of Bradbard et al. (2004) indicates that the Coombs procedure is a viable alternative to the standard scoring procedure. In Ben-Simon et al.'s (1997) classification, NS performed very poorly in discriminating between full knowledge and absence of knowledge. Stated formally:

Hypothesis 2. ET detects partial knowledge of examinees more effectively than NS.
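The contrast behind Hypothesis 2 can be sketched in code. The article does not spell out its exact point scheme, so the ET rubric below (one point per correctly eliminated distractor, a penalty when the key itself is eliminated) is an illustrative assumption based on common elimination-testing rules, not the study's own rubric.

```python
def ns_score(chosen: str, key: str) -> int:
    """Number-right (dichotomous) scoring: 1 if the chosen option
    is the key, 0 otherwise."""
    return 1 if chosen == key else 0

def et_score(eliminated: set[str], options: set[str], key: str) -> int:
    """Assumed elimination-testing rubric: +1 per distractor correctly
    eliminated; a penalty of -(k - 1) for a k-option item if the key
    itself is eliminated (a misconception)."""
    if key in eliminated:
        return -(len(options) - 1)
    return len(eliminated & (options - {key}))

options = {"A", "B", "C", "D"}
# Full knowledge: all three distractors eliminated.
print(et_score({"A", "B", "C"}, options, key="D"))  # 3
# Partial knowledge: only two distractors eliminated.
print(et_score({"A", "B"}, options, key="D"))       # 2
# NS collapses both cases (and a lucky guess) into the same score.
print(ns_score("D", key="D"))                       # 1
```

Under this rubric the partial-knowledge examinee earns an intermediate score, which dichotomous NS cannot register.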

Third, Bradbard and Green (1986) indicated that elimination testing reduces the amount of guessing, and that this effect increases over the grading period. Stated formally:

Hypothesis 3. ET lowers the number of unexpected responses of examinees more effectively than NS.

Finally, most items in mathematics and chemistry tests require manual calculation. The need to write down and work out answers on scratch paper might slow answering (Ager, 1993). Stated formally:

Hypothesis 4. Different types of question content, such as calculation and concept items, influence the performance of examinees on PPTs and CBTs.

Research variables

The independent variables used in this study and their operational definitions are as follows:

Scoring mode: partial scoring versus conventional dichotomous scoring, to analyze the influence of different answering and scoring schemes on partial knowledge.

Testing tool: conventional PPTs versus CBTs, to identify question types appropriate for CBTs.

The dependent variable adopted in this study is student performance, measured by test scores.

Experimental design

Tests were administered according to the two scoring modes and two testing tools, which were combined to form four treatments. Table 2 lists the multifactor design. Treatment 1 (T1) was CBT with ET scoring; Treatment 2 (T2) was CBT with NS scoring; Treatment 3 (T3) was PPT with ET scoring; and Treatment 4 (T4) was PPT with NS scoring.
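The 2x2 crossing that produces the four treatments can be sketched as follows; the treatment labels T1 through T4 follow the assignment given in the text.

```python
# Sketch: build the four treatments of the 2x2 multifactor design
# by crossing testing tool with scoring mode.
from itertools import product

testing_tools = ["CBT", "PPT"]
scoring_modes = ["ET", "NS"]

treatments = {
    f"T{i}": {"tool": tool, "scoring": mode}
    for i, (tool, mode) in enumerate(
        product(testing_tools, scoring_modes), start=1
    )
}

for name, cells in treatments.items():
    print(name, cells)  # T1 = CBT/ET, T2 = CBT/NS, T3 = PPT/ET, T4 = PPT/NS
```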

Table 2. Multifactor design

        CBTs    PPTs
ET      T1      T3
NS      T2      T4

Data collection

The subjects of the experiment were 102 students in an introductory operations management module, which is a required course for students of the two junior classes in the Department of Information Management at National Kaohsiung First University of Science and Technology in Taiwan. All students were required to take all four exams

