October 2007 Volume 10 Number 4 - Educational Technology ...

Of course, more comparison tests could be performed on other content subjects to identify those suited to adopting ET and CBTs in place of conventional NS and PPTs.

Since the trend in e-learning technologies and system development is toward standards-based distributed computing applications, further research should, to maintain a high-quality test bank and to promote the sharing and reuse of test items, consider incorporating the IMS Question and Test Interoperability (QTI) specification, which describes a basic structure for representing assessment data: groups, questions, and results. The CBT system could then go further by adopting the Sharable Content Object Reference Model (SCORM) and the IMS metadata specification.
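To make the QTI structure concrete, the following is a minimal sketch of a single multiple-choice item in QTI 2.x form. Element and attribute names follow the IMS QTI 2.x information model (`assessmentItem`, `responseDeclaration`, `itemBody`, `choiceInteraction`); the identifiers and the question text are illustrative only, not taken from the system described in this paper.

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="item001" title="Illustrative choice item"
                adaptive="false" timeDependent="false">
  <!-- Declares the response variable and its correct value -->
  <responseDeclaration identifier="RESPONSE" cardinality="single"
                       baseType="identifier">
    <correctResponse>
      <value>ChoiceA</value>
    </correctResponse>
  </responseDeclaration>
  <!-- The question as presented to the examinee -->
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true"
                       maxChoices="1">
      <prompt>Which scoring method credits partial knowledge?</prompt>
      <simpleChoice identifier="ChoiceA">Elimination testing</simpleChoice>
      <simpleChoice identifier="ChoiceB">Number-right scoring</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
```

Because each item carries its own response declaration and interaction markup, items authored this way can be exchanged between QTI-conformant test banks and delivery systems without reauthoring, which is the interoperability benefit the specification is cited for here.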

