
Another similar evaluation was reported in (Miller &amp; Butz 2004). That study assessed the usability and effectiveness of the Interactive Multimedia Intelligent System (IMITS), a system designed to tutor second-year electrical engineering undergraduates. IMITS was evaluated from two perspectives: usability of the software and effectiveness of the software. A usability questionnaire was used to gather students' reactions to the software, and additional usability data was obtained from the system's log files. A technique similar to the one used in DEPTHS was applied to examine the impact of IMITS on student learning: the authors used a quasi-experimental design, with a control group and an experimental group of students, and analysis of variance (ANOVA) to test the hypothesis that students who used IMITS learned more than their counterparts under control conditions. Overall, students' responses were favorable, and IMITS improved performance on at least one classroom achievement measure. Regression analyses revealed that the more students used IMITS to learn a particular engineering concept (with usage defined as the percentage of questions encountered on that concept), the better they learned it. Our evaluation design for DEPTHS is similar to that of IMITS: we reported on the system's usability and its effectiveness, and both evaluations showed that students' responses were favorable. However, we go a step further in evaluating the system's effectiveness: unlike the IMITS evaluation, the evaluation of DEPTHS includes a comparison of pre-test and post-test results within the experimental group.
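The statistical analysis described above can be sketched in a few lines of code. The snippet below is a minimal, self-contained illustration with purely hypothetical score data (the `anova_f` helper and all numbers are ours, taken from neither study): a one-way ANOVA F statistic compares control and experimental post-test scores, and a paired pre/post comparison computes the mean learning gain within the experimental group.

```python
from statistics import mean

def anova_f(*groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total sample size
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-test scores (0-100) for the two groups.
control      = [62, 58, 71, 65, 60, 68, 63, 59]
experimental = [74, 69, 78, 72, 80, 70, 75, 77]

f_stat = anova_f(control, experimental)
print(f"F(1, {len(control) + len(experimental) - 2}) = {f_stat:.2f}")

# Paired pre-test vs. post-test gain within the experimental group.
pre   = [55, 60, 48, 52, 63, 50, 58, 57]   # hypothetical pre-test scores
gains = [b - a for a, b in zip(pre, experimental)]
print(f"mean gain = {mean(gains):.1f} points")
```

A large F relative to the critical value for (1, n − 2) degrees of freedom would suggest the group difference is unlikely to be random variation; the paired gains capture the within-group improvement that the DEPTHS evaluation adds on top of the between-group comparison.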

Our approach to the evaluation of DEPTHS was influenced by the work reported in (Barker et al. 2002), which applied a qualitative approach to evaluating an adaptive multimedia tutoring system. The authors pointed out that an evaluation in real-world settings presents so many uncontrollable variables that traditional quantitative methods can give misleading results. They suggested combining qualitative and quantitative analysis and demonstrated the advantages of this combined approach in real-world contexts. Their quantitative ANOVA results showed that certain differences in mean values are unlikely to stem from random variation among students; such a strong rejection of the random-variation hypothesis would be impossible with a purely qualitative analysis. While agreeing with this approach, we also adopted Kirkpatrick's model of evaluation as the foundation of our evaluation design for DEPTHS.

We found an interesting approach to evaluating the accuracy of the student model in (Millan &amp; Perez de la Cruz 2002). The approach is based on simulated students: since the cognitive state of a simulated student, unlike that of a real student, can be determined precisely, the evaluation results are more accurate. However, as the authors pointed out, simulations cannot provide the same degree of validity as experiments with real human students. This work is a good example of separating the evaluation of a student model's accuracy from the evaluation of the efficiency of the adaptations based on that model. By evaluating an intelligent tutoring system modularly with simulated students, the effects of each component can be separated from one another, and the weakest component(s) can be identified and improved in order to increase the effectiveness of the adaptive system. In the future, we plan to apply a similar approach to DEPTHS in order to evaluate its modules, namely the student model and the pedagogical module, as well as the accuracy of the student model's assessments and the effectiveness of the adaptations. We believe this approach could yield valuable results concerning the design and efficiency of each DEPTHS module.
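The simulated-student idea can be illustrated with a minimal sketch (all names and numbers below are hypothetical, not taken from Millan &amp; Perez de la Cruz): because each simulated student's true mastery level is known exactly, the estimates produced by a student model can be scored directly against the ground truth, which is impossible with real students.

```python
import random
from statistics import mean

random.seed(42)  # deterministic simulation for reproducibility

def simulate_student(true_mastery, n_questions=50):
    """Simulated answer log: each question is answered correctly
    with probability equal to the student's true mastery level."""
    return [random.random() < true_mastery for _ in range(n_questions)]

def estimate_mastery(answers):
    """A deliberately simple stand-in student model: estimated
    mastery is the observed fraction of correct answers."""
    return sum(answers) / len(answers)

# Ground-truth mastery is known exactly for simulated students,
# so the model's error can be measured directly.
true_levels = [0.2, 0.4, 0.6, 0.8]
errors = []
for level in true_levels:
    est = estimate_mastery(simulate_student(level))
    errors.append(abs(est - level))

print(f"mean absolute error = {mean(errors):.3f}")
```

Swapping in a real student-modeling component for `estimate_mastery` would turn this harness into a modular accuracy test of that single component, independent of the pedagogical module built on top of it.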

Conclusions

This paper has presented the evaluation of DEPTHS, an intelligent tutoring system (ITS) for teaching and learning software design patterns. The adopted evaluation approach is quite general and can be equally well applied to the evaluation of other ITSs. In particular, we used the first two levels (reaction and learning) of the well-known Kirkpatrick's model (Kirkpatrick 1979). The evaluation studies targeted primarily the effectiveness of the DEPTHS system as well as the accuracy of its assumptions about the students' knowledge levels. The semester-long evaluation study provided us with insights into the strengths and weaknesses of the DEPTHS system and clarified directions for future work.

We reported several advantages of the DEPTHS system over the traditional approach to teaching and learning design patterns. Students who learned with DEPTHS found that the system helped them learn a great deal about design patterns. They were especially pleased with the system's ability to provide them with useful information, feedback messages, and advice for further work. Students' responses also indicated that regular communication with teachers and other students is an underpinning priority for the successful completion of online learning.

