
information and communication technology in order to facilitate training and feedback without necessarily increasing the workload of the personnel (Mattheos et al., 2004b).

The present study aims to describe the model of the Interactive Examination and to present the results from a multicentre evaluation study with undergraduate students in the Faculty of Odontology (OD) and the School of Teacher Education (LUT) at Malmö University. It should be emphasized from the start, however, that this study does not aim at a direct comparison of the two student groups, as differences in educational context and experimental settings would make such a comparison meaningless. Rather, what is attempted is a “parallel execution”, in which differences and similarities between the two institutions can be identified, leading to improvements in the methodology as well as giving rise to new questions for further investigation.

Material and method

General Principle of the “Interactive Examination”

In principle, the methodology is based on six explicit stages:

1. Quantitative self-assessment. At the beginning of the process, the students assess their own competence through a number of Likert-scale questions, graded from 1 (poor) to 6 (excellent). In addition, there are three open text fields where the students can elaborate further on their self-assessment. When possible, the self-assessments are compared with the instructors’ judgements of the students’ competence, and feedback is given; this process can to some extent be automated by the software. The purpose of the comparison is to highlight differences between the student’s and the instructor’s judgements, not to constitute a judgement per se. Any deviations between the self-assessment and the instructor’s assessment are communicated to the students only as a subject for reflection or a possible point of discussion with the instructor; a minimal sketch of how such a comparison could be automated follows below.
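As an illustration only, the automated part of this comparison might look something like the Python sketch below. The question names, the scores, and the two-step deviation threshold are assumptions made for the example; they do not describe the actual software used in the study.

# Hypothetical sketch: flag notable gaps between a student's Likert
# self-ratings (1 = poor, 6 = excellent) and the instructor's ratings.

DEVIATION_THRESHOLD = 2  # assumed cut-off for a "notable" deviation

def compare_assessments(self_scores, instructor_scores):
    """Return one feedback line per question with a notable deviation."""
    feedback = []
    for question, own in self_scores.items():
        other = instructor_scores.get(question)
        if other is None:
            continue  # no instructor judgement available for this item
        if abs(own - other) >= DEVIATION_THRESHOLD:
            direction = "higher" if own > other else "lower"
            feedback.append(
                f"{question}: your self-rating ({own}) is {direction} than "
                f"the instructor's ({other}); a possible point for reflection."
            )
    return feedback

# Illustrative data for one student (competence areas are invented):
student = {"diagnostics": 5, "communication": 3, "treatment planning": 2}
instructor = {"diagnostics": 3, "communication": 3, "treatment planning": 4}
for line in compare_assessments(student, instructor):
    print(line)

Note that, in keeping with the principle above, the output is phrased as material for reflection rather than as a verdict.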

2. Personal task. After completing the initial self-assessment, the students receive a personal task in the form of a problem they might encounter during their professional life. This is an interactive part of the examination, where the interaction takes place between the student and the different affordances provided (such as links, pictures, background data, etc.). The students have to devise a solution strategy and elaborate on their choices in writing.

3. Comparison task. After the personal task, the students receive a document representing the way an “expert” in the field chose to deal with the same task. This “expert” answer does not correspond to the best or the only solution, but rather to a justified rationale from an experienced colleague, which remains open to discussion. The “expert” documents have been written in advance, and the students are given access to them as they submit their responses to the personal task (a sketch of this release mechanism follows below). This is a way of dealing with the problem of providing timely feedback to a large number of students, but the “expert” answers also provide a kind of social interaction, although in a fixed (or “frozen”) form. The stance taken here is thus that, although interaction is needed for learning to take place, this interaction does not necessarily involve direct communication or collaboration between humans (cf. Wiberg in this issue); it can also be mediated by technology.
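The submit-then-release mechanism could be implemented along the lines of the following Python sketch. The class and method names are invented for the illustration and do not correspond to the system actually used in the study.

# Hypothetical sketch: a pre-written "expert" document is unlocked for a
# student only once that student has submitted a personal-task response.
from datetime import datetime, timezone

class ComparisonTaskGate:
    def __init__(self, expert_document):
        self.expert_document = expert_document  # authored in advance
        self.submissions = {}  # student_id -> (answer, submission time)

    def submit(self, student_id, answer):
        """Record the submission and release the expert document at once,
        so feedback stays timely regardless of class size."""
        self.submissions[student_id] = (answer, datetime.now(timezone.utc))
        return self.expert_document

    def has_submitted(self, student_id):
        return student_id in self.submissions

gate = ComparisonTaskGate("Expert rationale: ...")
expert_text = gate.submit("student-42", "My solution strategy is ...")

The design choice worth noting is that feedback latency is decoupled from staff availability: because the “expert” answer is authored once in advance, release is immediate for any number of students.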

With the aid of the “expert” answer, the students can, in line with the concept of the “zone of proximal development” (Vygotsky, 1978), potentially reach further than they could on their own, thus making the assessment dynamic. Dynamic assessment means that interaction can take place, and feedback can be given, during the assessment or examination, which separates it from more “traditional assessments” (Swanson & Lussier, 2001). In this way, dynamic assessment provides the possibility to learn from the assessment, but also to assess the student’s potential (“best performance”), rather than (or together with) his or her “typical performance” (Gipps, 2001). Empirical studies have shown that dynamic assessment indeed helps to improve student performance, and also that low-performing students are those who benefit the most, thus making the difference between high- and low-performing students less pronounced (Swanson & Lussier, 2001).

After receiving the “expert” document, the students must, within a week, prepare a comparison document in which they identify differences between their own answer and the “expert” answer. The students are also expected to reflect on the reasons for these differences and to try to identify their own needs for further learning. This comparison document is a part

