
Execution phase

Execution phase activities are carried out every time an e-learning system must be evaluated. They include two major jobs: a systematic inspection and a user-based evaluation.

Systematic inspection is performed by evaluators. During the inspection, the evaluator uses the ATs to carry out a rigorous and systematic analysis and produces a report in which the discovered problems are described, as suggested in the AT. The list of ATs gives the evaluator systematic guidance on how to inspect an application. Most evaluators are very good at analysing certain features of interactive applications, yet they often neglect other features that depend strictly on the specific application category. Exploiting a set of ready-to-use ATs therefore allows evaluators with limited experience in a particular domain to perform a more accurate evaluation.

User-based evaluation is conducted only when there is disagreement among the evaluators on some inspection findings, so that validation with real users becomes necessary. ATs are still useful here, since they indicate how to define the Concrete Tasks (CTs for short), i.e. the actual tasks that users are required to perform during the test. A CT is simply formulated by taking the activity description item of the AT whose application produced conflicting findings; this description is no longer general, as in the AT, but refers explicitly to the application to be evaluated.

Since the AT activity description is a formalisation of the user tasks, experimental tasks can be formulated from it directly, guiding users through the critical situations encountered by the evaluators during inspection. CTs are therefore conceived as a means of verifying the actual impact, on the users, of the specific points of the application that are supposed to be critical for e-learning quality. In this sense, they make user-based evaluation better focused, thus optimizing the use of the users’ resources and helping to obtain more precise feedback for designers.
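To make the derivation concrete, the following Python sketch shows one possible way to instantiate a CT from the activity description of an AT. The field names (code, title, activity_description) and the template-style wording are illustrative assumptions, not the exact AT items defined by the eLSE methodology.

```python
from dataclasses import dataclass

@dataclass
class AbstractTask:
    code: str                  # identifier of the AT in the library (illustrative)
    title: str
    activity_description: str  # general, application-independent wording

@dataclass
class ConcreteTask:
    derived_from: str   # code of the AT whose findings were disputed
    instruction: str    # the task actually given to test users

def derive_concrete_task(at: AbstractTask, application: str, target: str) -> ConcreteTask:
    """Instantiate the AT's general activity description on the system under test."""
    instruction = at.activity_description.format(application=application, target=target)
    return ConcreteTask(derived_from=at.code, instruction=instruction)

# Hypothetical example: an AT on locating learning material, instantiated for the
# specific application under evaluation.
at = AbstractTask(
    code="AT-12",
    title="Accessing learning material",
    activity_description="In {application}, locate and open the material for {target}.",
)
ct = derive_concrete_task(at, application="the evaluated e-learning portal", target="Lesson 3")
print(ct.instruction)
```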

During evaluation execution, a sample of users is observed while executing the CTs, and relevant data are collected (users’ actions, users’ errors, time taken to execute actions, etc.). The outcome is therefore a collection of raw data. In the result summary, these data are coded, organized into a concise form and then analyzed.
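As an illustration of this step, the sketch below records one hypothetical observation per user and CT and codes the raw data into a compact per-task summary. The measured fields simply mirror the examples listed above; they are not prescribed by the methodology.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    user_id: str
    ct_code: str
    actions: int      # number of user actions observed
    errors: int       # number of user errors observed
    seconds: float    # time taken to execute the task
    completed: bool

def summarize(observations: list[Observation]) -> dict:
    """Code the raw data into a compact summary per Concrete Task."""
    summary: dict[str, dict] = {}
    for ct in {o.ct_code for o in observations}:
        rows = [o for o in observations if o.ct_code == ct]
        summary[ct] = {
            "users": len(rows),
            "completion_rate": sum(o.completed for o in rows) / len(rows),
            "mean_errors": mean(o.errors for o in rows),
            "mean_time_s": mean(o.seconds for o in rows),
        }
    return summary

# Hypothetical raw data for one CT, collected from two observed users.
raw = [
    Observation("U01", "CT-12", actions=14, errors=2, seconds=95.0, completed=True),
    Observation("U02", "CT-12", actions=22, errors=5, seconds=180.0, completed=False),
]
print(summarize(raw))
```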

The last activity of the execution phase aims at providing the designers and developers of the application with organised evaluation feedback. Its result is an evaluation report describing the problems detected, possibly revised in the light of the user-testing outcome, which uses the terminology provided in the AT for referring to system objects or interface elements and for describing critical incidents. This standardised language increases the precision of the report and decreases the risk of misunderstandings.
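A minimal sketch of what one entry of such a report might look like is given below. The fields merely mirror the elements named above (AT reference, interface element, critical incident, user-test confirmation) and do not reproduce an official eLSE report format; the sample values are invented for illustration.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ReportEntry:
    at_code: str              # AT whose terminology is used for this problem
    interface_element: str    # named as in the AT
    problem: str
    critical_incident: str
    confirmed_by_user_test: bool

# Hypothetical entry, using the AT terminology for interface elements and incidents.
entry = ReportEntry(
    at_code="AT-12",
    interface_element="course material index",
    problem="Learners cannot tell which items are downloadable.",
    critical_incident="Test users abandoned the task after repeated unsuccessful clicks.",
    confirmed_by_user_test=True,
)
print(json.dumps(asdict(entry), indent=2))
```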

AT inspection validation

The advantage of the AT inspection for e-learning systems over other evaluation techniques has been demonstrated by a controlled experiment. The study involved seventy-three senior students of a Human-Computer Interaction (HCI) class at the University of Bari in Italy. They were divided into three groups, each assigned to one of three experimental conditions: evaluating a commercial e-learning system by applying the AT inspection, the traditional heuristic evaluation, or a thinking-aloud technique. The heuristic inspection group performed a heuristic evaluation exploiting the “learning with software” heuristics (Squires and Preece, 1999). In the user-testing group, every evaluator observed a student interacting with the e-learning application using the thinking-aloud technique. Finally, the AT inspection group used the inspection technique with the ATs proposed by the eLSE methodology.

In addition, we recruited 25 students from another computer science class, who used the e-learning system and thus acted as users in the user testing.

A week before the experiment, all participants were given a one-hour demonstration of the application to be evaluated. A brief overview of the application content and the main functions was provided, without going into too many details. A couple of days before the experiment, a training session of about one hour introduced participants to the conceptual tools to be used during the experiment. Each group attended its own training session.

