
Evaluation report of the use of Onto-Logging<br />

platform in the user site<br />

Deliverable ID: D8b<br />

Page : 15 of 110<br />

Version: 1.0<br />

Date: 27 january 2004<br />

Status: Final<br />

Confid.: Public<br />

Cognitive walkthrough. The cognitive walkthrough is an evaluation technique, proposed by Polson et al. (1992), that brings psychological theory into the informal and subjective walkthrough technique. It aims to evaluate the design in terms of how well it supports the user as he or she learns how to perform the required task. The walkthrough is performed by the designer or by an expert in cognitive psychology. The expert works through the design for a particular task, step by step, identifying potential problems against psychological criteria. The analysis focuses on the user's goals and knowledge. The cognitive walkthrough must show whether, and how, the interface will guide the user to form the correct goal for the required task and to select the actions necessary to fulfil that goal.

Heuristic evaluation. Heuristic evaluation, proposed by Nielsen and Molich, involves experts assessing the design. In this approach a set of usability criteria, or heuristics, is identified to guide design decisions. Evaluators independently run through the performance of the task set with the design and assess its conformance to the criteria at each stage. Nielsen suggests that around five evaluators find about 75% of the potential usability problems.
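Nielsen's figure can be illustrated with a simple independence assumption: if each evaluator detects a given problem with probability p, a panel of n evaluators detects it with probability 1 - (1 - p)^n. The per-evaluator detection rate used below is an illustrative assumption, not a value taken from this report.

```python
def proportion_found(p_single: float, n_evaluators: int) -> float:
    """Expected proportion of problems found by n independent evaluators,
    each detecting a given problem with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_evaluators

# Illustrative per-evaluator rate: with p = 0.25, five evaluators
# find roughly 76% of the problems, close to the figure quoted above.
for n in (1, 3, 5, 10):
    print(n, round(proportion_found(0.25, n), 2))
```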

2.3.6 Test, Experiment and Usability Engineering methods

Test, experiment and assessment evaluation methods consist of setting up a more formal setting for the evaluation, and in particular of organizing evaluation sessions with clear evaluation objectives.

Tests and assessments can be useful tools in evaluation to measure the impact of the system on the participants' processes. The tests are conducted successively without and with the use of the system, and the results are compared.

Self-reports. Participants write reports on how much the system has contributed to improving their work process (more than an opinion, users are asked for an explanation).

Usability engineering. Usability engineering is an approach to system design in which the usability issues of a system are specified quantitatively in advance. The development of the system is based on these metrics, which can be used as criteria for testing the usability of a system or prototype. Examples of such metrics are 'percentage of errors' and 'number of commands used'.

Experiments. Experiments are the most formal usability testing methods; they require preparation and knowledge to evaluate the design of a system or prototype by quantifying user performance measures such as 'time on task', 'number of errors', etc.
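For illustration, the sketch below contrasts hypothetical 'time on task' measurements collected without and with the system, in the spirit of the comparison described above; the data are invented.

```python
from statistics import mean, stdev

# Hypothetical time-on-task measurements in seconds (illustrative data only).
time_without_system = [310, 295, 342, 328, 301, 315]
time_with_system = [262, 240, 275, 251, 268, 255]

def summarise(label, times):
    """Print simple descriptive statistics for one experimental condition."""
    print(f"{label}: mean={mean(times):.1f}s, sd={stdev(times):.1f}s, n={len(times)}")

summarise("without system", time_without_system)
summarise("with system", time_with_system)
print(f"mean improvement: {mean(time_without_system) - mean(time_with_system):.1f}s")
```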
