
User Interface Design and Ergonomics - National Open University of ...


You can try to recruit test users with more similar backgrounds, and you can try to brief test users to bring them close to some common level of preparation for their tasks.

Differences in procedure, that is, how you actually conduct the test, will also add to variability. If you help some test users more than others, for example, you are asking for trouble. This reinforces the need to make careful plans about what kind of assistance you will provide. Finally, if people don't understand what they are doing, your variability will increase. Make your instructions to test users and your task descriptions as clear as you can.

f. Debriefing Test Users

It has been stressed that it is unwise to ask specific questions during a thinking-aloud test or during a bottom-line study. But what about asking questions in a debriefing session after test users have finished their tasks? There is no reason not to do this, but do not expect too much. People often don't remember very much about problems they have faced, even after a short time. Clayton remembers vividly watching a test user battle with a text-processing system for hours, and then asking afterwards what problems they had encountered. "That wasn't too bad; I don't remember any particular problems," was the answer. He interviewed a real user of a system who had come within one day of quitting a good job because of failure to master a new system; they were unable to remember any specific problem they had had. Part of what is happening appears to be that if you work through a problem and eventually solve it, even with considerable difficulty, you remember the solution but not the problem.

There is an analogy here to those hidden-picture puzzles you see on kids' menus at restaurants: there are pictures of three rabbits hidden in this picture; can you find them? When you first look at the picture you can't see them. After you find them, you cannot help seeing them. In somewhat the same way, once you figure out how something works it can be hard to see why it was ever confusing.

Something that might help you get more information out of questioning at the end of a test is recording the test session on video, so you can show the test user the particular part of the task you want to ask about. But even if you do this, don't expect too much: the user may not have any better guess than you have about what they were doing.
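To cue the video to the right moments in a debriefing session, it helps if the observer keeps timestamped notes while the test runs. The sketch below is a minimal, hypothetical helper (not part of any real testing toolkit): it records each observation with its offset from the start of the session, so the resulting cue sheet lines up with positions in the recording.

```python
import time

class SessionLog:
    """Hypothetical observer's log for a recorded test session.

    Each note stores an offset in seconds since the session started,
    so the video can later be seeked to the moment a problem occurred
    and replayed for the test user during debriefing.
    """

    def __init__(self, start_time=None):
        # A start time can be injected for testing; by default the
        # monotonic clock is read when the session begins.
        self.start = time.monotonic() if start_time is None else start_time
        self.notes = []

    def note(self, text, now=None):
        # Record an observation together with its elapsed-time offset.
        t = time.monotonic() if now is None else now
        self.notes.append((round(t - self.start, 1), text))

    def cue_sheet(self):
        # Human-readable list of video offsets to revisit in debriefing.
        return [f"{offset:>6.1f}s  {text}" for offset, text in self.notes]

# Example: two observations logged at known offsets.
log = SessionLog(start_time=0.0)
log.note("hesitated on the Save dialog", now=12.5)
log.note("gave up on search, used menus instead", now=97.0)
for line in log.cue_sheet():
    print(line)
```

The monotonic clock is used rather than wall-clock time so that the offsets match the recording even if the system clock changes mid-session.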

Another form <strong>of</strong> debriefing that is less problematic is asking for comments on specific<br />

features <strong>of</strong> the interface. People may <strong>of</strong>fer suggestions or have reactions, positive or<br />

negative, that might not otherwise be reflected in your data. This will work better if you<br />

can take the user back through the various screens they've seen during the test.<br />

4.0 CONCLUSION

In this unit, you have been introduced to evaluating designs with users. Choosing the users to test, getting the users to know what to do, providing the necessary systems and
