

A framework for analysis, design, and evaluation that is contextual in nature is needed; such a framework is both interpretive and phenomenological. It implies that information retrieval tasks are embedded in everyday life, and that meanings arise from individuals and from situations and are not generalizable except in a very limited sense. Users are diverse, and their situations are diverse as well. Their needs differ depending on their situation in time and space.

Information systems may therefore differ, offering diverse capabilities, often simultaneously within the same system, which provide an array of options the user can select. For example, such systems may offer interfaces tailored to many skill and knowledge levels; they may allow users to customize their access by adding their own entry vocabularies or remembering preferred search parameters; or they may provide a variety of output and display options. In order to move beyond the present-day large, rather brittle systems, which are designed to be evaluated on precision and recall, evaluation studies must be conducted that can be used in the design of new systems. By focusing on users as the basis for evaluative criteria, new systems that are more responsive and adaptive to diverse situations can be created.
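To make the system-centered baseline concrete: the precision and recall against which such systems are conventionally evaluated reduce to two ratios over a fixed set of relevance judgments, with no reference to the user's situation. A minimal Python sketch follows; the document IDs and judgment sets are hypothetical.

```python
def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """System-centered metrics: score a result set against fixed
    relevance judgments, independent of the user's context."""
    hits = retrieved & relevant  # documents both retrieved and judged relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example (hypothetical document IDs).
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d4", "d7"}

p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```

Note what the sketch leaves out: who judged relevance, for what task, and whether the user was satisfied. That omission is precisely the chapter's argument for user-centered criteria.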

User-centered criteria, including affective measures such as user satisfaction and situational factors such as context, are beginning to be used in research and evaluation. But this is just a beginning. Librarians and researchers alike must retain and refine their powers of critical observation about user behavior and attempt to look at both the antecedents and the results of information retrieval.

The methods used to gain insight into these issues are frequently case studies, focus groups, or in-depth interviews, which, when combined with objective measures, can afford effective methods of research and evaluation. When placing the user at the center of evaluations, it is important not to take behaviors at face value but to probe beneath the surface. Doing this successfully can mean small-scale, in-depth studies carried out by astute, thoughtful individuals, ideally a combination of both practitioners and researchers.
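As one illustration of how affective and objective measures might be paired by situation in such a small-scale study, here is a minimal Python sketch; the session records, field names, and contexts are invented for the example and do not come from the chapter.

```python
from statistics import mean

# Hypothetical per-session records from a small in-depth study:
# an objective measure (whether the search met the need, as coded by
# the analyst) alongside an affective one (self-reported satisfaction, 1-5)
# and a situational factor (the context in which the need arose).
sessions = [
    {"context": "classroom assignment", "task_completed": True,  "satisfaction": 4},
    {"context": "classroom assignment", "task_completed": False, "satisfaction": 2},
    {"context": "personal interest",    "task_completed": True,  "satisfaction": 5},
    {"context": "personal interest",    "task_completed": True,  "satisfaction": 3},
]

# Summarize each situational context separately rather than averaging
# across all users, reflecting the chapter's point that needs differ
# by situation in time and space.
for context in {s["context"] for s in sessions}:
    group = [s for s in sessions if s["context"] == context]
    completion = mean(s["task_completed"] for s in group)
    satisfaction = mean(s["satisfaction"] for s in group)
    print(f"{context}: completion={completion:.2f}, mean satisfaction={satisfaction:.1f}")
```

The sketch is deliberately simple; in practice the qualitative material from interviews would be used to interpret such numbers, not the reverse.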

