
STRATEGY, IMPLEMENTATION AND EVALUATION

knowledge to the treatment of control patients as well as intervention ones. Clustered RCTs may not provide comparable control and intervention groups, given that no two professionals, departments or hospitals are likely to be the same. In both types of trial it is not possible to 'blind' participants to the presence of the health care PCIS.

In addition, generating these kinds of 'objective' circumstances is often also undesirable. The specific routines, work processes, leadership patterns and cultures of professionals and departments within and between hospitals differ, and such issues are often key to understanding why systems fail in one situation and succeed in another. While RCTs yield 'hard data' (in the form of established relations between a very limited set of pre-set parameters), they cannot answer the why and how questions that are often the most relevant when one wants to understand PCIS implementation. Nor can they grasp the unanticipated consequences, such as sabotage of a system because of user resistance, that are often crucial to the fate of PCISs. RCT researchers themselves often stress that their designs are of limited 'real-world' use because of the artificial, laboratory-like circumstances (e.g. simulated patients, inexperienced subjects) in which the data are produced. Learning more about these issues requires methods that can capture the way information systems work in daily practice and how the system and its users interact. Much of the recent literature on ICT evaluation regards qualitative methods, such as interviews, participant observation and focus group meetings, as far more suitable for this goal (see below).

Finally, social scientists have argued that 'scientificness' comes in many guises, of which hypothesis testing is only one. Building a theory explaining a specific social phenomenon, for example, is an equally scientific endeavour, which may or may not be amenable to quantification, or to 'testing' through an RCT.

FORMATIVE EVALUATION OF PATIENT CARE INFORMATION SYSTEMS

Partly due to this historical entwinement with the RCT as 'gold standard', the PCIS evaluation literature still sharply divides 'objectivist' from 'subjectivist' approaches to PCIS evaluation. The objectivist position starts from the assumption that the merits and worth of an information system can and should be quantifiable. 'To measure is to know', it is often stated, and observations should be as objective as possible. In this way, it is argued, two observations of the same phenomenon will yield the same results, without being affected by how the resource under study functions or is used. Much attention is therefore given to avoiding measurement error and preventing biased observation. Such issues are indeed crucial for any definitive summative evaluation. For formative evaluations, however, we will argue that an overly stringent focus on objective measurement is beside the point.
