
(2002) argue is a more effective method of recovering from errors than identifying problematic or "flawed humans" (p. 140). However, in their small study, Mitchell et al. (2016) report that poor reporting processes, a lack of engagement by medical staff in reporting, poor or absent feedback and inaction on reported events, a lack of institution-level support and funding, and inadequate integration and leveraging of ever-changing health information technology remain barriers to the development and integration of effective reporting and learning systems.

12.1.3 Error and Barriers to Safety: The Human or the System?

In 1999, the Institute of Medicine (IOM) released a landmark report, To Err is Human, which many authors argue was a turning point for patient safety in the United States and more globally [20]. Amongst its many important recommendations, the report stressed that errors, although common and costly, can be prevented and patient safety improved, provided that the systems-related factors contributing to these errors become the focus of efforts to address safety issues in healthcare (IOM 1999). While many commentators argue that there is an increased appreciation of the systemic nature of errors in the healthcare setting [16, 18–20], some still assert that, unfortunately, a person-centered blame culture remains very prevalent in high reliability organizations such as aviation and healthcare, and is to some extent a "psychological tendency and an industry norm" [21]. This way of thinking about and assigning error is referred to by Reason (2000) and Dekker (2002) as the person approach, which holds that errors occur because of unwanted human variability and fallibility in otherwise safe systems. This view of error stresses that people working at the sharp end perform unsafe acts, characterized by various errors and violations that arise from abnormal cognitive processes such as forgetfulness and inattention, which can only be rectified by reducing human variability, setting better boundaries through training and discipline, and possibly even naming and shaming [22, 23].


In contrast, as highlighted by the IOM and other authors [22, 23], errors can be better understood by taking a systems approach or view. This holds that safety is an emergent property of the way in which a system is designed and not a product of the actions of its individual components [21, 24]. From this perspective, errors that occur at the sharp end are the result of a host of latent systemic conditions or design flaws, what Reason refers to as "resident pathogens" (2000; p. 769), combined with the active failures of people while performing their work. Therefore, it is not necessarily the human who causes the error (no matter the context) but rather the human's interactions with the broader system (the tools, tasks, environment, and other people in a certain organizational framework and context) which, if the system has latent failures, result in the occurrence of error. Woods and Cook (2002) stress that in order to recover from error there is a need to search for systemic vulnerabilities while understanding work as it is performed at the sharp end. This enables the detection of latent failures within the design of the system by those who operate within it, a critical step in informing decision-makers on what needs to be prioritized to improve safety and reduce the likelihood of the same thing happening again.

Effectively, it is critical to understand whether there is compatibility between the social side of work (humans, their beliefs and cultures) and the technical side of work (how it is designed, organized, and actually executed). This requires an appreciation of sociotechnical systems theory, which is expanded on below. Additionally, as articulated in the seminal paper by Rasmussen (1997), to effectively manage risk associated with work, no matter the context, there is a need to consider the various levels of stakeholders involved in the control, regulation, and execution of work. This is captured in Rasmussen's Hierarchical Risk Management Framework, which stresses the importance of the vertical integration of knowledge and decisions across all stakeholders (which, in this model, include government, regulators, company executives, and management and staff at the sharp end) [25]. In other words, knowledge and actions of how work is
