Sommerville-Software-Engineering-10ed


418 Chapter 14 ■ Resilience engineering

[Figure 14.4 Characteristics of resilient organizations: responding to threats and vulnerabilities; monitoring the organization and environment; anticipating future threats and opportunities; learning from experience.]

If potentially insecure behavior is detected, the company should respond by taking actions to understand why this has occurred and to change employee behavior.

3. The ability to anticipate A resilient organization should not simply focus on its current operations but should anticipate possible future events and changes that may affect its operations and resilience. These events may include technological innovations, changes in regulations or laws, and modifications in customer behavior. For example, wearable technology is starting to become available, and companies should now be thinking about how this might affect their current security policies and procedures.

4. The ability to learn Organizational resilience can be improved by learning from experience. It is particularly important to learn from successful responses to adverse events, such as effectively resisting a cyberattack. Learning from success allows good practice to be disseminated throughout the organization.

As Hollnagel says, to become resilient, organizations have to address all of these issues to some extent. Some will focus more on one quality than others. For example, a company running a large-scale data center may focus mostly on monitoring and responsiveness. However, a digital library that manages long-term archival information may have to anticipate how future changes may affect its business as well as respond to any immediate security threats.

14.2.1 Human error

Early work on resilience engineering was concerned with accidents in safety-critical systems and with how the behavior of human operators could lead to safety-related system failures. This led to an understanding of system defenses that is equally applicable to systems that have to withstand malicious as well as accidental human actions.

We know that people make mistakes, and, unless a system is completely automated, it is inevitable that users and system operators will sometimes do the wrong thing. Unfortunately, these human errors sometimes lead to serious system failures. Reason (Reason, 2000) suggests that the problem of human error can be viewed in two ways:

1. The person approach Errors are considered to be the responsibility of the individual, and "unsafe acts" (such as an operator failing to engage a safety barrier)
