
It is difficult to face up to the risk and uncertainty around innovation. All actions, including inaction, may have benefits and harms, but we don't know, nor can we even imagine, everything that might happen. We cannot be confident about the valuations people might attach to the possible consequences. Nor can we confidently say how likely things are to happen — in fact, the whole meaning of such a statement can be contested.

There are also multiple players who have a stake in decisions about innovations — the public (which should not be treated as some homogeneous mass, but as a diversity of 'publics'); the regulators and policy-makers; the innovators themselves; non-governmental organizations (NGOs) and so on. There may be substantial problems of trust between these actors.

Communication between these stakeholders, especially when channelled through media outlets that may not be concerned with balanced reporting, is fraught with difficulties. Any progress that can be made in improving that communication should be beneficial, and hence the search for a 'common language'.

What policy-makers might like in an ideal world

The theory of rational decision-making in the face of uncertainty comprises four basic stages:

• Structuring the list of actions, and the possible consequences of those actions.
• Giving a financial or other numerical utility to those possible future outcomes.
• Assigning a probability for each possible consequence, given each action.
• Establishing a rational decision that maximizes the expected benefit.
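
In practice the four stages can be sketched in a few lines of code. The toy decision problem below is invented purely for illustration (the actions, utilities and probabilities are not drawn from this report); it structures actions and consequences, attaches utilities and probabilities, and selects the action with the highest expected benefit.

```python
# A toy sketch of the four stages of rational decision-making under
# uncertainty. All actions, utilities and probabilities are invented.

# Stages 1-3: structure the actions, attach a utility to each possible
# consequence, and assign a probability to each consequence given the action.
decision_problem = {
    "adopt the innovation": [(+100, 0.6), (-50, 0.4)],  # (utility, probability)
    "delay and monitor":    [(+40, 0.7), (-10, 0.3)],
    "do nothing":           [(0, 1.0)],  # inaction has consequences too
}

def expected_utility(consequences):
    """Probability-weighted average of the utilities of the consequences."""
    return sum(utility * prob for utility, prob in consequences)

# Stage 4: the 'rational' decision maximizes expected benefit.
for action, consequences in decision_problem.items():
    print(f"{action}: expected utility {expected_utility(consequences):+.1f}")

best = max(decision_problem, key=lambda a: expected_utility(decision_problem[a]))
print(f"Rational choice: {best}")
```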

The real world is a messy place, however. The theory described above only holds in rarefied situations of perfect contextual knowledge, such as gambling on roulette when we know the probabilities of possible gains or losses. These pure-chance problems are the only type in which we might talk about the risk: and even then we need to make assumptions about the fairness of the wheel and the integrity of the casino.
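
For the roulette case the probabilities are fully known, so the calculation is mechanical. A minimal sketch, assuming a single-number bet on a fair 37-pocket European wheel (a standard textbook setup, not an example from this report):

```python
# Expected value of a 1-unit single-number bet on a fair European roulette
# wheel: 37 pockets, a win pays 35 to 1, otherwise the stake is lost.
p_win = 1 / 37

expected_value = p_win * 35 + (1 - p_win) * (-1)
print(f"Expected value per unit staked: {expected_value:+.4f}")  # about -0.0270
```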

What policy-makers actually get

In reasonably well-understood situations, numerical risk assessments can enable appropriate decisions for prioritising actions. The Health and Safety Executive's Tolerability of Risk framework provides appropriate guidance: in this chapter, the case study Nuclear: The Submariner's Perspective shows how societal valuations of potential consequences can be incorporated, while Adapting regulation to changing evidence on risks: delivering changes to pig inspection illustrates that risk assessment can form a basis for evidence-based regulation.

But in some areas of innovation there are likely to be different groups making claims about the risks, raising different issues with different values, and competing scientific claims based on different evidence. Even within a single group there will generally be a range of possible analyses based on different assumptions, while any predictions about how people will react to innovations must be fairly speculative.

Thus a policy-maker will be faced with plural analyses that are both contingent on assumptions and inevitably inadequate, whether self-professed as such or not. The resulting uncertainty is far more than not being able to predict the future — it is as much about 'don't know' as 'can't know'.

The problems of language

1. Definition of terms

The crucial distinction between 'hazard' — the potential for harm if mitigation is not put in place — and 'risk' is well-known. But frank discussion about risk and uncertainty is not helped by the variety of language used by people in different domains, not least in the meanings of the terms 'risk' and 'uncertainty'. For example, many scientists would use the term 'uncertainty' for everything that was not certain, including a single coin flip, and only distinguish the extent to which the uncertainty was quantifiable. In contrast, those with an economics and social science background will often adopt the distinction made by Frank Knight between 'risk' — in which extensive data and good understanding of a controlled environment lead to agreed quantification — and 'uncertainty', for when this quantification is not feasible. The term 'ambiguity' is also, ironically, ambiguous: some use it to refer to situations in which outcomes and values are contested or unknown (see Chapter 4), while in behavioural economics it refers to uncertainty about probabilities.

2. Communicating using numbers

Even when numbers are agreed, there is a wide variety of ways in which probabilities may be expressed: as percentages, odds, frequencies, in graphics and so on. For example, the Safety Cases for high-hazard installations provided to the Health and Safety Executive might mention that the individual risk per annum (IRPA) = 4 × 10⁻⁴: it means that each worker has a 4 in 10,000, or a 1 in 2,500, risk of being killed each year, but seems almost designed to prevent comprehension.
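
To illustrate how the same figure can be re-expressed, the short sketch below (an illustration only, not a format prescribed by the Health and Safety Executive) converts an IRPA of 4 × 10⁻⁴ into the percentage, frequency and odds formats mentioned above.

```python
# One annual risk of death, re-expressed in several common formats.
irpa = 4e-4  # individual risk per annum, 4 x 10^-4

one_in_n = round(1 / irpa)  # 2,500

print(f"Scientific notation: {irpa:.0e} per year")        # 4e-04
print(f"Percentage:          {irpa:.2%} per year")        # 0.04%
print(f"Natural frequency:   4 in {round(4 / irpa):,}")   # 4 in 10,000
print(f"Natural frequency:   1 in {one_in_n:,}")          # 1 in 2,500
print(f"Odds against:        {one_in_n - 1:,} to 1")      # 2,499 to 1
```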

Research has shown that the way numbers are framed influences our perceptions. The 99% Campaign, an initiative that aims to dispel negative stereotypes about young people, offered one example of a positively framed message when it issued advertisements proclaiming that "99% of young Londoners do not commit serious youth violence". This, of course, conveys the same statistical information as saying that 1% do commit such violence, yet the two framings leave very different impressions.
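
The framing effect is easy to demonstrate in code. In the sketch below, the population figure is a made-up round number used purely for illustration, not campaign data; the point is that the same proportion yields two very different messages.

```python
# The same statistic framed positively and negatively. The population
# figure is a made-up round number for illustration, not campaign data.
young_londoners = 1_000_000
rate = 0.01  # proportion committing serious youth violence

positive = f"{1 - rate:.0%} of young Londoners do not commit serious youth violence."
negative = (f"{rate:.0%} of young Londoners, around "
            f"{int(rate * young_londoners):,} individuals, do.")

print("Positive frame:", positive)
print("Negative frame:", negative)
```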

The important lesson from numerical risk communication is that one size does not fit all.
