progress in AI had probabilities that were far from negligible. At present, there is no indication that the concerns of the public and of researchers in other fields have been assuaged by the AAAI panel’s interim report or any subsequent outreach effort.

What, then, can we infer from these two pieces of work? The survey suggests that some AI researchers believe the development of advanced AI systems poses non-negligible risks of extremely bad outcomes for humanity, whereas the AAAI panel was skeptical of radical outcomes. Under these circumstances it is impossible to rule out a genuine risk, which makes the case for deeper investigation of the potential problem and the possible responses, and for including long-term risks from AI in government horizon-scanning efforts.

Challenges of managing existential risks from emerging technology

Existential risks from emerging technologies pose distinctive challenges for regulation, for the following reasons:

1. The stakes involved in an existential catastrophe are extremely large, so even an extremely small risk can carry an unacceptably large expected cost 24. We should therefore seek a high degree of certainty that all reasonable steps have been taken to minimize existential risks that have a sufficient baseline of scientific plausibility.
2. All of the technologies discussed above are likely to be difficult to control (much harder than nuclear weapons): small states or even non-state actors may eventually be able to cause major global problems.
3. The development of these technologies may be unexpectedly rapid, catching the political world off guard.
This highlights the importance of carefully considering existential risks in the context of horizon-scanning efforts, foresight programs, risk and uncertainty assessments, and policy-oriented research.

4. Unlike risks with smaller stakes, we cannot rely on learning to manage existential risks through trial and error. Instead, it is important for government to investigate potential existential risks and develop appropriate responses even when the potential threat and the options for mitigating it are highly uncertain or speculative.

As we seek to maintain and develop the adaptive institutions necessary to manage existential risks from emerging technologies, some political challenges are worth considering:

1. Reducing the risk of an existential catastrophe is a global public good, because everyone benefits 25. Markets typically undersupply global public goods, and large-scale cooperation is often required to overcome this. Even a large country acting in the interests of its citizens may have incentives to underinvest in ameliorating existential risk. For some threats the situation may be even worse, since even a single non-compliant country could pose severe problems.
2. The measures we take to prepare for existential risks from emerging technology will inevitably be speculative, making it hard to achieve consensus about how to respond.
3. Actions we might take to ameliorate these risks are likely to involve regulation.
The costs of such regulation would likely be concentrated on the regulators and the affected industries, whereas the benefits would be widely dispersed and largely invisible: a classic recipe for regulatory failure.

4. Many of the benefits of minimizing existential risks accrue to future generations, whose interests are inherently difficult to incorporate into political decision-making.

Conclusion

In the coming decades we may face existential risks from a number of sources, including engineered pathogens, advanced AI, and geoengineering. In response, we must consider these potential risks in the context of horizon-scanning efforts, foresight programs, risk and uncertainty assessments, and policy-oriented research. This will involve significant political and coordination challenges, but given the high stakes we must take reasonable steps to ensure that we fully realize the potential gains from these technologies while keeping any existential risks to an absolute minimum.
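The expected-cost argument above (that an extremely small probability of an extremely large loss can still carry an unacceptably large expected cost) can be made concrete with a back-of-the-envelope calculation. All figures below are purely illustrative assumptions, not estimates from this chapter:

```python
# Back-of-the-envelope expected-cost calculation.
# Both numbers below are illustrative assumptions, not figures from the chapter.

probability_per_century = 1e-4   # an "extremely small" 0.01% risk (assumed)
lives_at_stake = 8e9             # roughly the present world population (assumed)

# Expected cost = probability of the catastrophe x loss if it occurs.
expected_lives_lost = probability_per_century * lives_at_stake
print(f"{expected_lives_lost:,.0f}")  # prints 800,000
```

Even at a one-in-ten-thousand probability per century, the expected loss under these assumed numbers is comparable to that of large, recurring disasters, which is the sense in which a very small risk can carry an unacceptably large expected cost.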
HIGH LEVEL CASE STUDY: NATURAL DISASTERS AND THE INSURANCE INDUSTRY
Julia Slingo (Met Office Chief Scientist), Rowan Douglas (Chairman, Willis Research Network), Trevor Maynard (Head of Exposure Management and Reinsurance, Lloyd’s of London)