SRA 2019 Annual Meeting

Global Catastrophic Risk Session and Poster Presentation
Society for Risk Analysis 2019 Annual Meeting
8-12 December, Arlington, VA.

Part of GCRI’s ongoing SRA presence.

Symposium: Global Catastrophic Risks
Time: Monday 9 December, 1:30-3:00
Chair: Anthony Barrett

Title: Global Catastrophic Risk Decision Analysis
Author: Seth D. Baum, Global Catastrophic Risk Institute

Title: US Policy for Reducing Global Catastrophic Risk
Author: Jared Brown, Future of Life Institute and Global Catastrophic Risk Institute

Title: Biotechnology as an Emerging Global Catastrophic Risk
Author: Gary A. Ackerman, University at Albany

Title: The Caveman and the Bomb: Psychological Obstacles to Rational Decisions About the Use of Nuclear Weapons
Author: Paul Slovic*, Decision Research and University of Oregon

Title: Regulating Best-Case Scenarios
Author: Arden Rowell*, University of Illinois College of Law

SRA Poster Session
Time: Monday 9 December, 6:00-8:00

Title: Value Alignment Strategies for AI Catastrophe Risk Management
Author: Anthony Barrett, Global Catastrophic Risk Institute

***

Symposium: Global Catastrophic Risks
Chair: Anthony Barrett

Title: Global Catastrophic Risk Decision Analysis
Author: Seth D. Baum, Global Catastrophic Risk Institute

This presentation discusses issues of risk quantification in decision-making on global catastrophic risk (GCR). As very rare extreme events, GCRs are difficult to quantify. Nonetheless, some important decisions can require quantification, especially where there are tradeoffs among GCRs or between GCRs and other important factors. Examples include (1) nuclear disarmament, in which there may be a tradeoff between the probability and severity of major war, (2) the use of nuclear explosives for deflecting large Earthbound asteroids, which may increase violent conflict risk, and (3) the launch of advanced artificial intelligence, which could be catastrophic itself or reduce other GCRs, depending on its design. The presentation discusses analytical options for evaluating these difficult decisions and reflects on the role of risk analysis in GCR decision-making processes.
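
To make the probability-severity tradeoff in example (1) concrete, the following minimal sketch (in Python) compares expected harm under two stylized nuclear postures. It is an illustration only; the scenario names and all numbers are hypothetical assumptions, not estimates from the presentation.

# Minimal sketch of the probability-severity tradeoff; all figures are
# hypothetical placeholders, not estimates from the presentation.

def expected_harm(annual_probability, severity):
    """Expected harm per year = probability of major war x severity if it occurs."""
    return annual_probability * severity

# Hypothetical posture A: large arsenals (lower war probability, higher severity).
status_quo = expected_harm(annual_probability=0.005, severity=1.0)

# Hypothetical posture B: deep disarmament (higher war probability, lower severity).
disarmament = expected_harm(annual_probability=0.010, severity=0.3)

print(f"Status quo expected harm per year:  {status_quo:.4f}")
print(f"Disarmament expected harm per year: {disarmament:.4f}")

Even this toy comparison shows why quantification matters: the ranking of the options can flip depending on how the probability and severity estimates shift.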

Title: US Policy for Reducing Global Catastrophic Risk
Author: Jared Brown, Future of Life Institute and Global Catastrophic Risk Institute

This presentation discusses current and prospective US federal government policies and programs for reducing global catastrophic risk (GCR). Current policies and programs focus on the prevention of or response to specific threats and do not have the capacity to address global catastrophic consequences. For example, while the Strategic National Stockpile is conceptually applicable to GCR, it is generally focused on a defined set of public health emergencies and has only limited capacity to mitigate truly global catastrophic consequences. Generally absent from the existing portfolio of policies and programs is an “all-hazards” GCR reduction program that works across all GCRs, including so-called “black swan” events that do not fit within the remit of existing programs. This presentation explores how the federal government could adapt existing programs, or establish new ones, to provide broader, all-hazards GCR reduction. An all-hazards program may be especially valuable in the face of GCR from emerging technologies, which are hard to forecast and do not fit neatly within existing single-risk policies and programs.

Title: Biotechnology as an Emerging Global Catastrophic Risk
Author: Gary A. Ackerman, University at Albany

With dramatic innovations like CRISPR, bioinformatics and nanoencapsulation, biotechnology is advancing at a rate undreamt of a generation ago. This brings an urgency to assessing any associated risks – arising from either accident or malice – especially those that could qualify as catastrophic for humanity and the biosphere. Such risks could include, but are hardly limited to, novel pathogens that circumvent existing medical countermeasures, gene drives that propagate infertility and lead to the extinction of vital species, and biopeptides that alter human or animal behavior. This presentation will conceptualize biotechnology as a global catastrophic risk (GCR) within a broad framework, drawing on specific examples. In the process, it will draw on other GCRs (e.g., nuclear war, climate change, asteroid collision) for lessons about risk assessment, risk communication and risk management, while highlighting those challenges unique to biotechnology. It will also discuss lessons that past attempts to resolve biotechnology risk (e.g., the Asilomar Conference on Recombinant DNA) might hold for other emerging GCRs. The presentation will provide some general approaches to assessing biotechnology risk and offer policy recommendations for ensuring that potential risks are dealt with appropriately by the scientific community, government and other stakeholders.

Title: The Caveman and the Bomb: Psychological Obstacles to Rational Decisions About the Use of Nuclear Weapons
Author: Paul Slovic*, Decision Research and University of Oregon

Shortly after the dawn of the nuclear era, psychologists and other behavioral scientists began the empirical study of the factors influencing decision making in the face of risk. The findings are worrisome, identifying cognitive quirks and limitations that challenge the ability of our leaders to make rational decisions about using nuclear weapons. In my talk I shall review psychological processes, conscious and non-conscious, active and passive, that help explain how governments and their citizens can allow nuclear war to occur. Perpetrating mass killing with nuclear weapons may arise from cognitive and social mechanisms such as psychic numbing, compassion collapse, tribalism, dehumanization of others, blaming of victims, attentional failures, and faulty decision making processes, all of which work to destroy feelings and understanding that would normally stop us from planning, executing, and tolerating inhumane acts. What reason is there to believe that we are now in a new age of enlightenment where we will no longer behave in this way? How can we prevent the vast lethal potential of nuclear weapons from being unleashed because these psychological processes, some of which have guided humans since we left our caves, have inhibited rational decision making?

Title: Regulating Best-Case Scenarios
Author: Arden Rowell*, University of Illinois College of Law

How should policymakers account for the possibility of extreme-upside events, such as might result from successfully colonizing other planets, ending malnutrition, eradicating malaria, developing autonomous vehicles, or implementing other transformational new technologies? Although there is an increasingly robust literature on catastrophic risk, designed to help policymakers manage extreme-downside risks, there is surprisingly little corresponding literature examining the appropriate management of extreme-upside possibilities. This talk will consider possible explanations for the general neglect of extreme-upside scenarios in policy analysis, examine the extent to which existing research on extreme-downside events might transfer to extreme-upside events, and flag reasons that “best-case scenarios” might sometimes justify distinctive policy treatment.

* Paul Slovic and Arden Rowell were placed in our session by the SRA program committee and were not recruited by GCRI. We are nonetheless delighted to have these two distinguished scholars in our session.

SRA Poster Session
Time: Monday 9 December, 6:00-8:00

Title: Value Alignment Strategies for AI Catastrophe Risk Management
Author: Anthony Barrett, Global Catastrophic Risk Institute

In this talk, I discuss key dynamics of artificial intelligence (AI) development, risks of AI catastrophe, and value alignment strategies for AI catastrophe risk management. I focus on strategies involving combinations of cooperative governance methods, as well as potential deterrence methods to provide key actors with disincentives for risky actions. I provide qualitative analysis of these strategies in terms of basic game theory, risk-risk trade-offs, and policy implications.
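
As a rough illustration of the kind of basic game theory analysis the abstract refers to, the sketch below (in Python) sets up a toy two-developer payoff matrix. The action labels and payoff numbers are hypothetical assumptions for illustration, not the poster's analysis.

# Illustrative sketch only: a toy two-developer game over AI safety precautions.
# Payoffs are hypothetical and merely show the kind of cooperate/defect structure
# that a basic game theory analysis examines.

# Payoff to (row player, column player); "safe" = invest in safety and cooperative
# governance, "race" = cut safety to develop faster.
PAYOFFS = {
    ("safe", "safe"): (3, 3),   # mutual cooperation: slower progress, low catastrophe risk
    ("safe", "race"): (1, 4),   # cooperator falls behind while overall risk rises
    ("race", "safe"): (4, 1),
    ("race", "race"): (2, 2),   # mutual racing: highest catastrophe risk
}

def best_response(opponent_action):
    """Row player's best response to a fixed opponent action."""
    return max(["safe", "race"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

for opponent in ["safe", "race"]:
    print(f"If the other developer plays {opponent!r}, the best response is {best_response(opponent)!r}")

# With these hypothetical payoffs, racing is each actor's best response; governance or
# deterrence mechanisms would aim to change the payoffs so that 'safe' becomes the
# best response instead.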