SRA 2015 Annual Meeting

Global Catastrophic Risk Sessions
Society for Risk Analysis 2015 Annual Meeting
6-10 December, Arlington, VA.

Part of GCRI’s ongoing SRA presence.

Symposium 1: Quantifying Armed Conflict and Social Unrest
Time: Monday 7 December, 15:30-17:00
Chair: Anthony Barrett

Title: Forecasting armed conflict: Risks and interventions
Authors: Elisabeth Gilmore, University of Maryland (with H Hegre, H Buhaug, K Calvin, J Nordkvelle, S Waldhoff)

Title: Risk and policy analysis of nuclear war
Authors: Seth Baum, Global Catastrophic Risk Institute (with A Barrett)

Title: Modeling risk preferences in attacker-defender games
Authors: Jun Zhang, University at Buffalo, SUNY (with V Madasseri Payyappalli, J Zhuang, V Jose)

Title: Benefit cost analysis in a strategic and risky environment
Authors: Alexander Alexeev, Indiana University (with K Krutilla)

Title: Mental models for evaluating radicalization: A complex systems approach for ideological diversity and rapid ideological change
Author: Vanessa Schweizer, University of Waterloo

Symposium 2: Global Catastrophic Risks
Time: Tuesday 8 December, 15:30-17:00
Chair: Seth Baum

Title: Climate change as a global catastrophic risk
Authors: Bilal Ayyub, University of Maryland (with J Scouras)

Title: Analyzing long term risks of artificial intelligence catastrophe
Authors: Anthony Barrett, Global Catastrophic Risk Institute (with S Baum)

Title: Geoengineering and the distant future of Earth’s climate
Author: Jacob Haqq-Misra, Blue Marble Space Institute of Science

Title: Nuclear war as a global catastrophic risk
Authors: James Scouras, Johns Hopkins University Applied Physics Laboratory (with B Ayyub)

Title: New pathways to global catastrophic risks
Authors: Bruce Tonn, University of Tennessee (with D Stiefel)

***

Symposium 1: Quantifying Armed Conflict and Social Unrest
Time: Monday 7 December, 15:30-17:00
Chairs: Anthony Barrett and Kevin Brand

Title: Forecasting armed conflict: Risks and interventions
Authors: Elisabeth Gilmore, University of Maryland (with H Hegre, H Buhaug, K Calvin, J Nordkvelle, S Waldhoff)
Projections for armed intrastate conflict (civil war) depend on expectations of socioeconomic development. For example, economic growth and higher educational attainment lower the risk of armed conflict. Here, we forecast the risk of armed intrastate conflict along five alternative socioeconomic pathways, known as the Shared Socioeconomic Pathways (SSPs). First, we develop a statistical model of the historical effect of key variables – population size and socioeconomic development (GDP per capita and educational attainment) – on country-specific conflict incidence, 1960–2013. Based on this model, we then forecast the annual incidence of conflict, 2014–2100, along the five SSPs. SSPs with greater welfare improvements are associated with the largest reductions in conflict risk. The marginal effect of socioeconomic development on reducing conflict risk is also much higher for the least developed countries. Importantly, this implies that poverty alleviation and investments in human capital in poor countries are likely to be much more effective instruments for attaining global peace and stability than maximizing growth in wealthier economies. Further, the SSPs contain information about the challenges to mitigation of and adaptation to climate change. We find that the sustainable development pathway, with lower challenges to mitigation and adaptation, is as conducive to global peace as a higher-growth, fossil-fuel-based development pathway. Thus, the sustainable development pathway stands out as a “no regrets” strategy for preventing climate change and intrastate conflict.
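The kind of statistical forecast described in this abstract can be sketched as a logistic model mapping development indicators to an annual conflict probability. This is a minimal illustration only: the functional form and every coefficient below are hypothetical stand-ins, not values from the authors' model.

```python
import math

def conflict_probability(log_gdp_pc, school_years, log_pop, coefs):
    """Annual conflict-incidence probability from a logistic model.

    Illustrative only: coefficients and functional form are hypothetical
    stand-ins for the statistical model described in the abstract.
    """
    b0, b_gdp, b_edu, b_pop = coefs
    z = b0 + b_gdp * log_gdp_pc + b_edu * school_years + b_pop * log_pop
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: higher income and education reduce risk,
# larger population raises it (signs consistent with the abstract).
coefs = (-2.0, -0.4, -0.1, 0.3)

# Two stylized countries of equal population, differing in development.
poor = conflict_probability(math.log(1_000), 4.0, math.log(30e6), coefs)
rich = conflict_probability(math.log(30_000), 12.0, math.log(30e6), coefs)
```

Forecasting along an SSP would then amount to feeding each pathway's projected GDP, education, and population trajectories through such a model year by year.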

Title: Risk and policy analysis of nuclear war
Authors: Seth Baum, Anthony Barrett, Global Catastrophic Risk Institute
Despite the end of the Cold War, there are still thousands of nuclear weapons in the world, mainly in Russia and the United States. The ongoing probability of nuclear war is not zero, and some have argued that it is higher now than during the Cold War. Models have been developed to estimate the probability of nuclear war and to evaluate opportunities to reduce that probability. A key factor is the quality of relations between nuclear-armed states. The impacts of nuclear war would be catastrophic. In addition to the destruction from the explosions, there could be severe global environmental consequences, including nuclear winter. Only limited analysis of the human impacts of nuclear winter is available, focusing mainly on the possibility of famine. Risk perspectives are gaining currency in international policy debates about nuclear weapons, and some changes to the status quo of nuclear arsenals, doctrine, deployment, and plans for disarmament could be valuable for reducing nuclear war risk.

Title: Modeling risk preferences in attacker-defender games
Authors: Jun Zhang, University at Buffalo, SUNY (with V Madasseri Payyappalli, J Zhuang, V Jose)
Most attacker-defender game models treat players as risk-neutral, whereas in reality attackers and defenders may be risk-seeking or risk-averse. This paper studies the impact of players’ risk preferences on their equilibrium behavior. In particular, we study the effects of risk preferences in a single-period, sequential game where the defender chooses from a continuous range of investment levels that could potentially deter an attack. We consider both single-target and multiple-target cases, as well as quantal responses by the attackers. This study provides insights that could be used by policy analysts and decision makers involved in security and safety investment decisions.
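A single-target sequential game of this type can be sketched as follows. The contest function, parameter values, and the use of exponential (CARA) utility to encode risk preference are all illustrative assumptions, not the paper's actual model; the sketch only shows the qualitative effect the abstract describes, namely that the attacker's risk preference shifts the defender's equilibrium investment.

```python
import math

def success_prob(defense):
    # Hypothetical contest function: more defensive investment
    # lowers the attack's success probability.
    return 1.0 / (1.0 + defense)

def attacker_attacks(defense, prize, attack_cost, risk_aversion):
    """Attacker moves second: attack iff expected utility of attacking
    exceeds not attacking (normalized to 0). Exponential (CARA) utility
    encodes risk preference: risk_aversion > 0 is risk-averse, < 0 is
    risk-seeking, near 0 is risk-neutral."""
    def u(x):
        if abs(risk_aversion) < 1e-9:
            return x
        return (1.0 - math.exp(-risk_aversion * x)) / risk_aversion
    p = success_prob(defense)
    eu_attack = p * u(prize - attack_cost) + (1 - p) * u(-attack_cost)
    return eu_attack > 0

def optimal_defense(prize, attack_cost, risk_aversion, loss=10.0):
    """Defender moves first, minimizing total expected cost
    (investment, plus expected loss if an attack still occurs)."""
    best_d, best_cost = 0.0, float("inf")
    for i in range(201):                    # grid search over [0, 20]
        d = i * 0.1
        if attacker_attacks(d, prize, attack_cost, risk_aversion):
            cost = d + success_prob(d) * loss
        else:
            cost = d                        # attack deterred
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A risk-averse attacker is deterred by a lower defensive investment
# than a risk-seeking one (all parameter values are hypothetical).
d_averse = optimal_defense(prize=5.0, attack_cost=1.0, risk_aversion=1.0)
d_seeking = optimal_defense(prize=5.0, attack_cost=1.0, risk_aversion=-1.0)
```

Under these assumed parameters the defender deters the risk-averse attacker with a small investment, whereas the risk-seeking attacker attacks anyway and the defender instead balances investment against expected loss.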

Title: Benefit cost analysis in a strategic and risky environment
Authors: Alexander Alexeev, Indiana University (with K Krutilla)
Modeling security risks is a complex, multilevel problem involving different actors and strategic environments. This research uses a game-theoretic model to determine the optimal strategic defensive investment of a country under threat of a terrorist attack. Both the terrorist organization and the defending country must decide how much of their available resources and effort to allocate to the attack or defense, in order to maximize gains or minimize losses, respectively. The model allows for asymmetry in the effectiveness of investments in attack and defense, different economic consequences of the attack for the attacker and the defender, and differing risk perceptions. The model also relaxes assumptions about agents’ rationality. Nash solutions in pure and mixed strategies are derived and preliminary results are analyzed. The paper concludes with a discussion and recommendations for future research.

Title: Mental models for evaluating radicalization: A complex systems approach for ideological diversity and rapid ideological change
Author: Vanessa Schweizer, University of Waterloo
Ideology is recognized as playing an influential role in numerous policy agendas and decisions as well as individual behaviors. However, research on the determinants of ideology – let alone the development of dynamic models for ideological change – is rudimentary. This presentation describes progress on the application of complex systems approaches to understanding the diversity of ideologies; potential pathways for ideological transitions or change, which can be rapid (as in the case of radicalization); and cross-level interactions between the mental models of individuals and collective identity. The implications of this work for anticipating conflict will also be discussed.

***

Symposium 2: Global Catastrophic Risks
Time: Tuesday 8 December, 15:30-17:00
Chair: Drew Rak

Title: Climate change as a global catastrophic risk
Authors: Bilal Ayyub, University of Maryland (with J Scouras)
Global catastrophic risks are associated with natural or anthropogenic events that have the potential to inflict serious damage on human well-being on a global scale, including destroying or crippling modern civilization. Global climate change is generally considered a result of increasing atmospheric concentrations of greenhouse gases, mainly due to human activity. The 2013 report of the Intergovernmental Panel on Climate Change states that nearly 75% of the total radiative forcing increase since the year 1750 is due to CO2 emissions. Most climate change effects will persist for centuries even if CO2 emissions are stopped. Climate change (i.e., global temperature increase) can in turn lead to global sea level rise and affect the frequency and intensity of extreme weather and climate events, such as storms and heat waves. Linking global changes to local geographic impacts requires a deep understanding of many physical phenomena and their interactions, including deep ocean temperatures, currents, the geophysics of ocean basins, and gravitational forces. Possible strategies for long-term decision making require an understanding of these links. In the interim, engineers may rely on adaptive techniques for design and mitigation with real options. Tradeoffs may be based on low-regret criteria.

Title: Analyzing long term risks of artificial intelligence catastrophe
Authors: Anthony Barrett, Global Catastrophic Risk Institute (with S Baum)
Artificial Intelligence (AI) is increasingly recognized as presenting a significant risk at some point in the future. While AI researchers and developers typically do not intend to cause harm through their work, harm may nonetheless occur due to accidents and unintended consequences. In the absence of adequate safety mechanisms, an extremely powerful AI may even be likely to cause human extinction. Thus long term AI risk scenarios merit attention even if their probabilities are low. While the AI risk scenarios are highly uncertain, established risk analysis methodologies can make progress at characterizing the risk and informing decision making. We present an initial set of graphical models that represent core elements of major pathways to global catastrophe involving extremely powerful AI at some point in the future. The models use fault tree and influence diagram conventions to depict combinations of events and conditions that could lead to AI catastrophe, as well as intervention options that could decrease risks. Model structures are derived from published literature on long term risk of AI catastrophe.

Title: Geoengineering and the distant future of Earth’s climate
Author: Jacob Haqq-Misra, Blue Marble Space Institute of Science
The climate of Earth is susceptible to catastrophes that could threaten the longevity of human civilization. The distant future of Earth’s biosphere will be shaped by the balance among factors such as orbital variations in solar insolation, cycles in glacial coverage, and the carbonate-silicate cycle. The resonating effects of anthropogenic contributions to climate change may extend the length of the present interglacial and could even damp out the 100,000-year glacial cycle, leading Earth on a path toward an ice-free state that only weakly responds to orbital forcing. On even longer geologic timescales, the climate will be forced to adapt to a steadily brightening sun by drawing down atmospheric carbon dioxide until habitable conditions no longer remain. Geoengineering to reduce incoming solar radiation has been suggested as a way to mitigate the warming effects of contemporary climate change, and geoengineering may also serve as humanity’s last hope to withstand the sun’s transition into a red giant. Geoengineering technology may therefore be useful to keep on hand in case of a “climate emergency”. However, a pre-emptive geoengineering program lasting thousands of years could also be used to enlarge the polar ice caps and create a permanently cooler global climate. Such a large-ice-cap state would be more resilient to climate threats and could allow human civilization to survive further into the future than otherwise possible. Intentionally extending Earth’s glacial coverage would require uninterrupted commitment to this program for millennia but would ultimately reach a cooler equilibrium state where geoengineering is no longer needed. Whether or not this program is ever attempted, this application of geoengineering to short-term and long-range climate emergencies illustrates the need to identify preferable climate states that could ensure the long-term success of civilization.

Title: Nuclear war as a global catastrophic risk
Authors: James Scouras, Johns Hopkins University Applied Physics Laboratory (with B Ayyub)
Nuclear war is on most short lists of global catastrophes facing humanity. As with many other scenarios on those lists, large uncertainties in likelihood and consequences complicate the acceptance of risk reduction strategies. However, nuclear war also has unique characteristics that set it apart from natural catastrophes and even from other anthropogenic catastrophes. In particular, there is a critical linkage between the likelihood of nuclear war and its anticipated consequences. The strategy of mutual assured destruction exploits this linkage by maintaining the specter of horrific consequences in order to keep the likelihood of large-scale nuclear war low. Moreover, nuclear strategy intentionally maintains uncertainty in the potential for small nuclear wars to escalate into large ones, thereby reinforcing the taboo against nuclear war at any scale. However, nuclear strategy may be changing as we face the possibility of nuclear war arising from nonstate actors and new nuclear states, against which traditional deterrence may be more prone to failure.

Title: New pathways to global catastrophic risks
Authors: Bruce Tonn, University of Tennessee (with D Stiefel)
Global catastrophic risks are real, as are the attendant existential risks of human extinction. Some previous research has identified the paths by which unintended consequences of climate change, plagues, and artificial intelligence, among others, increase those risks (Tonn & Stiefel 2014). Other research has focused on methods to establish an acceptable risk of human extinction (Tonn 2009); to estimate the existential risks of human extinction (Tonn & Stiefel 2013); and to identify the conditions under which society ought to implement actions to reduce these existential risks (Tonn & Stiefel 2013). This paper synthesizes these research streams to identify new pathways to global catastrophic risks, applying a new methodology to the unintended consequences produced by natural and human systems and weighting those consequences by the existential risks of human extinction. The first section of the paper updates the integrated framework of unintended consequences throughout natural and human systems to incorporate existential risks of human extinction. The framework considers social, technological, economic, environmental, and political contexts as well as the interactions among the consequences. In the second section, the framework is applied to a database of unintended consequences via the new methodology. For each case, the thematic global catastrophic risks are noted as new pathways. In the third section, the question of next actions is addressed by theme in the context of a six-component framework for action, ranging from Level I, do nothing, to Level VI, an extreme war footing in which the economy is organized around reducing global catastrophic risks. Lastly, the fourth section of the paper assesses the implications for researchers and policymakers given the updated integrated framework of unintended consequences, the methodology, the new pathways to global catastrophic risks, and the six-component framework for action.