GCRI 2019 Plans: Global Catastrophic Risk Topics

GCRI’s ultimate aim is to develop the best ways of reducing global catastrophic risk. We recognize that global catastrophic risk is a complex and multifaceted issue area, and we do not start with any sweeping assumptions about what the best ways of reducing the risk may be. Instead, we work across a range of risks in order to identify the best opportunities. (In principle, we aspire to work across all of the risks, but at this time we can only work on some.) This is an ambitious and challenging agenda, but we believe it is the best way to address global catastrophic risk.

This post describes our 2019 plans for the global catastrophic risk topics we work on. It is a broad overview of what we could work on, and much of the work could continue beyond 2019. What we actually work on will depend on what we receive funding for and which specific projects appear most promising. In some cases, important opportunities could arise that we had not foreseen, and our agenda could shift accordingly.

We recently published pages for the seven primary topics that we work on: aftermath of global catastrophe, artificial intelligence, cross-risk evaluation & prioritization, nanotechnology, nuclear war, risk & decision analysis, and solutions & strategy. The topic pages provide brief summaries of the topics and of our work on them. This post provides more detail on our 2019 plans for each topic.

Of all these topics, our largest focus is currently AI, due to both funder interest and an abundance of good opportunities for us to contribute. We tentatively plan to continue focusing primarily on AI, though this will depend on our 2019 funding. That said, we encourage funders to consider any of these topics worthy of support. Indeed, the topics are closely interrelated, each adding an important dimension to our overall agenda.

GCRI has an important and distinctive role to play in understanding and addressing global catastrophic risk. We have expertise across a wide range of risks, which allows us to understand how the risks interact and to apply insights from different risk areas. We have social science and policy expertise that is not common in the field of global catastrophic risk. We use this expertise to engage with important stakeholders to find practical solutions that they might actually use. And we are one of the few groups working on important, understudied aspects of global catastrophic risk, including the aftermath of global catastrophe, the social science of artificial intelligence, the quantitative analysis of nuclear war risk, and pretty much anything related to long-term nanotechnology. We therefore hope to have the opportunity to pursue the plans described here.

Aftermath of Global Catastrophe

The aftermath concerns what happens to any people who survive a global catastrophe. Do they retain or rebuild civilization? What are the long-term consequences? This is a crucial topic because what happens after a catastrophe is the main factor in determining its total severity. It also has some important practical implications. Should global catastrophic risk communities focus narrowly on extinction events, such as runaway AI, or on a broader set of catastrophes? What, if anything, should society do to aid global catastrophe survivors? Unfortunately, the question of what would happen in the aftermath of a global catastrophe is both deeply uncertain and heavily neglected. GCRI is one of the only groups working on the question, and it is not even a major focus for us.

Our recent paper Long-term trajectories of human civilization is a good example of the work that we can do. This paper looks at the long-term fate of survivors to make progress on which catastrophic risks should be prioritized. The paper is an initial foray into this important and compelling topic; it is more of a research agenda than a final set of conclusions.

Two papers that illustrate our work on the aftermath of specific catastrophe scenarios are Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold and A model for the impacts of nuclear war. The asteroid paper critiques simplistic attempts to quantify the severity of sub-extinction global catastrophes, while the nuclear war paper presents a detailed model of the threats that survivors would face. If we had more of a dedicated program on the aftermath, we would further develop these sorts of analyses and link them to more general considerations such as in the Long-term trajectories paper. The result would be guidance for how to prioritize attention to different global catastrophic risks and how to improve prospects for survivors across a wide range of global catastrophe scenarios.

Artificial Intelligence

AI is GCRI’s largest focus, and it will probably continue to be in 2019. Many of our funders request that we focus on AI, and we also see excellent opportunities there. AI is just starting to become an important social and policy issue, creating a lot of demand for research, policy outreach, and related activities. At the same time, relatively little work is being done on the social and policy dimensions of AI. GCRI has a substantial head start here, which we are taking advantage of.

Because of the significant demand for insights on managing AI risk, the primary focus of our AI work is developing practical solutions for reducing the risk. We focus specifically on the real-world social and institutional settings in which AI is developed and influenced. Unfortunately, to date, there has been little effort to apply social science insights to these settings. GCRI is well-positioned to make progress here because we have social science backgrounds and because we work across multiple risks. Those risks have generally received more social science attention than AI, so we can transfer insights from them to AI. Climate change research has been a particularly fruitful source of insight, but it is not the only one.

Our AI solutions work involves a mix of mapping out the present state of affairs, exploring potential future states, developing ideas for how to steer the future states in better directions, and conducting outreach to put these ideas into action. We are studying each of the following: the computer science communities that develop AI; the academic, corporate, and government institutions that support, constrain, or otherwise influence AI; and the intellectual and activist communities that shape the debate over AI.

By putting the people who affect AI front and center, we can develop solutions that make sense from their perspective and can actually be implemented, instead of just theoretical ideals that have academic appeal but would not affect the risks themselves. This point is developed in more detail in our paper Reconciliation between factions focused on near-term and long-term artificial intelligence. The paper argues that while there are reasons for being especially concerned about catastrophic risks from long-term AI, it is often more useful to focus on debates over near-term AI, because this is where the important decisions are being made. This has proven to be a very useful guide for structuring research and outreach on AI solutions.

Our work to date on AI solutions has just scratched the surface. We plan to continue applying research insights from other issues, learning more about the people who affect AI, and crafting customized solutions for reducing AI risk. We also plan to expand our outreach efforts to make these solutions happen. We will also continue to coordinate closely with other groups that share our concerns about AI risk. Some specific aspects of AI for which we see especially good opportunities are:

• Cooperation on AI ethics and safety, drawing on our familiarity with the empirical literature on collective action by scholars such as Elinor Ostrom, and our expertise on the empirical details of actual AI projects, such as our AI survey paper.
• International security dimensions of AI, a growing policy topic for which our background in nuclear weapons and other international security topics leaves us very well positioned to contribute.
• United States government activities on AI, which we monitor closely and are well-positioned to engage with due to our close ties to US policy communities and our reputations as experts on AI policy.
• Debates over AI risks, for which we can apply our expertise on the social science of technical policy debates, as in our recent papers on AI skepticism and misinformation.

A secondary focus of our work on AI is risk and decision analysis. This aims to evaluate the importance of AI risk (primarily catastrophic risk from long-term AI) and provide guidance for decisions to reduce the risk. Our AI risk analysis publications to date have focused on outlining general methodology, developing model structures from existing literature, and applying the model to analyze debates in the literature. Our ongoing work goes beyond the literature, using expert interviews to tease out subtle details of the risk.
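To give a sense of what a model structure of this sort can look like, here is a deliberately stylized decomposition of long-term AI catastrophe risk into a chain of conditional probabilities. It is an illustration written for this post, not the actual model from our publications, and the event labels are placeholders:

\[
P(\text{catastrophe}) \;=\; P(\text{AGI is developed}) \times P(\text{control fails} \mid \text{AGI developed}) \times P(\text{catastrophe} \mid \text{control fails})
\]

Structuring the risk this way can make expert input more tractable: interviewees can be asked about individual factors and the considerations behind them, rather than being asked to estimate the total risk in one step.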

The expert interviews have proven to be a time-consuming project. This has prompted us to reflect on whether they are worth the time and to consider more streamlined approaches. That reflection is part of a broader assessment we are undertaking of the role of risk and decision analysis in global catastrophic risk. We plan to complete the current round of AI expert interviews, see what we learn from it, and assess what (if any) additional AI risk and decision analysis would be worth pursuing.

Finally, we do a small amount of work on AI ethics, focused on which ethical values to program into AI systems. This work is somewhat tangential to GCRI’s primary themes. Usually, we focus on more practical and technical details, and leave the ethics for our many colleagues who have stronger backgrounds in philosophy. For AI ethics, we happen to have some good opportunities to provide a novel and complementary perspective, drawing on our backgrounds in economics and environmental ethics. A central question is whether it is better for humans to decide which ethics to program into AI or for the AI to learn its ethics from observations of humans (and possibly also observations of non-humans). Our publication Social choice ethics in artificial intelligence provides an initial inquiry into the topic. Additional studies are in progress. We plan to finish these studies, though we do not anticipate this becoming a significant focus for our AI work.

Cross-Risk Evaluation & Prioritization

This topic is the conceptual heart of GCRI. GCRI was designed from the beginning to work across risks in order to evaluate which risks and risk-reduction interventions are most important and to guide decisions on how to prioritize attention and action. We developed a concept of integrated assessment to guide our cross-risk activities. We believe this is the most fundamental topic for research and action on global catastrophic risk. Without it, efforts to address global catastrophic risk are largely flying blind and prone to error. That can result in a suboptimal allocation of resources and, in some cases, even an inadvertent increase in the risk. For example:

• Some scarce resources, such as policy-maker attention and philanthropic money, can be allocated to a range of different global catastrophic risks. What would the optimal allocation of these resources be?
• Some technologies can decrease some global catastrophic risks while increasing others. Examples include artificial intelligence, nanotechnology, and nuclear power. Under what circumstances would they cause a net increase or net decrease in global catastrophic risk?

Answering these questions is easier said than done. It requires quantifying the various global catastrophic risks and the effects of the technologies and resources on the risks. For example, nuclear power could decrease climate change risk and increase nuclear war risk; the question is which effect is larger. As discussed in the section on risk & decision analysis, rigorously quantifying the global catastrophic risks is a challenging activity. Because quantifying the risks is so difficult, it is often best to focus on simpler decisions for which quantification is not necessary. However, many important decisions, including those listed above, require quantification of multiple global catastrophic risks.
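To make the comparison concrete, the net effect of a technology such as nuclear power can be written in stylized expected-value terms. This is a simplification for illustration, not a model from our publications:

\[
\Delta R \;=\; \sum_i \Delta p_i \, S_i
\]

where i ranges over the affected risks (here, climate change and nuclear war), \(\Delta p_i\) is the change the technology makes to the probability of catastrophe i, and \(S_i\) is the severity of that catastrophe. The technology yields a net reduction in global catastrophic risk if \(\Delta R < 0\). The hard part, of course, is estimating the \(\Delta p_i\) and \(S_i\) terms, which is exactly the quantification challenge described above.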

Unfortunately, our cross-risk work has never received any significant funding. Instead, funders almost always support work on specific risks. We have found remarkably little interest from funders in assessing which risks are most important to focus on in the first place. We believe this is a mistake: as outlined in our paper Value of GCR information: Cost effectiveness-based approach for global catastrophic risk (GCR) reduction, a little bit of research can go a long way toward improving important decisions.

If our cross-risk work continues to receive minimal funding, we plan to keep chipping away at it because we believe it to be important. However, we hope to attract dedicated funding, in which case this could become a primary focus of our work. If so, we would remain mindful of the large effort needed to rigorously quantify the global catastrophic risks and of the question of whether that effort is worthwhile; indeed, such considerations would be central to the work. The result could be a better use of both our own time and the wider pool of resources available for addressing global catastrophic risk.

Nanotechnology

Nanotechnology was a major topic of study and debate about 10-20 years ago. It was seen as a major class of emerging technologies, with profound implications for human society and the global environment. Remarkably, within the last 10 years or so, the conversation has almost completely disappeared, especially for the long-term nanotechnology (known as molecular nanotechnology or atomically precise manufacturing) that could be radically transformative.

The reason for the decline in attention is a bit of a mystery. The underlying technology hasn’t stopped progressing. Indeed, the 2016 Nobel Prize in Chemistry was awarded for advances in highly relevant aspects of nanotechnology, namely the design and synthesis of molecular machines. My guess is that the explanation is mainly social: the topic simply fell out of favor among the communities of people who work on these sorts of things. The scientific nanotechnology community is mainly interested in near-term applications, while futurist and catastrophic risk communities have taken interest in other long-term technologies, especially artificial intelligence.

I had been among those neglecting nanotechnology until former GCRI Junior Associate Steven Umbrello took an interest in it. He and I ended up collaborating on a paper Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing, and he has also done other work on the topic.

Our paper addresses a basic question: do we want more attention to and investment in future nanotechnology? The technology could decrease some risks, such as climate change, while increasing others, such as artificial intelligence. If it increases some risks more than it decreases others, then the fact that it gets so little attention may be a good thing. The paper lays out some basics for this question, but it is too complex a question to answer rigorously in one paper.

Further work could make more progress on whether to encourage more attention to future nanotechnology. For example, we could incorporate information about the aftermath of global catastrophes, especially the long-term trajectories of human civilization, to assess the relative importance of nanotechnology’s effects on AI and climate change risks. GCRI is very well set up to make progress here because this work draws on our work on the aftermath of global catastrophes and on cross-risk evaluation & prioritization.

Other aspects of nanotechnology are also important. There should be a more careful evaluation of the state of affairs in nanotechnology and prospects for future breakthroughs. This work could resemble our survey of AI R&D projects. There should also be analysis of how best to govern nanotechnology so as to improve its overall impacts. However, arguably the most important question about nanotechnology is whether to draw more attention to it. If we conclude that it should get more attention, and if we succeed in getting it more attention, then other people may end up addressing these other aspects of the topic.

I am confident that nanotechnology should get at least a bit more attention than it currently does, especially from the wider global catastrophic risks community. Consider this: there are so few people currently working on long-term nanotechnology that journals are struggling to find peer reviewers for the papers that Steven Umbrello and I have been writing. Our co-authored paper was published with only one peer reviewer instead of the usual two or three. Journals have asked me to review his papers even though that would generally be considered inappropriate given our close relationship. For the basic health of the field, there needs to be at least a few more people working on the topic.

Therefore, I recommend that funders support additional nanotechnology work from GCRI and also fund other people capable of producing quality work on the topic; we may be able to help identify such people. I would recommend a relatively small initial investment aimed at learning more about whether nanotechnology should receive greater attention. Because the goal at this stage is assessment rather than publicity, it would be best to fund people whose work does not necessarily attract large amounts of attention. This is one instance in which funding people at prestigious institutions may be disadvantageous.

Nuclear War

Nuclear war has commanded high-level research and policy attention for seven decades and continues to receive major attention today. However, it has not received much attention from a risk perspective. Therefore, our work on nuclear war focuses on advancing the risk analysis of nuclear war and relating that to important ongoing policy conversations and decisions. Having already published a lot of research on this, we plan for our 2019 work to focus on outreach to policy communities and other important decision-makers. Three specific opportunities stand out as especially promising for us to pursue.

First, advise the wider global catastrophic risks community on its activities on nuclear war risk. This is the easiest opportunity for us because we are very much in the middle of this community and it already seeks our advice on nuclear war. We could do more, for example, to help it craft philanthropic strategy and policy positions. Our availability for philanthropic advice is limited by the fact that we continue to need funding for our own activities, including our activities on nuclear war, which creates a conflict of interest. However, this is otherwise a role that we could readily play if circumstances permit.

Second, explore the policy significance of global harms from nuclear war, in particular nuclear winter and damage to the global economy. From a global catastrophic risk perspective, nuclear winter is generally seen as the most important impact of nuclear war, while global economic damage is perhaps an underrated factor. However, nuclear weapons policy rarely accounts for these impacts. Instead, most policy conversations assume nuclear war is so overwhelmingly catastrophic that the only sensible policy is to prevent it. This position holds regardless of any global harms.

GCRI has previously published research on dedicated policy for nuclear winter, in particular winter-safe deterrence and options for aiding nuclear winter survivors. A good next step would be to convene discussions with relevant policy makers to solicit their perspectives on how nuclear winter (and possibly also global economic damage) could affect nuclear weapons policy. This would provide direction for future research and outreach, and would also be useful input to the wider global catastrophic risk community.

Third, advise the international debate over nuclear disarmament, specifically by promoting a risk perspective. This would be an ambitious project because it involves promoting new ways of thinking for a major international debate in which both sides have somewhat entrenched views. However, the payoffs could be quite large, including a major breakthrough in the international nuclear disarmament debate and a more prominent role for risk analysis in international security policy. I have had some success on this in previous outreach to the international community, enough that I believe this approach is worth pursuing.

The basic idea for the international nuclear disarmament debate is simple enough. The debate largely consists of one side arguing for more disarmament because nuclear weapons increase the severity of the humanitarian consequences of war, and the other side arguing for less disarmament because nuclear weapons decrease the probability of war by strengthening deterrence. A starting point would be to explain to both sides that they are arguing about different aspects of risk and to propose that a more careful accounting of risk could help the two sides reach some measure of consensus. Perhaps they could both agree to favor a disarmament policy that keeps the world safer in risk terms. That would be a major breakthrough for the international community, and it would set the stage for further risk analysis, which may then be in high demand from the international community.
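In stylized risk terms (my shorthand, not language from the debate itself), the two sides are arguing about different factors of the same product:

\[
R_{\text{nuclear war}} \;=\; p_{\text{war}} \times C_{\text{war}}
\]

The humanitarian side argues that nuclear weapons increase the consequences \(C_{\text{war}}\), while the deterrence side argues that they decrease the probability \(p_{\text{war}}\). A risk perspective asks which policies minimize the product rather than either factor alone, and that is a question both sides could, at least in principle, agree to examine together.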

Risk & Decision Analysis

Risk and decision analysis are vital methodologies for evaluating global catastrophic risks and the opportunities to reduce them. These methodologies provide a rigorous framework for assessing which risks are most important and which opportunities are the best and most effective. GCRI has a longstanding emphasis on risk and decision analysis through our research and our active participation in the professional risk and decision analysis communities.

Much of our 2019 risk and decision analysis work will focus on specific risks and decisions. For AI, we are wrapping up a detailed risk analysis and evaluating future prospects. For nanotechnology, we plan to analyze the decision of whether nanotechnology should receive more attention. For nuclear war, we plan to engage with important decision-makers to discuss how risk and decision analysis can inform the decisions they face.

Another focus of our 2019 risk and decision analysis work will be to develop guidelines for the use of risk and decision analysis for global catastrophic risk. As I elaborate in a separate blog post, we are currently assessing the appropriate role for risk and decision analysis in global catastrophic risk. On one hand, risk and decision analysis is an essential tool for making sound decisions on many important aspects of global catastrophic risk. On the other hand, the analysis can be difficult and expensive, it may not reach definitive conclusions, and important decision-makers may not use its results. We are reflecting on our own risk and decision analysis experience and consulting with senior experts (such as our Senior Advisor John Garrick) to develop guidelines for our own work and that of the wider global catastrophic risk community.

Solutions & Strategy

Ultimately, what matters most is not the risks themselves but what we can do to reduce them. GCRI’s strategy for reducing global catastrophic risks is twofold. First, we use research to understand the risks and the opportunities to reduce them. This includes risk and decision analysis for evaluating the risks and opportunities, as well as other types of research. Second, we conduct outreach to share our ideas and learn the perspectives of those who can reduce the risks. The aim is to develop solutions that make sense from their perspective and therefore can be implemented in actual, real-world decision-making, rather than remaining theoretical academic ideals.

As with our risk & decision analysis, much of our 2019 solutions and strategy work will focus on specific risk reduction opportunities. For AI, we plan to continue outreach to AI R&D groups and policy communities. For nanotechnology, we plan to focus on basic research to assess what outreach would be good to pursue. For nuclear war, we plan to focus on outreach to a range of communities to identify opportunities for our research findings to influence decision-making.

In addition to this, we plan to continue to lead the push for practical approaches to global catastrophic risk reduction within the broader global catastrophic risk community. We are certainly not the only group working on practical solutions, but we have an important role to play. As a think tank, we are positioned to translate theoretical research into practical solutions. As social scientists, we are positioned to understand global catastrophic risk decisions from the decision-makers’ perspectives. And as respected leaders within the global catastrophic risk community, we have the opportunity to champion a practical approach.

Concluding Remarks

This blog post describes a wide range of plans that GCRI can pursue in 2019. It’s a large agenda, and much of the work could spill over into subsequent years. This agenda is designed to take advantage of GCRI’s distinctive strengths in order to best address the global catastrophic risks. It works across a wide range of risks, enabling us to apply insights from different risks and to answer important cross-cutting questions. It emphasizes a number of important, understudied global catastrophic risk topics. And it prioritizes practical solutions to inform important real-world decisions. Exactly what we end up doing will depend on what we get funding for and what particular opportunities happen to arise. We look forward to discussing the possibilities with potential funders and others who are interested in supporting this agenda.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.