GCRI 2018 Accomplishments

2018 has been a productive year for GCRI across a range of areas: research, outreach, mentoring, and organization development. Or rather, it has been productive relative to our current rate of funding. As discussed below, we could do a lot more if funds were available, and much of our current work is on raising those funds and ensuring that they are put to the best use. GCRI has an important and distinctive role to play on global catastrophic risk, and we believe that the risks would be more successfully addressed if GCRI could operate at a larger scale.

This post reviews our primary accomplishments of the year. Please note that the accomplishments listed here pertain exclusively to work funded by GCRI; they do not include work of former GCRI affiliates. For more on changes to our affiliates, please see here.

Here is a summary of our 2018 accomplishments:

  • Six research papers: two on artificial intelligence, two on nuclear war, and two on cross-cutting themes
  • A greater focus on outreach to influence important decisions, especially on AI policy and governance
  • Some mentoring of junior people, though not as much as we could have done
  • Extensive organization development to refine our activities and raise funds to scale up

Research

We have published six new papers in 2018, and we have made progress on several more. (Work on a paper often spans multiple years.)

Two of these papers focus on artificial intelligence: Superintelligence skepticism as a political tool and Countering superintelligence misinformation, both written by myself and published in the journal Information. These papers cover some potential problems with debates about AI, in particular the prospect that the debates could feature politicized skepticism or misinformation similar to what has occurred for climate change and related issues. The first paper lays out the potential problem, while the second discusses solutions. A central conclusion is that it could be much more effective to work on solutions now, while the debates remain relatively calm and misinformation has not spread widely. Both papers leverage my experience with climate change debates and the accompanying research literatures in political science and psychology. The papers were written entirely in 2018. They were a central focus of a presentation I gave at the UC Berkeley Center for Human-Compatible AI, and they are also the basis for some nascent outreach efforts aimed at implementing the papers’ recommendations.

Two papers focus on nuclear war: A model for the impacts of nuclear war and A model for the probability of nuclear war, written by Tony Barrett, Robert de Neufville, and myself, and published as GCRI working papers. These papers present what we believe to be the most detailed models of nuclear war risk yet available. Both are written for a more general audience and serve as good introductions to the application of risk analysis to nuclear war and other global catastrophic risks. A running theme is the complexity of both the probability and the severity of nuclear war, and thus the difficulty of rigorously quantifying them. Indeed, the papers abstain from quantifying the probability and severity because rigorous quantification could not be accomplished within their scope. The papers were largely written prior to 2018 with funding from the Global Challenges Foundation; in 2018, we polished them up and published them. They also served as the basis for presentations at the UCLA Garrick Institute for Risk Sciences and EAGx-Australia, as well as a podcast with the Future of Life Institute.

One paper focuses on cross-cutting evaluation of global catastrophic risks: Long-term trajectories of human civilization, forthcoming in the journal Foresight. I am the lead author, and there are 14 co-authors in total, including myself. The paper is based on a discussion group I led at last year’s Workshop on Existential Risk to Humanity, hosted by Olle Häggström at Chalmers University of Technology and the University of Gothenburg in Gothenburg, Sweden. The co-authors were all participants in the discussion group. This was my first time leading a large group paper. The large team of co-authors definitely made it a better paper, and I gained confidence in my ability to play a leadership role in the paper process. Regarding the substance of the paper, it breaks important ground on the evaluation of the long-term fate of human civilization, including for the survivors of global catastrophes. A central question in the paper is whether human extinction risks are fundamentally more important than “mere” global catastrophes that leave some survivors. The paper tentatively answers no: the former is not fundamentally more important than the latter. The paper also served as the basis for a presentation at EA Global San Francisco and an article in progress for the BBC.

Finally, one paper uses asteroid risk analysis to make a broader point about the analysis of global catastrophic risk: Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold, written by myself and published in Natural Hazards. Asteroids are not usually seen as one of the more probable, and thus more important, global catastrophic risks. However, they are an important case study because they are arguably the best-understood global catastrophic risk. The underlying astronomy and collision physics are relatively simple, and the asteroid research community has invested heavily in risk analysis for many years. The purpose of this paper is to show that asteroid risk is actually a lot more uncertain than one might think and than the literature indicates. In particular, the human consequences of asteroid collisions are highly uncertain, especially for globally catastrophic collisions. This point is of broad significance to the study of global catastrophic risk: essentially, if we don’t even understand asteroid risk, then we don’t really understand any global catastrophic risks. The paper argues for humility in global catastrophic risk analysis and in the corresponding decision-making. This is a point that I’ve been making informally for many years, and I finally had the chance to develop it and write it up. The paper also provides a platform for engaging with the asteroid risk community, which is in many ways very sophisticated and exemplary. And it was a rare chance to publish a paper in Natural Hazards, which is the top journal in that field. This paper was written entirely in 2018. It was motivated in part by a talk I gave at the Cambridge Centre for the Study of Existential Risk.

In addition to the above, GCRI made progress on several research projects that have not yet been published:

Tony Barrett focused most of his GCRI hours on AI risk analysis. He is leading a detailed study of expert views of AI as they relate to the AI risk model we previously published (see this, this, and this). This project uses a very careful and thorough methodology, which should yield more robust insights; on the other hand, it has also been more time-consuming. Our experience with this and other risk analyses has prompted us to reflect on the circumstances in which such detailed analysis is worth the effort (see here for further discussion) and to consider more streamlined methodologies for future work. We plan to finish this current piece of research, see what has been learned, and then decide what further AI risk analysis to pursue.

I spent significant time earlier this year on military applications of AI. Military AI has been a growing topic of study, but there is not much work on it from a global catastrophic risk perspective. This is an interesting and potentially important topic in its own right, and quality publications on it could also serve as a valuable platform for engaging military and international security communities on AI, something Tony Barrett and I have already had some success with. With that in mind, I have been taking the time to make sure I get the analysis right, so my work on this topic has not yet been published.

My primary ongoing project at the moment is updating and extending my survey of artificial general intelligence R&D projects. The initial survey was highly successful, and I hope the extensions will be too. The primary extension is a series of interviews with people at the projects, to dive deeper into their activities and perspectives. I believe it is important to understand the personalities involved in AGI R&D in order to help them steer their projects in more positive directions. This is challenging research because it requires careful social science methodology as well as at least a modest competence in the computer science of AGI. However, I believe it will prove to be very worthwhile.

I spent some time this year on ethics issues in AI and also in space exploration. The latter was led by Andrea Owe, a graduate student who interned with me in conjunction with the Blue Marble Space Institute of Science, which I have had close ties to for many years. I have done less on outer space topics in recent years, but Ms. Owe’s internship afforded the opportunity to do a bit more. Both the AI and the space ethics projects have important implications for global catastrophic risk, and I look forward to the eventual publication of the papers.

I have a second paper on asteroid risk currently under revise-and-resubmit at Risk Analysis. This paper looks at risk-risk tradeoffs related to the use of nuclear weapons to deflect Earthbound asteroids. It is loosely based on a short article on this topic that I published in 2015 in the Bulletin of the Atomic Scientists. I did some work on this paper in 2016 and 2017 and finally got it together in 2018. As with the other asteroid paper noted above, this isn’t necessarily the most important aspect of global catastrophic risk, but it’s full of insights that are more widely applicable, in this case concerning risk-risk tradeoffs. Among other things, it shows that my original Bulletin article was fundamentally mistaken. I look forward to publishing this paper and using it to help guide the global catastrophic risk community on risk and decision analysis. And I will be glad to finally have a research paper published in Risk Analysis, which is the Society for Risk Analysis’s flagship journal and the most important one in the field.

Finally, we have also worked to help other people with their research. For example, I have completed peer reviews of 20 journal article submissions, including 5 for Science & Engineering Ethics after one of their editors unexpectedly passed away and 4 for Information as part of their growing focus on AI. We’ve also provided feedback on various draft papers and helped colleagues think through their research ideas. This work typically gets little recognition, but it is vital for improving the quality of scholarship on global catastrophic risk and related topics.

Outreach

GCRI’s outreach consists of interactions with people who can have significant impacts on global catastrophic risk. These people can include policymakers, people in industry, the media, and our colleagues in the global catastrophic risk community. (Mentoring activities are discussed separately below.) Our outreach may be based on specific research we have published or on our general understanding of the risks.

GCRI has always done some outreach, but we have done more than usual in 2018. There are several reasons for this. As our research progresses, we build up more expertise that we can share through outreach. Indeed, we are increasingly of the view that our impact on the risks is limited less by research and more by outreach, and that the same may also apply to the wider global catastrophic risk community. Meanwhile, in 2018 we happened upon some unusually compelling outreach opportunities. Finally, we wanted to provide a larger demonstration of our outreach capacity in hopes of attracting more dedicated funding for it. Historically, our funding has been almost exclusively for research, but we believe it would be best for us to have more of a balance between research and outreach.

What follows is a loose summary of our 2018 outreach work. I regret that we must exercise some discretion in describing our outreach, because much of it consists of private conversations. There are significant chunks that are at most only alluded to here. I am able to share some additional information privately for those who need it.

The largest theme of our 2018 outreach has been AI policy and governance. It is an excellent time for this outreach. Policy communities are just starting to pay attention to AI. The ideas they settle on now could continue to define their thinking and actions into the future, a phenomenon known as path dependence. They have been very interested in our ideas, so we have gone out of our way to share them.

Our AI outreach has largely focused on the US national security policy community. Tony Barrett and I both participated in invitation-only workshops on AI and national security, at the Center for a New American Security (Tony Barrett) and at the Federation of American Scientists (myself). (I also participated in an invitation-only workshop on AI and cybersecurity at the University of Cambridge Centre for the Study of Existential Risk.) In addition, throughout the year, GCRI has participated in numerous discussions with senior US Congressional staff on matters pertinent to AI and national security. GCRI has also helped facilitate the participation of the broader community of AI and global catastrophic risk organizations in these discussions.

We have also made some effort at outreach on international dimensions of AI policy and governance. One venue for this is the UN High-level Panel on Digital Cooperation, which we have identified as an important channel for sharing AI governance ideas with the international diplomatic community. The panel has an open call for contributions, and we are leading an effort to develop and submit contributions in response.

We also wrote one article on AI for the popular media, Preventing an AI Apocalypse, published by Project Syndicate and republished by other media outlets around the world (details here). Overall, we have been doing less popular media work recently because we see better opportunities elsewhere, though we still do some when good opportunities arise.

Another international dimension concerns relations between China and the West. China and the West are both leading developers of AI, and tensions between them could be central to how AI development unfolds. We at GCRI are not China specialists, but we are generally knowledgeable about international relations. Therefore, we have been building relationships with China specialists in order to best contribute our expertise to this important issue.

Another theme of our 2018 outreach is nuclear weapons. This work has been more indirect, focused mainly on helping some of our global catastrophic risk colleagues refine their strategies for work on nuclear weapons. One more direct effort was the Federation of American Scientists workshop mentioned above. The workshop was on the implications of emerging technologies for nuclear weapons risks, and my contribution was specifically on the implications of AI.

Finally, we have done a small amount of outreach on cross-cutting aspects of global catastrophic risk. One example is the discussion of my paper Long-term trajectories of human civilization at the EA Global San Francisco conference, which focused on whether the global catastrophic risk community should focus primarily on extinction risks or on a wider space of risks. Another example is my paper Resilience to global catastrophe, which summarizes this important topic for the dual academic and policy audience of the International Risk Governance Center.

There are a few general themes that cut across much of our outreach. One is the synergy between our outreach and our research on developing and advancing practical solutions for reducing global catastrophic risk. Through outreach, we learn about the perspectives and opportunities of important decision-makers. That feeds into our research on developing solutions for reducing the risk, which in turn informs our outreach. For example, our AI outreach has largely followed the strategy outlined in our paper Reconciliation between factions focused on near-term and long-term artificial intelligence.

Another theme is our close collaboration with other people and organizations in the wider global catastrophic risk community. Organizations we have collaborated with include the Center for Human-Compatible AI, the Centre for the Study of Existential Risk, the Federation of American Scientists, the Future of Life Institute, the Garrick Institute for Risk Sciences, and numerous others. (Our collaborations with the listed organizations are described in part elsewhere in this post.) This collaborative approach helps the outreach go further and helps us focus our efforts where we add the most value. The global catastrophic risk community is vibrant and highly capable, and we are proud to play a central role within it.

Mentoring

GCRI’s mentoring leverages our broad interdisciplinary expertise, our wide professional networks, and our comfort with remote collaboration. This enables us to help young people from diverse backgrounds, regardless of where in the world they live. We help them learn more about global catastrophic risk and their potential roles in working on it, and we connect them with senior professionals who can help them further.

Historically, GCRI has mainly done long-term mentoring via our Junior Associates program. Through this program, we have interacted with younger people over extended periods, often several years. In some cases, we have even co-authored papers with them, such as the paper Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing, co-authored by former Junior Associate Steven Umbrello and myself. We have also advised them on graduate school options; both Umbrello and Matthijs Maas were accepted into excellent programs where they will continue important research. This long-term mentoring extended into 2018.

In 2018, we also began a process of providing more short-term mentoring. This was done mainly in collaboration with the organization 80,000 Hours, which provides career advising for young people seeking to make the world a better place. Global catastrophic risk is one of the issue areas that they cover. This year, I had the opportunity to do some career advising for them. This has been a chance to provide input to a larger number of people and to help them connect with our wider networks of more senior people. 80,000 Hours is an excellent organization and we hope to continue partnering with them in the future.

Our mentoring activities are currently paused while we work on organization development. We hope to significantly scale up our mentoring, and we are assessing the best ways to do this. We expect it will involve a mix of short-term and long-term mentoring, as well as connecting junior people with senior people in our networks. It could also involve developing resources for junior people to learn more about global catastrophic risk. Exactly what we do will likely depend on the availability of funding to support our mentoring work. Mentoring is one area where we could be doing a lot more than we are, especially if funding were available for it.

Organization Development

Finally, in 2018, we have spent significant time on organization development. This has aimed to improve our current operations and position the organization to do even better in the future. To do this, we have consulted with numerous people from across and beyond the global catastrophic risk community, examined our strengths and weaknesses, and developed ideas for how GCRI can best contribute to the reduction of global catastrophic risk.

One change we’ve already made is to overhaul our affiliates program. We removed all of our Associates and Junior Associates, as well as our Deputy Director, and established a Senior Advisory Council. GCRI is a dynamic organization and our roster of affiliates had simply fallen out of date. More detail on this change is available here.

Another change is a series of new topics pages that describe our ongoing areas of focus. We list seven topics: aftermath of global catastrophe, artificial intelligence, cross-risk evaluation & prioritization, nanotechnology, nuclear war, risk & decision analysis, and solutions & strategy. These replace older “projects” pages that had also fallen out of date. In addition to these topics pages, we also published a detailed blog post describing our 2019 plans for each of the topics.

Looking ahead, the central challenge we face is to scale GCRI up. We have a lot of potential that remains untapped simply due to lack of funding. There is enough funding in the global catastrophic risk nonprofit sector, but much of it only goes to larger organizations. (This is documented, for example, here.) There is a chicken-and-egg problem in which organizations need to be large in order to get larger amounts of funding.

Unfortunately, GCRI has been caught on the wrong side of the chicken-and-egg problem. Therefore, our challenge is to first ensure that we are worthy of the larger amounts of funding that would enable us to scale up, and then to demonstrate this to funders capable of providing us with sufficient funding to do so. This is the primary focus of our ongoing organization development. It will be a major focus of our time until we succeed in getting the funding or until we exhaust our options for doing so.

Regarding why GCRI may be worthy of larger amounts of funding, several things can be said. The first is that we can scale up: at a time when much of the global catastrophic risk community has a shortage of senior talent (documented here), we have an abundance of senior talent that could start working if funding were available. Second, we have a distinctive and important role to play in the wider landscape of global catastrophic risk organizations, deriving from our deep and highly practical research expertise, our wide networks both within and beyond the global catastrophic risk community, and our highly flexible institutional structure. GCRI is in a unique position to advance the scholarship of global catastrophic risk, to translate that scholarship into real-world risk reduction, and to strengthen the talent pool and networks of people working on global catastrophic risk. We hope to secure the funding that would enable us to do all of this at scale.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.