2019 Annual Report

2019 has been a year of growth for GCRI. We have made good progress toward our goal, announced last year, of scaling up the organization so we can do more to reduce global catastrophic risk. We have also continued work on our ongoing global catastrophic risk projects. We are especially happy with the advising and collaboration program we ran this year, through which we helped many people advance their work on global catastrophic risk while demonstrating how GCRI can operate at a larger scale.

Last year, we found ourselves at a turning point. We had established ourselves as a leading global catastrophic risk research institute, with a distinctive ability to connect academic scholarship with professional practice. However, because GCRI was relatively small, we did not have the capacity to do all of the vital work we would like to do. In a series of blog posts last year, we outlined our plans to increase our capacity for work to reduce global catastrophic risk. This post is an update on the progress we made this year and an outline of our plans for the future.

2019 Accomplishments

GCRI made substantial progress this year on our plans (published one year ago) for work on global catastrophic risk topics and organization development.

GCRI’s plans for global catastrophic risk research covered the seven topics GCRI is actively working on: the aftermath of global catastrophe, artificial intelligence, cross-risk evaluation & prioritization, nanotechnology, nuclear war, risk & decision analysis, and solutions & strategy. Our 2019 funding has mainly been for work on AI, so that has been our primary focus. We have also completed some work on cross-risk evaluation & prioritization, risk & decision analysis, nuclear war, and solutions & strategy. We did not have the chance to work on the aftermath of global catastrophe or nanotechnology this year, though we continue to believe that these are important topics.

GCRI’s plans for organization development centered on scaling up GCRI by raising more money and improving our organizational capacity. We have been able to do some of both. We were able to raise enough money to bring on our Director of Communications, Robert de Neufville. We hope to have enough to expand further soon. We also improved our organizational capacity by successfully managing a complex set of projects and interacting with colleagues around the world. We learned a lot from this experience and are ready to expand our capacity even more in upcoming years.

Publications

GCRI has five publications so far in 2019:

Risk-risk tradeoff analysis of nuclear explosives for asteroid deflection, in Risk Analysis. This paper analyzes a decision that poses a tradeoff between two global catastrophic risks. Nuclear explosives could reduce asteroid risk by deflecting earthbound asteroids away, but nuclear deflection programs could inadvertently increase the risk of nuclear war. Asteroid risk may be much smaller than nuclear war risk, but nuclear deflection may still bring a net decrease in total risk. This is an important example of GCRI’s work on cross-risk evaluation & prioritization and risk & decision analysis, published in the top journal in the field of risk analysis.
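The tradeoff logic can be made concrete with a minimal sketch. All numbers below are hypothetical illustrations chosen for clarity; they are not figures from the paper:

    # Minimal sketch of a risk-risk tradeoff calculation.
    # All probabilities are hypothetical, not figures from the paper.

    p_asteroid = 1e-8    # assumed annual probability of a catastrophic impact
    p_nuclear = 1e-3     # assumed annual probability of nuclear war

    risk_removed = 0.5 * p_asteroid   # suppose deflection halves asteroid risk
    risk_added = 1e-6 * p_nuclear     # suppose the program adds a tiny nuclear increment

    net_change = risk_added - risk_removed
    print(f"Net change in total annual risk: {net_change:+.1e}")
    # Negative under these assumptions, meaning a net decrease in total risk.
    # The sign flips if the nuclear increment is large enough, which is why
    # the tradeoff requires careful quantitative analysis.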

Lessons for artificial intelligence from other global risks, in The Global Politics of Artificial Intelligence (an edited volume from CRC Press). This paper draws on insights from other global risks to inform the study of AI risk. Four risks are included: biotechnology, nuclear war, global warming, and asteroids. The paper contains a range of insights and shows how the study of new and emerging risks can draw on lessons from more established risks.

Preparing for the unthinkable, in Science. This is a review of Bryan Walsh’s new book End Times for one of the top journals in the world. End Times provides a good introduction to the field of global catastrophic risk, including the work of GCRI, written by a veteran science journalist.

The challenge of analyzing global catastrophic risks, in Decision Analysis Today. This short article provides an overview of some of the difficulties of making sound decisions about global catastrophic risk, including the analytical challenge of quantifying the risks and the institutional challenge of getting decision-makers to factor the analysis into their decisions. It is written for an audience of researchers and professionals in the field of decision analysis.

Why catastrophes can change the course of humanity, in BBC Future. This essay for a general audience summarizes themes in the recent paper Long-Term Trajectories of Human Civilization.

Those who follow GCRI’s work might notice that we have fewer publications so far this year than we have had in the past. This is due in part to natural variation in the timing of publications. The papers we worked on last year were all published last year, except the Risk Analysis paper listed above, while many of the papers we worked on this year are still going through the peer review process. We have also been writing papers more slowly for the last couple of years in order to ensure our work meets a high standard of quality. We constantly strive to balance the quantity and quality of our work. We now believe that we may have been too cautious and that we should be able to maintain a high standard while getting our ideas out more quickly. In addition, scaling up has been a learning process for us. In particular, this was Robert de Neufville’s first year working full-time for GCRI, and it took him some time to get up to speed. He has several works in progress that we expect will be published over the upcoming year.

Outreach

GCRI’s outreach efforts link our research to actual risk reduction. By reaching out to decision-makers and stakeholders, we can share insights from our research that they can use to reduce risks more effectively. Additionally, talking with decision-makers and stakeholders helps us ensure our research is relevant to them. Publications are one form of outreach. We also make a dedicated effort to reach out directly to key audiences.

One way we do this is with formal speaking engagements in a variety of settings. In 2019, we spoke at the 2019 International Academy of Astronautics Planetary Defense Conference (on the asteroid and comet threat), the This Is Not a Drill Journalism Workshop (on nuclear weapons), the Beneficial AGI 2019 conference (on artificial intelligence), and the conference Effective Altruism Global San Francisco 2019. We will also host a global catastrophic risk session at the 2019 Annual Meeting of the Society for Risk Analysis. Additionally, we spoke at events hosted by the UC Berkeley Center for Human-Compatible Artificial Intelligence, the University of Warwick Integrative Synthetic Biology Centre, the Princeton University Global Systemic Risk group (the event was co-hosted with the Stockholm Resilience Centre), and Effective Altruism Philadelphia/University of Pennsylvania. For details, please see our speaking engagements page.

Additionally, much of our outreach is done informally in private conversations, and therefore cannot be reported in full, especially in public writeups such as this one.

Like many think tanks, GCRI does a fair amount of outreach to governments. (GCRI actually focuses less on government outreach than most think tanks do, although it is still a significant focus for us.) In 2019, we were involved in conversations about the design of the International Panel on Artificial Intelligence (since renamed the Global Partnership on AI) proposed by the governments of France and Canada. We were able to make valuable contributions because of our expertise in AI and our familiarity with the similar Intergovernmental Panel on Climate Change. We were also involved in conversations about the implications of AI for policy on nuclear weapons and environmental protection, and the implications of nuclear weapons for asteroid and comet policy. We were able to make valuable contributions on these topics because of our unique expertise across the range of global catastrophic risks. Our outreach efforts took us to events at the United Nations (the conference of the Nuclear Non-Proliferation Treaty), the Government of Sweden’s residence in New York, and elsewhere. Finally, we also participated in ongoing conversations with other global catastrophic risk organizations about appropriate policy and government outreach strategy.

We also did some outreach to private sector organizations. Much of this is tied to our ongoing work on AI. While government interest in AI is increasing, AI development is still heavily driven by the private sector. Among other things, we participated in conversations about how to encourage private organizations—and in particular for-profit corporations whose financial interests may diverge from the public good—to develop AI in safe and socially beneficial ways.

Community Support

A major focus for GCRI in 2019 has been supporting the broader global catastrophic risk community. This is something we’ve always done, but in 2019 we scaled it up via a formal advising and collaboration program.

As documented here, our advising and collaboration program has been a great success. We interacted with more than 50 people from around the world, ranging from undergraduates to senior scholars and professionals. We hosted 40 one-on-one advising sessions, three group advising sessions, five thematic group calls, and one in-person meeting. We also made 27 private introductions to connect program participants to each other and to other people in our networks. We were able to do all this for a very low cost. Our primary expense was a $150 subscription to a professional phone/VoIP platform.

The success of our advising and collaboration program demonstrated the value of GCRI’s expertise and GCRI’s ability to collaborate with people around the world. The response to the program demonstrated that there is a large unmet demand for support for global catastrophic risk careers, especially among people in places where there is not much of a global catastrophic risk community. We plan to continue this program in some form in future years.

Organization Development

GCRI’s main organizational development priority is to scale up so that we can have a larger overall impact. To that end, we have focused on fundraising to support a larger staff and improving our internal operations to manage a larger team (including both staff members and external collaborators).

Our fundraising in 2019 was generally successful, though we have not yet reached our overall fundraising goals. We are grateful for the support of Gordon Irlam, the Survival & Flourishing Fund, and many other generous donors. Thanks to them, GCRI’s overall financial position continues to improve. For further details, see the fundraising section below.

We had a number of opportunities to improve our internal operations capacity in 2019. First, in 2019 we received funding (especially this) to work on a larger and more complex set of projects than we had in previous years, which challenged us to improve our project management capacity. Second, our advising and collaboration program (described above and here) meant that GCRI interacted on a regular basis with a large number of people, which gave us experience operating at a larger scale. Third, throughout the year, we consulted extensively with our advisors and other knowledgeable individuals about how GCRI can grow and improve as an organization. As a result of this experience, we are confident that we can maintain our high quality of work as GCRI expands.

2020 Plans

GCRI does not plan any major changes in direction relative to the plans we published last year. We plan to continue making progress on our seven global catastrophic risk topics and scaling up our organization.

Our specific focus will depend partly on what new funding we receive. Our current funding leads are primarily for AI and nuclear weapons. Regardless of the new funding we receive, we plan to continue the work we’ve been doing in 2019, which has mainly been on AI. We want to maintain the momentum we have developed and finish the work we have in progress. Our ongoing AI work is largely as described here. Our leads on nuclear weapons primarily relate to our work on nuclear war risk analysis, as described here. We would especially appreciate funding to support our work on cross-risk evaluation & prioritization, but we have found that most funding tends to be for work on specific risks.

We also plan to spend some time on another round of our advising and collaboration program. The program was quite successful this past year and we have already been approached by people interested in participating in a new round. The scale of next year’s program will depend in part on the funding available for it, but we plan to continue the program in some form regardless of whether we receive any new funding for it.

Finally, we will continue to focus on fundraising and improving our capacity to operate at a larger scale. Our fundraising plans are detailed below. We plan to improve our operational capacity in two main ways. First, we seek to improve our project management as we take on a more complex mix of projects with a larger team. Second, we seek to improve our balance of in-depth research, outreach, and publishing, all of which are primary activities for GCRI.

Fundraising

GCRI is currently operating on an annual budget of approximately $250,000. Our current reserves are enough for us to maintain operations through the beginning of 2021. We currently seek to raise up to $1.5 million to expand our operations and maintain them further into the future. We gratefully welcome any support. Prospective contributors can visit our donate page or contact me directly.

Thanks to increased funding over the past year, we were able to hire GCRI’s Director of Communications, Robert de Neufville. Robert has contributed widely to GCRI activities, including the advising and collaboration program and our research on social and political dimensions of AI. (One paper Robert contributed to has already been published: Lessons for artificial intelligence from other global risks. Robert has additional papers that will likely be published over the upcoming year.) Having Robert on board has significantly increased GCRI’s capacity. We aim to expand further in 2020.

We are well-positioned to expand further, given the funds to do so. We are already in touch with multiple people who could step in and contribute. They include both close long-term collaborators and people we’ve connected with recently via our 2019 advising and collaboration program. They range from junior to senior people and have a mix of different types of global catastrophic risk expertise. Furthermore, our networks could help us identify additional contributors as needed, just as they have already helped identify people for our advising and collaboration program.

Conclusion

Global catastrophic risks are large, complex, and urgent. A year ago, we announced our intention to scale up so that we could do more to address them. Over the past year, we made considerable progress toward this goal. We expanded our team, published work in top journals such as Science and Risk Analysis, and ran a successful advising and collaboration program to support talented people around the world. GCRI depends on donations to continue our work. With your support, we are poised to accomplish even more in 2020 and beyond.

Note: This page was originally published as “Summary of 2019-2020 GCRI Accomplishments, Plans, and Fundraising”.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.