2022 Annual Report

Our work in 2022 has taken an unexpected turn. We began the year focused on a series of research projects. Then, in late February, Russia invaded Ukraine, creating a historic nuclear crisis. This set off a series of events that put global catastrophic risk in the news. The northern summer saw major extreme weather events in many locations, thrusting climate change to the forefront. Most recently, the dramatic collapse of the cryptocurrency company FTX brought a different sort of news coverage. The philanthropic arm of FTX was, for a brief time, a major funder of work on global catastrophic risk. Its collapse sparked media interest in certain institutions and communities involved in global catastrophic risk.

Along the way, we came to appreciate the need for voices helping the public and policymakers understand global catastrophic risk. In recent years, the overall field of global catastrophic risk has grown, but there remain very few people in the field who are engaging the public. We see a particular need for people who can provide perspective on major ongoing events as they relate to global catastrophic risk and who can articulate constructive solutions for reducing the risk. If there were more voices articulating these sorts of themes, it could help to motivate wider action to reduce the risk.

Therefore, while continuing our research, we have also spent some time exploring the role we might play in public discussions of global catastrophic risk. No definitive decisions have been made yet, but our current sentiment is that this sort of public engagement work should become a larger point of focus for GCRI. The broad expertise we have built up over the years is valuable for contributing to the complex and multifaceted public issues relevant to global catastrophic risk. We may also be well positioned to explain the ideas in widely accessible terms and to handle the challenges of a more public profile, including its darker side, such as the online harassment that is all too common these days. We look forward to continuing to explore this direction of work in 2023 and sharing it with you along the way.

This post summarizes what GCRI accomplished in 2022, what we plan to do in 2023, and the funding we are seeking to execute these plans. GCRI posted similar year-end summaries in 2018, 2019, 2020, and 2021.

Jump to:
2022 Accomplishments:
* Research
* Outreach
* Community Support
* Organization Development

2023 Plans:
* Research
* Outreach
* Community Support
* Organization Development

Fundraising
Conclusion

2022 Accomplishments

Throughout 2022, GCRI made steady progress in each of our primary focus areas: research, outreach, community support, and organization development.

Research

GCRI’s research in 2022 has focused heavily on AI risk and related ethics issues, thanks to continued funding from Gordon Irlam. We have also conducted some research on other global catastrophic risk topics.

GCRI has nine new publications to report:

Assessing natural global catastrophic risks, in Natural Hazards. This paper argues that prior theoretical research has understated the risk from natural threats, such as volcanic eruptions or near-Earth objects. The paper presents analysis of six natural global catastrophic risks, finding potential for several to pose a high ongoing risk to humanity.

Greening the universe: The case for ecocentric space expansion, in the forthcoming book Reclaiming Space: Progressive and Multicultural Visions of Space Exploration from Oxford University Press. This paper presents a case for ecocentric ethics, the idea that all living beings have moral value, not just humans, and through this lens describes a vision in which human civilization works to establish flourishing ecological communities across the universe.

Pandemic refuges: Lessons from two years of COVID-19, in Risk Analysis. The paper studies two jurisdictions with especially low spread of COVID-19: China and Western Australia. Although the two differ in many ways, including geography, political structure, and population density, both have a high degree of political centralization and capacity for isolation. During the pandemic, both were highly motivated to avoid pathogen spread. Together, the cases provide a more nuanced understanding of the sorts of jurisdictions that can succeed as refuges during pandemics and perhaps also other global catastrophe scenarios.

Nonhuman value: A survey of the intrinsic valuation of natural and artificial nonhuman entities, in Science and Engineering Ethics. This paper, written in collaboration with Professor Mark Coeckelbergh of the University of Vienna, discusses the intrinsic value, or inherent value, of nonhumans, including natural entities, such as ecosystems or biological life, and artificial entities, such as art or technology. The paper also distinguishes between “subject-based” and “object-based” moral theories. In “subject-based” theories, moral judgments are made based on some aggregate of the views of some population of moral subjects. In “object-based” theories, moral judgments are made based on what is good or bad for some type of moral object. These dimensions yield a range of moral theories that can be applied to wide-reaching issues such as global warming, deforestation, the creation of AI, and expansion into the universe.

Book review: The Precipice: Existential Risk and the Future of Humanity, in Risk Analysis. This paper reviews the new book The Precipice by Toby Ord, which provides a wide-ranging survey of topics related to global catastrophic risk. Compared to other books on global catastrophic risk, The Precipice stands out for its depth of discussion, its quality of scholarship, and its readability. However, the book errs in its emphasis on only the most extreme global catastrophe scenarios, its strong belief in the resilience of civilization, and its use of quantitative risk analysis.

How to evaluate the risk of nuclear war, in BBC Future. This article discusses the quantitative analysis of nuclear war risk. It is written in the context of the Russian invasion of Ukraine and also discusses more general analytical issues, such as those addressed in GCRI’s nuclear war research.

Space expansion must support sustainability – On Earth and in space, in RUSI. This article, published with the Royal United Services Institute, discusses the role of sustainability in the expansion of human activities into outer space. The article argues that a framework for space expansion is being set right now, and that this framework risks carrying unsustainable practices and paradigms into space. Consequently, global civilization risks wasting immense amounts of resources and, at worst, failing to sustain humanity. In response, the article suggests five points of emphasis for a robust sustainability policy for space expansion.

Early reflections and resources on the Russian invasion of Ukraine, in the Effective Altruism Forum. This article presents analysis of the Russian invasion of Ukraine written for a global catastrophic risk audience. The article discusses nuclear war risk, the changing geopolitical landscape, and recommendations for personal preparedness and philanthropy. It also describes the author’s own activities in addressing the immediate risk and presents a compilation of resources for learning more about the war.

Doing better on climate change, in the Effective Altruism Forum. This article presents a wide-ranging discussion of how to factor climate change into efforts to make the world a better place. The article relates climate change to other global catastrophic risks and related issues. It emphasizes the value of reducing greenhouse gas emissions and explains which activities are effective at reducing emissions.

Outreach

GCRI has conducted a variety of outreach activities over the past year.

First, we have conducted outreach to policymakers and related bodies. This outreach has focused on three initiatives:

P2863, Recommended Practice for Organizational Governance of Artificial Intelligence, an initiative of the Standards Association of the Institute of Electrical and Electronics Engineers (IEEE). GCRI Executive Director Seth Baum and Research Associate Andrea Owe are both members of the expert working group tasked with formulating the standard. Their work is oriented toward encouraging the IEEE standard to appropriately address catastrophic risks, environmental issues, nonhumans, and related topics.

AI Risk Management Framework, an initiative of the US National Institute of Standards and Technology (NIST). GCRI is supporting a multi-organization outreach effort to ensure that the Framework appropriately addresses catastrophic risk and related topics. This outreach effort is led by GCRI Director of Research Tony Barrett, working in his capacity as a Non-Resident Fellow with the UC Berkeley Center for Long-Term Cybersecurity (CLTC) and as a Senior Policy Analyst at the Berkeley Existential Risk Initiative (BERI).

National Artificial Intelligence Research and Development Strategic Plan, 2022 update, an initiative of the US Office of Science and Technology Policy (OSTP). In early 2021, the National AI Initiative Act of 2020 became law as part of the National Defense Authorization Act. The National AI Initiative Act calls for regular updates to the National AI R&D Strategic Plan. To inform the 2022 update, OSTP released a Request for Information, to which GCRI responded.

Second, we have done some outreach to the public. The primary focus of this work was the war in Ukraine, in particular the risk of it escalating to nuclear war. This outreach involved publishing a series of analyses on social media and speaking with journalists about them.

Finally, GCRI has 13 new presentations to report this year:

Can thriving online include thriving on Mastodon?, an online panel discussion including Seth Baum for the Columbia Earth Institute, 12 December.

Ukraine and nuclear war risk, a remote talk given by Seth Baum at the EAGxVirtual Conference, 22 October.

Deep green ethics and catastrophic risk, a remote talk given by Andrea Owe to Effective Altruism Nordics, 31 August.

Global catastrophic risk: Starting small, a talk given by McKenna Fitzgerald to the Lead for America Hometown Fellowship 2022 Cohort, 16 August, Washington, DC.

Global and long-term implications of asteroid impacts, a remote talk given by Seth Baum at the Asteroid Impact Global Effects Online Virtual Technical Interchange Meeting hosted by the NASA Planetary Defense Coordination Office, 13 July.

An overview of global catastrophic risk, a remote talk given by Seth Baum at the Swiss Existential Risk Initiative (CHERI) 2022 Summer Research Program Opening Event, 5 July.

Ethics for preventing global catastrophe, a remote talk given by McKenna Fitzgerald to Let’s Phi, 21 April.

Limits of the value alignment paradigm, a remote talk given by Seth Baum to Effective Altruism UC Berkeley, 13 April.

Nonhuman-compatible AI, a seminar delivered remotely by Seth Baum to the UC Berkeley Center for Human-Compatible AI, 23 February.

Deepening AI ethics: AI and why we are in an environmental crisis, a remote talk given by Andrea Owe to the Chalmers AI Research Centre, 22 February.

Digitalisation and sustainability transitions, an online panel discussion including Andrea Owe for the Sustainability Frontiers Conference, hosted by the Stockholm Environment Institute and Centre for Sustainability Studies, Lund University, 15 February.

The challenges of addressing rare events and how to overcome them, a remote talk given by Seth Baum at the workshop Anticipating Rare Events of Major Significance, hosted by the US National Academies of Sciences, Engineering, and Medicine, 17 & 21 December.

The future of catastrophic risk, an online panel discussion including Seth Baum for the University of Warwick, 9 December.

Community Support

GCRI continues to believe in the importance of fostering a broad network of professionals working on global catastrophic risk. To that end, we have conducted a variety of activities to support the global catastrophic risk community.

Our fourth annual Advising and Collaboration Program went well. We were able to connect with a diverse group of people from around the world. We received 73 inquiries in response to our open call and spoke with 54 respondents. We made 22 professional network introductions. Participants spanned all career stages, from undergraduates to senior professionals. They also had a wide range of academic and professional backgrounds and hailed from 26 countries around the world. Most participants were interested in AI ethics, AI policy, or nuclear war risk.

We have also completed our second annual Fellowship Program. The program recognizes individuals who have made significant contributions to addressing global catastrophic risk in collaboration with GCRI. This year, we have four Fellows. They have worked with us on topics related to nuclear war risk, AGI scenario mapping, and public health. They include one participant in each of the 2021 and 2022 iterations of our Advising and Collaboration Program.

We have continued to support other organizations and programs working on global catastrophic risk and related issues. McKenna Fitzgerald continues to serve as a mentor for women, non-binary, and transgender people interested in global catastrophic risk through Magnify Mentoring (formerly known as WANBAM). She has also begun working with Women of Color Advancing Peace, Security, and Conflict Transformation (WCAPS), providing organizational support and mentorship. GCRI also continues to assist with other global catastrophic risk mentorship programs, such as those at CHERI and CERI.

Finally, we have published two formal statements that advance best practices within the field of global catastrophic risk. First, the GCRI Statement on Pluralism in the Field of Global Catastrophic Risk articulates the value of supporting diverse views among those working on global catastrophic risk. It was prompted by the range of views found within work published by GCRI. Second, the GCRI Statement on the Ethics of Funding Sources addresses the need for recipients of funding to evaluate the merits of prospective funders. It was prompted by the collapse of the cryptocurrency company FTX, which, for a brief stretch of time, had been a major funder of work on global catastrophic risk.

Organization Development

Throughout 2022, GCRI has had a significant focus on organization development. This has concentrated on our assessment of opportunities for public outreach, as explained above. Thanks to this work, we now have a much clearer understanding of the opportunities, their importance, and the challenges they entail. This will guide us throughout our future work.

Another development is that, at the end of this year, the GCRI team will get smaller. In 2021, we were able to hire ethicist Andrea Owe as a Research Associate. After two wonderful years with GCRI, Andrea will be leaving to pursue other activities. She has been involved with GCRI since 2018, when she took on an internship with GCRI as part of her Master’s degree in Philosophy. Since then, Andrea has contributed immensely to research on AI ethics, space ethics, and environmental ethics. Her expertise in these subjects resulted in six publications, including Nonhuman value: A survey of the intrinsic valuation of natural and artificial nonhuman entities; Greening the universe: The case for ecocentric space expansion; and Moral consideration of nonhumans in the ethics of artificial intelligence. We are grateful to have worked alongside such a talented researcher over the past few years, and we wish her all the best in her future endeavors.

2023 Plans

Our plans for 2023 are more open-ended than our plans from 2022 and 2021. We are increasingly attentive to the need for high-quality outreach on global catastrophic risk, especially to provide perspective on unfolding news events such as the Russia-Ukraine war. GCRI is unusual in our ability to work across global catastrophic risks and likewise contribute to a wide range of public and policy conversations about the risks. We may therefore allocate a larger portion of our activity to outreach in 2023 and beyond.

Research

Research has traditionally been GCRI’s primary focus. In 2023, that may change as we pursue more outreach. Nonetheless, we will continue to conduct some research. This includes several active research projects, among them the AI projects funded by Gordon Irlam in late 2021. We may also consider select new research projects, especially high-value projects that we are uniquely qualified to pursue.

Outreach

We currently expect that outreach will become a significantly larger portion of our portfolio in 2023 and beyond. This will likely include an emphasis on public conversations about topics related to global catastrophic risk, such as those in popular media and on social media. We additionally anticipate an emphasis on practical solutions for reducing the risk. This can include general concepts for solutions as well as ideas tied to specific issues such as the Russia-Ukraine war. This outreach work would draw on GCRI’s longstanding work on solutions & strategy.

Concurrently, we plan to continue some policy outreach and related activities. This will include ongoing AI governance initiatives led by IEEE and NIST, both of which we are already involved in. We will also pursue policy outreach opportunities as they arise, especially to leverage synergies with our public media outreach.

Community Support

We expect to continue supporting the field of global catastrophic risk in a variety of ways. First, we will continue supporting individuals who are seeking to get more involved in global catastrophic risk. This will occur either via new rounds of our Advising and Collaboration Program and Fellowship Program or in some other format. Second, we will continue supporting other organizations who are helping new people get involved in global catastrophic risk. Third, we will continue to provide leadership to the field of global catastrophic risk, such as via our official statements on topics of relevance to the field.

Organization Development

Our organization development work is currently focused on two areas. First, we are assessing how best to navigate and contribute to changes in the landscape of global catastrophic risk and related matters. Second, we are exploring new directions for outreach. We expect that both of these areas of activity will continue in 2023. The outreach work in particular has potential to be a major change in the focus of GCRI. This will merit ongoing assessment to ensure that we are making the best use of our abilities and opportunities.

Fundraising

GCRI currently operates on an annual budget of approximately $350,000. We have enough reserves to continue operating through early 2024.

We are currently seeking funds to extend our operations. We are not seeking funding to expand the GCRI team, though we would welcome the opportunity should it arise. Additional funding would allow us to continue working with external collaborators on various projects. We would be grateful for any support. Prospective contributors can visit our donate page or contact Seth Baum at seth [at] gcrinstitute.org.

Conclusion

Although the world continues to go through unprecedented times, including the ongoing COVID-19 pandemic and Russia-Ukraine war, GCRI still had a productive year. We feel these global events highlight precisely why we exist and why we continue to work on researching ways to reduce the risks from global catastrophes. We are grateful to the many people who have helped make this possible, including our advisees and collaborators, our colleagues at other organizations, and our funders. We look forward to continuing our work in 2023.

This post was written by
Seth Baum is Executive Director of the Global Catastrophic Risk Institute.