Climate Change, Uncertainty, and Global Catastrophic Risk

View the paper “Climate Change, Uncertainty, and Global Catastrophic Risk”

Is climate change a global catastrophic risk? Warming temperatures are already causing a variety of harms around the world, some quite severe, and these harms are projected to worsen as temperatures increase. However, despite the massive body of research on climate change, the potential for extreme global harms remains highly uncertain and controversial. This paper addresses the question by examining the theoretical definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. …


On the Intrinsic Value of Diversity

View the paper “On the Intrinsic Value of Diversity”

Diversity is an important ethical concept. It’s also relevant to global catastrophic risk in at least two ways: the risk of catastrophic biodiversity loss and the need for diversity among people working on global catastrophic risk. It’s additionally relevant to scenarios involving extreme good, such as in well-designed advanced AI. However, the ethics of diversity has been remarkably understudied. To help address the full range of issues involving diversity, this paper presents a foundational study of the …


2023 Annual Report

2023 was a year of learning and transition for GCRI, and likewise a relatively quiet year for us. A year ago, we lost two team members, McKenna Fitzgerald and Andrea Owe, leaving the GCRI team with just its two co-founders, Seth Baum and Tony Barrett. We certainly miss the excellent contributions of our former team members. Nonetheless, in our newly streamlined situation, we have taken the opportunity to reinvest in ourselves and recalibrate the direction of our activities.

In our 2022 Annual Report, we described our …


Public Health and Nuclear Winter: Addressing a Catastrophic Threat

View the paper “Public Health and Nuclear Winter: Addressing a Catastrophic Threat”

A large nuclear war may cause severe global environmental disruption, which is commonly known as nuclear winter. The effects may be catastrophic for human health. The field of public health has substantial capacity to understand and mitigate these harms, but it has thus far done little. Therefore, this paper outlines a public health research and policy agenda to address the threat of nuclear winter. The paper is co-authored by Seth Baum of GCRI and …


2022 Annual Report

Our work in 2022 has taken an unexpected turn. We began the year focused on a series of research projects. Then, in late February, Russia invaded Ukraine, creating a historic nuclear crisis. This began a series of events that put global catastrophic risk into the news. The northern summer saw major extreme weather events in many locations, thrusting climate change to the forefront. Most recently, the dramatic collapse of the cryptocurrency company FTX brought a different sort of news coverage. The philanthropic arm of FTX …


Summary of the 2022 Advising and Collaboration Program

In May, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of our successful 2019, 2020, and 2021 Advising and Collaboration Programs. The 2022 Program was made possible by continued support from Gordon Irlam.

The GCRI Advising and Collaboration Program is an opportunity for anyone interested in global catastrophic risk to get more involved in the field. There is practically no barrier to entry in the program: the only thing people need …


Space Expansion Must Support Sustainability – On Earth and in Space

View the article “Space Expansion Must Support Sustainability – On Earth and in Space”

This article, published with the Royal United Services Institute, discusses the role of sustainability as human activities expand into outer space. The article illustrates how a framework for space expansion is being set right now, and how this framework risks extending unsustainable practices and paradigms into space. Consequently, global civilization risks wasting immense amounts of resources and, at worst, failing to sustain humanity. In response, the article suggests five points of emphasis for a robust sustainability …


From AI for People to AI for the World and the Universe

View the paper “From AI for People to AI for the World and the Universe”

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …


Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk

GCRI runs a recurring Advising and Collaboration Program in which we connect with people at all career points who are interested in getting more involved in global catastrophic risk. Through that program, I have had the privilege of speaking with many people to share my experience in the field and help them find opportunities to advance their careers in global catastrophic risk. It has been an enriching experience, and I thank all of our program participants.

Many of the people in our program are students and early-career professionals. …


The Case for Long-Term Corporate Governance of AI

In a new post on the Effective Altruism Forum, GCRI’s Seth Baum and Jonas Schuett from the Legal Priorities Project make The case for long-term corporate governance of AI. Baum and Schuett make three main points in their post. First, the long-term corporate governance of AI, which they define as the corporate governance of AI that could affect the long-term future, is an important area of long-term AI governance. Second, corporate governance of AI has been relatively neglected by communities that focus on long-term AI …
