December Newsletter: Year in Review

Dear friends,

This year has been an important one for global catastrophic risk. The Russian invasion of Ukraine, a multitude of extreme weather events, the release of new AI systems, and the ongoing COVID-19 pandemic have either threatened global catastrophe or raised issues related to global catastrophic risk. Additionally, the recent collapse of the cryptocurrency company FTX has brought disruption and scrutiny to the field of global catastrophic risk due to FTX’s philanthropic connections to the field. As explained in this year’s Annual Report, these events have prompted …

Read More »

November Newsletter: Giving Tuesday

Dear friends,

GCRI would like to take this time to thank you for your continued support throughout 2022. Because of your help, we have been able to accomplish much throughout the year, including publishing research, hosting another successful Advising and Collaboration Program, and much more (you’ll find our summary of 2022 accomplishments in the upcoming December newsletter). Whether you subscribe to our newsletter, participate in our annual Advising and Collaboration Program, or have the means to donate, we are grateful for your generosity. To continue supporting …

Read More »

Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities

View the paper “Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities”

The concept of global catastrophic risk is customarily defined in human terms. Details vary, but a global catastrophe is almost always regarded as something bad that happens to humans. However, in moral philosophy, it is often considered that things that happen to nonhumans can also be bad—and likewise for good things. In some circumstances, whether and how nonhumans are valued may be the difference between extremely good or catastrophically bad outcomes for nonhumans. This raises the …

Read More »

March Newsletter: Implications of the War in Ukraine

Dear friends,

The Russian invasion of Ukraine is already proving to be an event of profound importance for global catastrophic risk. As detailed in the GCRI Statement on the Russian Invasion of Ukraine, the war’s implications for nuclear war risk are especially strong, but it also has implications for other risks, including climate change, pandemics, and artificial intelligence. These changes are coming from the war itself and from the accompanying shifts in global politics. We at GCRI hope that the war can reach a prompt and peaceful …

Read More »

February Newsletter: Ukraine & Pluralism

Dear friends,

We at GCRI are watching the ongoing Russian invasion of Ukraine with great concern. In addition to the grave harm being inflicted on the Ukrainian people, this invasion also constitutes a large escalation of tensions between Russia and the West and a shooting war adjacent to several NATO countries. In our judgment, this increases the risk of US-Russia or NATO-Russia nuclear war and accompanying nuclear winter. Our hearts go out to the people of Ukraine who are enduring this tragic violence. For the sake …

Read More »

Greening the Universe: The Case for Ecocentric Space Expansion

View the paper “Greening the Universe: The Case for Ecocentric Space Expansion”

One reason for focusing on global catastrophic risk is that if a global catastrophe occurs, it could prevent human civilization from accomplishing great things in the future. Arguably, some of the greatest things it could accomplish involve expansion into outer space. This paper presents an ecocentric vision for future space expansion, in which human civilization spreads flourishing ecosystems across the cosmos. The paper is part of a broader collection of visions for space exploration …

Read More »

From AI for People to AI for the World and the Universe

View the paper “From AI for People to AI for the World and the Universe”

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …

Read More »

The Ethics of Sustainability for Artificial Intelligence

View the paper “The Ethics of Sustainability for Artificial Intelligence”

Access the data used in the paper.

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …

Read More »

Artificial Intelligence Needs Environmental Ethics

View the paper “Artificial Intelligence Needs Environmental Ethics”

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …

Read More »

June Newsletter: AI Ethics & Governance

Dear friends,

This month GCRI announces two new research papers. The first, Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence, led by GCRI Research Associate Andrea Owe, addresses the current state of treatment of nonhumans across the field of AI ethics. The paper finds limited existing attention and calls for more. The paper speaks to major themes in AI ethics, such as the project of aligning AI to human (or nonhuman) values. Given the profound current and potential future impacts of AI technology, how nonhumans …

Read More »