Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities

View the paper “Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities”

The concept of global catastrophic risk is customarily defined in human terms. Details vary, but a global catastrophe is almost always regarded as something bad that happens to humans. However, in moral philosophy, it is often considered that things that happen to nonhumans can also be bad—and likewise for good things. In some circumstances, whether and how nonhumans are valued may be the difference between extremely good and catastrophically bad outcomes for nonhumans. This raises the …

March Newsletter: Implications of the War in Ukraine

Dear friends,

The Russian invasion of Ukraine is already proving to be an event of profound importance for global catastrophic risk. As detailed in the GCRI Statement on the Russian Invasion of Ukraine, the war’s implications for nuclear war risk are especially strong, but it also has implications for other risks including climate change, pandemics, and artificial intelligence. These effects stem from the war itself and from the accompanying shifts in global politics. We at GCRI hope that the war can reach a prompt and peaceful …

February Newsletter: Ukraine & Pluralism

Dear friends,

We at GCRI are watching the ongoing Russian invasion of Ukraine with great concern. In addition to the grave harm being inflicted on the Ukrainian people, this invasion also constitutes a large escalation of tensions between Russia and the West and a shooting war adjacent to several NATO countries. In our judgment, this increases the risk of US-Russia or NATO-Russia nuclear war and accompanying nuclear winter. Our hearts go out to the people of Ukraine who are enduring this tragic violence. For the sake …

Greening the Universe: The Case for Ecocentric Space Expansion

View the paper “Greening the Universe: The Case for Ecocentric Space Expansion”

One reason for focusing on global catastrophic risk is that if a global catastrophe occurs, it could prevent human civilization from accomplishing great things in the future. Arguably, some of the greatest things it could accomplish involve expansion into outer space. This paper presents an ecocentric vision for future space expansion, in which human civilization spreads flourishing ecosystems across the cosmos. The paper is part of a broader collection of visions for space exploration …

From AI for People to AI for the World and the Universe

View the paper “From AI for People to AI for the World and the Universe”

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …

The Ethics of Sustainability for Artificial Intelligence

View the paper “The Ethics of Sustainability for Artificial Intelligence”

Access the data used in the paper. 

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …

Artificial Intelligence Needs Environmental Ethics

View the paper “Artificial Intelligence Needs Environmental Ethics”

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …

BBC on Short-Termism and Long-Term Trajectories

Richard Fisher of the BBC has published a detailed and thoughtful article, “The perils of short-termism: Civilisation’s greatest threat”. The article covers the widespread tendency across contemporary society to focus on very short-term issues, as well as several efforts to promote more long-term thinking and action. It includes a detailed discussion of the research paper “Long-term trajectories of human civilization”, of which I am the lead author. The BBC article covers each of the four types of trajectories presented in the Long-term trajectories paper (and illustrated in …

Long-Term Trajectories of Human Civilization

View the paper “Long-Term Trajectories of Human Civilization”

Society today needs to pay greater attention to the long-term fate of human civilization. Important present-day decisions can affect what happens millions, billions, or trillions of years into the future. These long-term effects may be the most important factor in present-day decisions and must be taken into account. An international group of 14 scholars calls for the dedicated study of “long-term trajectories of human civilization” in order to understand long-term outcomes and inform decision-making. This new approach is presented in …

Social Choice Ethics in Artificial Intelligence

View the paper “Social Choice Ethics in Artificial Intelligence”

A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This approach is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of …
