Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities

The concept of global catastrophic risk is customarily defined in human terms. Details vary, but a global catastrophe is almost always regarded as something bad that happens to humans. In moral philosophy, however, it is widely held that things that happen to nonhumans can also be bad, and likewise for good things. In some circumstances, whether and how nonhumans are valued may make the difference between extremely good and catastrophically bad outcomes for nonhumans. This raises the …

From AI for People to AI for the World and the Universe

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …

The Ethics of Sustainability for Artificial Intelligence

Access the data used in the paper. 

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …

Artificial Intelligence Needs Environmental Ethics

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …

Artificial Intelligence, Systemic Risks, and Sustainability

Artificial intelligence is increasingly a factor in a range of global catastrophic risks. This paper studies the role of AI in two closely related domains. The first is systemic risks, meaning risks involving interconnected networks, such as supply chains and critical infrastructure systems. The second is environmental sustainability, meaning risks related to the natural environment’s ability to sustain human civilization on an ongoing basis. The paper is led by Victor Galaz, Deputy Director of the Stockholm Resilience Centre, …

Deep Learning and the Sociology of Human-Level Artificial Intelligence

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all …

Medium-Term Artificial Intelligence and Society

Discussion of artificial intelligence tends to focus on either near-term or long-term AI. That includes some contentious debate between “presentists” who favor attention to the near-term and “futurists” who favor attention to the long-term. Largely absent from the conversation is any attention to the medium-term. This paper provides dedicated discussion of medium-term AI and its accompanying societal issues. It focuses on how medium-term AI can be defined and how it relates to the presentist-futurist debate. It builds on …

Lessons for Artificial Intelligence from Other Global Risks

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can potentially learn a lot from the study of similar, better-studied risks. GCRI’s new paper applies lessons from four other risks to the study of AI risk: biotechnology, nuclear weapons, global warming, and asteroids. The paper is co-authored by GCRI’s Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior …

October Newsletter: The Superintelligence Debate

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As …

Countering Superintelligence Misinformation

In any public issue, having the right information can help us make the right decisions. This holds in particular for high-stakes issues like global catastrophic risks. Unfortunately, incorrect information, or misinformation, is sometimes spread. When this happens, it is important to set the record straight.

This paper studies misinformation about artificial superintelligence, which is AI that is much smarter than humans. Current AI is not superintelligent, but if superintelligence is built, it could have massive consequences. Misinformation about superintelligence could …
