Assessing the Risk of Takeover Catastrophe from Large Language Models

View the paper “Assessing the Risk of Takeover Catastrophe from Large Language Models”

Recent large language models (LLMs) have shown some impressive capabilities, which has raised concerns about their potential to cause harm. One concern is that LLMs could take over the world and cause catastrophic harm, potentially even killing everyone on the planet. However, this concern has been questioned and hotly debated. Therefore, this paper presents a careful analysis of LLM takeover catastrophe risk.

Concern about LLM takeover is noteworthy across the entire history of …

Read More »

Manipulating Aggregate Societal Values to Bias AI Social Choice Ethics

View the paper “Manipulating Aggregate Societal Values to Bias AI Social Choice Ethics”

Vote suppression, disinformation, sham elections that give authoritarians the veneer of democracy, and even genocide: all of these are means of manipulating the outcomes of elections. (Shown above: a ballot from the sham 1938 referendum for the annexation of Austria by Nazi Germany; notice the larger circle for Ja/Yes.) Countering these manipulations is an ongoing challenge. Meanwhile, work on AI ethics often proposes that AI systems use something similar to democracy. Therefore, this …

Read More »

Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities

View the paper “Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities”

The concept of global catastrophic risk is customarily defined in human terms. Details vary, but a global catastrophe is almost always regarded as something bad that happens to humans. However, in moral philosophy, it is often considered that things that happen to nonhumans can also be bad, and likewise for good things. In some circumstances, whether and how nonhumans are valued may be the difference between extremely good and catastrophically bad outcomes for nonhumans. This raises the …

Read More »

From AI for People to AI for the World and the Universe

View the paper “From AI for People to AI for the World and the Universe”

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …

Read More »

The Ethics of Sustainability for Artificial Intelligence

View the paper “The Ethics of Sustainability for Artificial Intelligence”

Access the data used in the paper. 

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …

Read More »

Artificial Intelligence Needs Environmental Ethics

View the paper “Artificial Intelligence Needs Environmental Ethics”

Artificial intelligence is an interdisciplinary topic. As such, it benefits from the contributions of a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …

Read More »

Artificial Intelligence, Systemic Risks, and Sustainability

View the paper “Artificial Intelligence, Systemic Risks, and Sustainability”

Artificial intelligence is increasingly a factor in a range of global catastrophic risks. This paper studies the role of AI in two closely related domains. The first is systemic risks, meaning risks involving interconnected networks, such as supply chains and critical infrastructure systems. The second is environmental sustainability, meaning risks related to the natural environment’s ability to sustain human civilization on an ongoing basis. The paper is led by Victor Galaz, Deputy Director of the Stockholm Resilience Centre, …

Read More »

Deep Learning and the Sociology of Human-Level Artificial Intelligence

View the paper “Deep Learning and the Sociology of Human-Level Artificial Intelligence”

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all …

Read More »

Medium-Term Artificial Intelligence and Society

View the paper “Medium-Term Artificial Intelligence and Society”

Discussion of artificial intelligence tends to focus on either near-term or long-term AI. That includes some contentious debate between “presentists” who favor attention to the near term and “futurists” who favor attention to the long term. Largely absent from the conversation is any attention to the medium term. This paper provides dedicated discussion of medium-term AI and its accompanying societal issues. It focuses on how medium-term AI can be defined and how it relates to the presentist-futurist debate. It builds on …

Read More »

Lessons for Artificial Intelligence from Other Global Risks

View the paper “Lessons for Artificial Intelligence from Other Global Risks”

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can potentially learn a lot from the study of similar, better-studied risks. GCRI’s new paper applies lessons from four other risks (biotechnology, nuclear weapons, global warming, and asteroids) to the study of AI risk. The paper is co-authored by GCRI’s Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior …

Read More »