Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities

View the paper “Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities”

The concept of global catastrophic risk is customarily defined in human terms. Details vary, but a global catastrophe is almost always regarded as something bad that happens to humans. However, in moral philosophy, it is often held that things that happen to nonhumans can also be bad, and likewise for good things. In some circumstances, whether and how nonhumans are valued may be the difference between extremely good and catastrophically bad outcomes for nonhumans. This raises the …


From AI for People to AI for the World and the Universe

View the paper “From AI for People to AI for the World and the Universe”

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …


November Newsletter: Year in Review

Dear friends,

2021 has been a year of overcoming challenges, of making the most of it under difficult circumstances. The Delta variant dashed hopes for a smooth recovery from the COVID-19 pandemic. Outbreaks have surged even in places with high vaccination rates, raising questions of when, or even whether, the pandemic will end. As we at GCRI are abundantly aware, it could be a lot worse. But it has still been bad, and we send our condolences to those who have lost loved ones. Despite the circumstances, …


The Ethics of Sustainability for Artificial Intelligence

View the paper “The Ethics of Sustainability for Artificial Intelligence”

Access the data used in the paper.

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …


Artificial Intelligence Needs Environmental Ethics

View the paper “Artificial Intelligence Needs Environmental Ethics”

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …


September Newsletter: AI Corporate Governance

Dear friends,

This month GCRI announces a new research paper, Corporate governance of artificial intelligence in the public interest. Private industry is at the forefront of AI technology, making corporate governance a vital area of activity. The paper surveys the wide range of opportunities available to improve AI corporate governance so that it better advances the public interest. It includes opportunities for people both inside and outside corporations.

The paper is co-authored by Peter Cihon of GitHub, Jonas Schuett of Goethe University Frankfurt and the Legal Priorities …


Corporate Governance of Artificial Intelligence in the Public Interest

View the paper “Corporate Governance of Artificial Intelligence in the Public Interest”

OCTOBER 28, 2021: This post has been corrected to fix an error.

Private industry is at the forefront of AI technology. About half of artificial general intelligence (AGI) R&D projects, including some of the largest ones, are based in corporations, which is far more than in any other institution type. Given the importance of AI technology, including for global catastrophic risk, it is essential to ensure that corporations develop and use AI in appropriate ways. …


August Newsletter: Collective Action on AI

Dear friends,

This month GCRI announces a new research paper, Collective action on artificial intelligence: A primer and review. Led by GCRI Director of Communications Robert de Neufville, the paper shows how different groups of people can work together to bring about AI outcomes that no one individual could bring about on their own. The paper provides a primer on basic collective action concepts, derived mainly from the political science literature, and reviews the existing literature on AI collective action. The paper serves to get people …


Collective Action on Artificial Intelligence: A Primer and Review

View the paper “Collective Action on Artificial Intelligence: A Primer and Review”

The development of safe and socially beneficial artificial intelligence (AI) will require collective action: outcomes will depend on the actions that many different people take. In recent years, a sizable but disparate literature has looked at the challenges posed by collective action on AI, but this literature is generally not well grounded in the broader social science literature on collective action. This paper advances the study of collective action on AI by providing a …


June Newsletter: AI Ethics & Governance

Dear friends,

This month GCRI announces two new research papers. First, Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence, led by GCRI Research Associate Andrea Owe, addresses the current treatment of nonhumans across the field of AI ethics. The paper finds limited existing attention and calls for more. The paper speaks to major themes in AI ethics, such as the project of aligning AI to human (or nonhuman) values. Given the profound current and potential future impacts of AI technology, how nonhumans …
