2021 Annual Report

2021 has been a good year for GCRI. Our productivity is up relative to previous years, boosted by a growing team and a rich network of outside collaborators. Our work over the past year is broadly consistent with the plans we outlined one year ago. We have adjusted well to the new realities of the COVID-19 pandemic, aided by the fact that GCRI was designed for remote collaboration from the start. Because of the pandemic, there is a sense in which no one has had a truly great …

Summary of the 2021 Advising and Collaboration Program

In May, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of our successful 2019 and 2020 Advising and Collaboration Programs. We anticipate conducting future iterations of the program in 2022 and beyond. The 2021 Program was made possible by generous support from Gordon Irlam and the Survival and Flourishing Fund.

The GCRI Advising and Collaboration Program is an opportunity for anyone interested in global catastrophic risk to get more involved in the field. There is practically no barrier to entry …

Collaborative Publishing with GCRI

Global catastrophic risk is a highly complex, interdisciplinary topic. It benefits from contributions from many people with a variety of backgrounds. For this reason, GCRI emphasizes collaborative publishing. We publish extensively with outside scholars at all career points, including early-career scholars who are relatively new to the field, as well as mid-career and senior scholars at other organizations who bring complementary expertise.

This post describes our approach to collaborative publishing and documents our collaborative publications. Researchers interested in publishing with GCRI should visit our get involved page. The …

The Ethics of Sustainability for Artificial Intelligence

View the paper “The Ethics of Sustainability for Artificial Intelligence”

Access the data used in the paper.

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …

Artificial Intelligence Needs Environmental Ethics

View the paper “Artificial Intelligence Needs Environmental Ethics”

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …

Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk

GCRI runs a recurring Advising and Collaboration Program in which we connect with people at all career points who are interested in getting more involved in global catastrophic risk. Through that program, I have had the privilege of speaking with many people to share my experience in the field and help them find opportunities to advance their careers in global catastrophic risk. It has been an enriching experience, and I thank all of our program participants.

Many of the people in our program are students and early-career professionals. …

Artificial Intelligence, Systemic Risks, and Sustainability

View the paper “Artificial Intelligence, Systemic Risks, and Sustainability”

Artificial intelligence is increasingly a factor in a range of global catastrophic risks. This paper studies the role of AI in two closely related domains. The first is systemic risks, meaning risks involving interconnected networks, such as supply chains and critical infrastructure systems. The second is environmental sustainability, meaning risks related to the natural environment’s ability to sustain human civilization on an ongoing basis. The paper is led by Victor Galaz, Deputy Director of the Stockholm Resilience Centre, …

Corporate Governance of Artificial Intelligence in the Public Interest

View the paper “Corporate Governance of Artificial Intelligence in the Public Interest”

OCTOBER 28, 2021: This post has been corrected to fix an error.

Private industry is at the forefront of AI technology. About half of artificial general intelligence (AGI) R&D projects, including some of the largest ones, are based in corporations, which is far more than in any other institution type. Given the importance of AI technology, including for global catastrophic risk, it is essential to ensure that corporations develop and use AI in appropriate ways. …

June Newsletter: AI Ethics & Governance

Dear friends,

This month, GCRI announces two new research papers. First, “Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence”, led by GCRI Research Associate Andrea Owe, addresses the current state of the treatment of nonhumans across the field of AI ethics. The paper finds limited existing attention and calls for more. The paper speaks to major themes in AI ethics, such as the project of aligning AI to human (or nonhuman) values. Given the profound current and potential future impacts of AI technology, how nonhumans …

Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence

View the paper “Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence”

In the ethics of artificial intelligence, a major theme is the challenge of aligning AI to human values. This raises the question of the role of nonhumans. Indeed, AI can profoundly affect the nonhuman world, including nonhuman animals, the natural environment, and the AI itself. Given that large parts of the nonhuman world are already under immense threats from human affairs, there is reason to fear potentially catastrophic consequences should AI R&D fail …
