Artificial Intelligence Needs Environmental Ethics

View the paper “Artificial Intelligence Needs Environmental Ethics”

Artificial intelligence is an interdisciplinary topic that benefits from contributions from a wide range of disciplines. This short paper calls for greater involvement from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on “trolley problem” scenarios in which the vehicle must decide …

Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk

GCRI runs a recurring Advising and Collaboration Program in which we connect with people at all career points who are interested in getting more involved in global catastrophic risk. Through that program, I have had the privilege of speaking with many people, sharing my experience in the field and helping them find opportunities to advance their careers. It has been an enriching experience, and I thank all of our program participants.

Many of the people in our program are students and early-career professionals. …

The Inaugural 2021 GCRI Fellowship Program

GCRI is delighted to announce the formation of a new Fellowship Program and the members of the inaugural 2021 GCRI Fellowship class. The Fellowship Program recognizes a select group of 12 individuals who have made exceptional contributions to addressing global catastrophic risk in collaboration with GCRI during the year 2021.

The 2021 GCRI Fellows range from undergraduates to senior professionals and hail from seven countries around the world. Their contributions include research across a diverse range of disciplines, policy outreach, program development, and more. We at …

Global Catastrophic Risk Presentations at the 2021 Society for Risk Analysis Annual Meeting

Every December, the Society for Risk Analysis holds its Annual Meeting. It is the leading professional conference for risk analysis. GCRI has participated in the Annual Meeting in most years since 2010, as detailed here.

This year, GCRI is involved in three presentations. Each presentation is led by one or more early-career researchers who connected with GCRI via this year’s Advising and Collaboration Program. GCRI is excited to provide these researchers with the opportunity to attend the SRA Annual Meeting and share their research. The research to be presented …

The Case for Long-Term Corporate Governance of AI

In a new post on the Effective Altruism Forum, GCRI’s Seth Baum and Jonas Schuett from the Legal Priorities Project make the case for long-term corporate governance of AI. They make three main points in their post. First, the long-term corporate governance of AI, which they define as the corporate governance of AI that could affect the long-term future, is an important area of long-term AI governance. Second, corporate governance of AI has been relatively neglected by communities that focus on long-term AI …

Artificial Intelligence, Systemic Risks, and Sustainability

View the paper “Artificial Intelligence, Systemic Risks, and Sustainability”

Artificial intelligence is increasingly a factor in a range of global catastrophic risks. This paper studies the role of AI in two closely related domains. The first is systemic risks, meaning risks involving interconnected networks, such as supply chains and critical infrastructure systems. The second is environmental sustainability, meaning risks related to the natural environment’s ability to sustain human civilization on an ongoing basis. The paper is led by Victor Galaz, Deputy Director of the Stockholm Resilience Centre, …

September Newsletter: AI Corporate Governance

Dear friends,

This month GCRI announces a new research paper, “Corporate governance of artificial intelligence in the public interest.” Private industry is at the forefront of AI technology, making corporate governance a vital area of activity. The paper surveys the wide range of opportunities available to improve AI corporate governance so that it better advances the public interest. It includes opportunities for people both inside and outside corporations.

The paper is co-authored by Peter Cihon of GitHub, Jonas Schuett of Goethe University Frankfurt and the Legal Priorities …

Corporate Governance of Artificial Intelligence in the Public Interest

View the paper “Corporate Governance of Artificial Intelligence in the Public Interest”

OCTOBER 28, 2021: This post has been corrected to fix an error.

Private industry is at the forefront of AI technology. About half of artificial general intelligence (AGI) R&D projects, including some of the largest ones, are based in corporations, which is far more than in any other institution type. Given the importance of AI technology, including for global catastrophic risk, it is essential to ensure that corporations develop and use AI in appropriate ways. …

August Newsletter: Collective Action on AI

Dear friends,

This month GCRI announces a new research paper, “Collective action on artificial intelligence: A primer and review.” Led by GCRI Director of Communications Robert de Neufville, the paper shows how different groups of people can work together to bring about AI outcomes that no one individual could bring about on their own. The paper provides a primer on basic collective action concepts, derived mainly from the political science literature, and reviews the existing literature on AI collective action. The paper serves to get people …

Collective Action on Artificial Intelligence: A Primer and Review

View the paper “Collective Action on Artificial Intelligence: A Primer and Review”

The development of safe and socially beneficial artificial intelligence (AI) will require collective action: outcomes will depend on the actions that many different people take. In recent years, a sizable but disparate literature has looked at the challenges posed by collective action on AI, but this literature is generally not well grounded in the broader social science literature on collective action. This paper advances the study of collective action on AI by providing a …
