November Newsletter: Year in Review

Dear friends,

2021 has been a year of overcoming challenges and making the most of difficult circumstances. The Delta variant dashed hopes for a smooth recovery from the COVID-19 pandemic. Outbreaks have surged even in places with high vaccination rates, raising questions about when, or even whether, the pandemic will end. As we at GCRI are abundantly aware, it could be a lot worse. But it has still been bad, and we send our condolences to those who have lost loved ones. Despite the circumstances, …

Read More »

The Case for Long-Term Corporate Governance of AI

In a new post on the Effective Altruism Forum, GCRI’s Seth Baum and Jonas Schuett of the Legal Priorities Project make the case for long-term corporate governance of AI. Baum and Schuett make three main points in their post. First, the long-term corporate governance of AI, which they define as the corporate governance of AI that could affect the long-term future, is an important area of long-term AI governance. Second, corporate governance of AI has been relatively neglected by communities that focus on long-term AI …

Read More »

September Newsletter: AI Corporate Governance

Dear friends,

This month GCRI announces a new research paper, Corporate governance of artificial intelligence in the public interest. Private industry is at the forefront of AI technology, making corporate governance a vital area of activity. The paper surveys the wide range of opportunities available to improve AI corporate governance so that it better advances the public interest. It includes opportunities for people both inside and outside corporations.

The paper is co-authored by Peter Cihon of GitHub, Jonas Schuett of Goethe University Frankfurt and the Legal Priorities …

Read More »

August Newsletter: Collective Action on AI

Dear friends,

This month GCRI announces a new research paper, Collective action on artificial intelligence: A primer and review. Led by GCRI Director of Communications Robert de Neufville, the paper shows how different groups of people can work together to bring about AI outcomes that no one individual could bring about on their own. The paper provides a primer on basic collective action concepts, derived mainly from the political science literature, and reviews the existing literature on AI collective action. The paper serves to get people …

Read More »

Collective Action on Artificial Intelligence: A Primer and Review

View the paper “Collective Action on Artificial Intelligence: A Primer and Review”

The development of safe and socially beneficial artificial intelligence (AI) will require collective action: outcomes will depend on the actions that many different people take. In recent years, a sizable but disparate literature has looked at the challenges posed by collective action on AI, but this literature is generally not well grounded in the broader social science literature on collective action. This paper advances the study of collective action on AI by providing a …

Read More »

May Newsletter: 2021 Advising/Collaboration Program

Dear friends,

GCRI has just opened a new round of our advising and collaboration program. It is an open call for anyone who would like to connect with us. We are providing advice on career opportunities, research directions, and anything else related to global catastrophic risk. We are also discussing opportunities to collaborate on specific projects, including several active GCRI projects listed online. Whether you are new to the field or an old colleague seeking to reconnect, we welcome your inquiry. For further information, please see …

Read More »

February Newsletter: Welcoming Andrea Owe

Dear friends,

I am delighted to announce a new member of the GCRI team, Research Associate Andrea Owe. Andrea is an environmental and space ethicist based in Oslo. She first started working with GCRI a few years ago while she was a master’s student at the University of Oslo’s Centre for Development and the Environment. As a full-time research associate, Andrea will lead a project for GCRI on the ethics of AI and global catastrophic risk. She brings a valuable new perspective to GCRI, and we’re …

Read More »

GCRI Welcomes Research Associate Andrea Owe

GCRI is delighted to announce our newest team member, Andrea Owe. Andrea will work as a research associate, contributing primarily to GCRI’s research on ethics and artificial intelligence.

Andrea holds an M.Phil. in Development, Environment and Cultural Change from the Centre for Development and the Environment at the University of Oslo, and a B.A. in Fine Arts from the Royal Academy of Art, The Hague. At the University of Oslo, she was an Arne Næss Stipend holder and researcher at the Arne Næss Programme on …

Read More »

January Newsletter: Insurrection & AGI Survey

Dear friends,

January 6, 2021 was a dark day in the US. The violent insurrection at the US Capitol was terrible in its own right, but, as we discuss on the GCRI blog, it also had several links to global catastrophic risk, including visions of global genocide against non-whites and systems of disinformation that likewise undermine the governance of climate change and the COVID-19 pandemic.

GCRI is nonpartisan, and we welcome constructive contributions from people of all political views. We do not, however, welcome those who …

Read More »

November Newsletter: Year in Review

Dear friends,

What a year 2020 has been. The COVID-19 pandemic is already the most severe global catastrophe in decades, and it’s far from over. It shows the importance of addressing global catastrophic risk: a global catastrophe can upend everything else we’re doing and destroy so much of what we care about.

GCRI has been relatively fortunate during the pandemic. We have always been a remote collaboration organization, so we have been able to maintain a high degree of social distancing with relatively little impact on our …

Read More »