Working Paper: On the Promotion of Safe and Socially Beneficial Artificial Intelligence

GCRI is launching a working paper series with a new paper, "On the Promotion of Safe and Socially Beneficial Artificial Intelligence," by Seth Baum.

Abstract
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to …

February Newsletter: The Year Ahead

Dear friends,

One year ago, GCRI announced a new direction focused on research to develop the best ways to confront humanity’s gravest threats. Over the past year, we’ve delivered:

* An edited collection, Confronting Future Catastrophic Threats to Humanity, containing ten original research papers including five by GCRI affiliates
* Six additional research papers, making for a total of nine peer-reviewed journal articles and two book chapters
* Nineteen popular articles in publications such as the Guardian and the Bulletin of the Atomic Scientists
* Two symposia at the Society …

December Newsletter: A Focus On Solutions

Dear friends,

This holiday season, please consider supporting the Global Catastrophic Risk Institute. You can donate online or contact me for further information. At this time, GCRI’s success is limited mainly by its available funding. And nothing beats giving the gift of protection from global catastrophe.

In my view, what’s ultimately important is not the risks themselves but the actions we can take to reduce them. A risk could be very large, but if we can’t do anything about it, then we should focus on something else. …

September Newsletter: AI, Nuclear War, and News Projects

Dear friends,

I’m delighted to announce three new funded projects. Two of them are for risk modeling, on artificial intelligence and nuclear war. These follow directly from our established nuclear war and emerging technologies research projects. The third is for covering current events across the breadth of global catastrophic risk topics. This follows directly from our news summaries. It is an honor to be recognized for our work and to have the opportunity to expand it. Please stay tuned as these projects unfold.

As always, thank you …

New Global Challenges Foundation Projects

GCRI has two new funded projects with the Global Challenges Foundation, a philanthropic foundation based in Stockholm.

The first project is a quarterly report of everything going on in the world of global catastrophic risks. The reports will be an expanded version of our monthly news summaries, with some new features and an emphasis on work going on around the world to reduce the risks.

The second project is a risk analysis of nuclear war. Prior GCRI nuclear war research modeled the probability of specific nuclear war …

FLI Artificial Superintelligence Project

I am writing to announce that GCRI has received a grant from the Future of Life Institute, with funding provided by Elon Musk and the Open Philanthropy Project. The official announcement is here and the full list of awardees is here.

GCRI’s project team includes Tony Barrett, Roman Yampolskiy, and me. Here is the project title and summary:

Evaluation of Safe Development Pathways for Artificial Superintelligence

Some experts believe that computers could eventually become a lot smarter than humans are. They call it artificial superintelligence, or ASI. If …

June Newsletter: The Winter-Safe Deterrence Controversy

Dear friends,

The last few months have gone well for GCRI. We have several new papers out, two new student affiliates, and some projects in the works that I hope to announce in an upcoming newsletter. Meanwhile, I’d like to share with you about a little controversy we recently found ourselves in.

The controversy surrounds a new research paper of mine titled Winter-safe deterrence: The risk of nuclear winter and its challenge to deterrence. The essence of winter-safe deterrence is to seek options for deterrence that would …

February Newsletter: New Directions For GCRI

Dear friends,

I am delighted to announce important changes in GCRI’s identity and direction. GCRI is now just over three years old. In these years we have learned a lot about how we can best contribute to the issue of global catastrophic risk. Initially, GCRI aimed to lead a large global catastrophic risk community while also performing original research. This aim is captured in GCRI’s original mission statement, to help mobilize the world’s intellectual and professional resources to meet humanity’s gravest threats.

Our community building has been …

January Newsletter: Vienna Conference on Nuclear Weapons

Dear friends,

In December, I had the honor of speaking at the Vienna Conference on the Humanitarian Impact of Nuclear Weapons, hosted by the Austrian Foreign Ministry in the lavish Hofburg Palace. The audience of 1,000 people represented 158 national governments, plus leading nuclear weapons NGOs, experts, and members of the media.

My talk “What is the risk of nuclear war?” presented core themes from the risk analysis of nuclear war. I explained that each of us is, on average, more likely to die from nuclear war …

GCRI Welcomes Professional Assistant Steven Umbrello

We’re pleased to announce our newest affiliate, Professional Assistant Steven Umbrello. Steven has been helping GCRI with a variety of projects for some time now and we’re delighted to recognize him for his contributions. Here’s his bio from the GCRI People page.

Steven is a B.A. student in philosophy of science at the University of Toronto Mississauga. Steven is also Owner & Executive Editor of the Leather Library Blog, a website dedicated to academic philosophy, psychology, and literature. As a Professional Assistant, Steven contributes to GCRI’s overall research …
