August Newsletter: New Director of Communications

Dear friends,

It is my pleasure to announce that longtime GCRI Associate Robert de Neufville has been promoted to the position of Director of Communications. Robert will oversee GCRI’s website and newsletter, as well as lead a renewed media outreach program. He also joins Tony Barrett, Grant Wilson, and me on GCRI’s leadership team. Robert’s work is funded through a donation GCRI recently secured from Pattern, an AI company that, like GCRI, has a “geographically decentralized” structure in which workers can live anywhere in the world. We …

Read More »

Working Paper: On the Promotion of Safe and Socially Beneficial Artificial Intelligence

GCRI is launching a working paper series with a new paper, “On the Promotion of Safe and Socially Beneficial Artificial Intelligence,” by Seth Baum.

Abstract
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to …

Read More »

The Ethics of Outer Space: A Consequentialist Perspective

View the paper “The Ethics of Outer Space: A Consequentialist Perspective”

Outer space is of major interest to consequentialist ethics for two basic reasons. First, the vast expanses of outer space offer opportunities for achieving far more good or bad consequences than can be achieved on Earth alone. If consequences are valued equally regardless of where they occur, then achieving good consequences in space is of paramount importance. For human civilization, this can mean the building of space colonies or even the macroengineering of structures like Dyson …

Read More »

GCRI Experts Featured in Science Magazine

GCRI’s Seth Baum and Dave Denkenberger are featured in an article in Science magazine titled “Here’s how the world could end—and what we can do about it”.

Read More »

Podcast: Earthquakes as Global Catastrophic Risk

GCRI’s Seth Baum participated in a podcast hosted by Ariel Conn of the Future of Life Institute (FLI) on the question of whether earthquakes could be a global catastrophic risk. You can listen to the podcast here.

Read More »

Alternative Foods as a Solution to Global Food Supply Catastrophes

View the paper “Alternative Foods as a Solution to Global Food Supply Catastrophes”

Analysis of future food security typically focuses on managing gradual trends such as population growth, natural resource depletion, and environmental degradation. However, several risks threaten to cause large and abrupt declines in food security. For example, nuclear war, volcanic eruptions, and asteroid impact events can block sunlight, causing abrupt global cooling. In extreme but entirely possible cases, these events could make agriculture infeasible worldwide for several years, creating a food supply catastrophe of …

Read More »

GCR News Summary May 2016

President Obama and Prime Minister Abe at the Hiroshima Peace Memorial (image courtesy of Pete Souza/The White House)

President Obama became the first sitting US president to visit Hiroshima, a little more than a month after Secretary of State John Kerry became the highest-ranking US official to visit the city where the US detonated a nuclear weapon at the end of World War II. Obama laid a wreath at the Hiroshima Peace Memorial, but did not apologize for the use of nuclear weapons against Japan. …

Read More »

GCR News Summary April 2016

Alpha Centauri and the Southern Cross (image courtesy of Claus Madsen/ESO, CC BY 4.0)

John Kerry became the first US Secretary of State to visit the site of the US nuclear attack on Hiroshima. Kerry wrote in the Hiroshima Peace Memorial guest book that the site was “a stark, harsh, compelling reminder not only of our obligation to end the threat of nuclear weapons, but to rededicate all our effort to avoid war itself.” William J. Broad and David E. Sanger wrote in The New York Times …

Read More »

A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis

View the paper “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”

This paper analyzes the risk of a catastrophe scenario involving self-improving artificial intelligence. A self-improving AI is one that makes itself smarter and more capable. In this scenario, the self-improvement is recursive, meaning that the improved AI makes an even more improved AI, and so on. This causes a takeoff of successively more intelligent AIs. The result is an artificial superintelligence (ASI), which is an AI that is significantly more …

Read More »

GCR News Summary March 2016

Go game (image courtesy of Jaro Larnnos under a Creative Commons license)

Google DeepMind’s AlphaGo computer program beat 9-dan professional Go player Lee Se-dol 4-1 in a five-game match. Lee has won 18 international titles and is widely regarded as one of the best Go players in the world. AlphaGo made a number of decisive moves that human players found completely surprising and “beautiful”. The South Korean Go Association granted AlphaGo an honorary 9-dan ranking for its “sincere efforts” to master the game at a level approaching …

Read More »