November Newsletter: Organization Updates

Dear friends,

We are implementing some major updates to bring the organization more in line with our ongoing work and future plans.

First, we’ve published new pages for the seven topics that our work focuses on:

Aftermath of Global Catastrophe
Artificial Intelligence
Cross-Risk Evaluation & Prioritization
Nanotechnology
Nuclear War
Risk & Decision Analysis
Solutions & Strategy

Second, we’ve overhauled our affiliates, removing Associates and Junior Associates and creating a new Senior Advisory Council. We are delighted to welcome Gary Ackerman, John Garrick, and Seán Ó hÉigeartaigh as GCRI’s …


October Newsletter: The Superintelligence Debate

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As …


August Newsletter: Long-Term Trajectories

Dear friends,

This month I am proud to announce a new paper, “Long-Term Trajectories of Human Civilization”. The paper calls for attention to the fate of human civilization over time scales of millions, billions, or trillions of years into the future. While most attention goes to nearer-term phenomena, the long-term can be profoundly important to present-day decision-making. For example, one major issue the paper examines is the fate of global catastrophe survivors. How well they fare is a central factor in whether people today should focus …


June Newsletter: Summer Talks

Artificial Intelligence

GCRI Associate Roman Yampolskiy gave a talk on AI safety at the Global Challenges Summit 2018 in Astana, Kazakhstan, May 17-19.

GCRI Executive Director Seth Baum and GCRI Associate Roman Yampolskiy participated in a workshop on “AI Coordination & Great Powers” hosted by Foresight Institute in San Francisco on June 7.

GCRI Executive Director Seth Baum gave a seminar on “AI Risk, Ethics, Social Science, and Policy” hosted by the University of California, Berkeley Center for Human-Compatible Artificial Intelligence (CHAI) on June 11.

Effective Altruism

GCRI Executive Director Seth Baum …


May Newsletter: Molecular Nanotechnology

Dear friends,

It has been a productive month for GCRI, with new papers by several of our affiliates. Here, I would like to highlight one by Steven Umbrello and myself, on the topic of molecular nanotechnology, also known as atomically precise manufacturing (APM).

At present, APM exists only in a crude form, such as the work recognized by the 2016 Nobel Prize in Chemistry. However, it may be able to revolutionize manufacturing, making it inexpensive and easy to produce a wide range of goods, resulting in what …


March Newsletter: Nuclear War Probability

Dear friends,

This month we are announcing a new paper, “A Model for the Probability of Nuclear War”, co-authored by Robert de Neufville, Tony Barrett, and myself. The paper presents the most detailed accounting of the probability of nuclear war yet available.

The core of the paper is a model covering 14 scenarios for how nuclear war could occur. In six scenarios, a state intentionally starts nuclear war. In the other eight, a state mistakenly believes it is under nuclear attack by another state and starts nuclear …


February Newsletter: Military AI – The View From DC

Dear friends,

This past month, GCRI has participated in two exclusive, invitation-only events in Washington, DC, discussing military and international security applications of artificial intelligence. First, GCRI Director of Research Tony Barrett attended a workshop hosted by the AI group at the think tank Center for a New American Security. Then I gave a talk on AI at a workshop on strategic stability hosted by the Federation of American Scientists.

These two events show that the DC international security community is quite interested in AI and its …


January Newsletter: Superintelligence & Hawaii False Alarm

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and myself. This paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel. It shows how risk analysis can capture …


Modeling and Interpreting Expert Disagreement About Artificial Superintelligence

View the paper “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”

Artificial superintelligence (ASI) is artificial intelligence (AI) with capabilities that are significantly greater than human capabilities across a wide range of domains. A hallmark of the ASI issue is disagreement among experts. This paper demonstrates and discusses methodological options for modeling and interpreting expert disagreement about the risk of ASI catastrophe. Using a new model called ASI-PATH, the paper models a well-documented recent disagreement between Nick Bostrom and Ben Goertzel, two distinguished ASI experts. Three …


November Newsletter: Survey of AI Projects

Dear friends,

This month we are announcing a new paper, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. This is more than the usual research paper: at 99 pages, it pulls together several months of careful work. It documents and analyzes what’s going on right now in artificial general intelligence (AGI) R&D in terms that are useful for risk management, policy, and related purposes. Essentially, this is what we need to know about AGI R&D to make a difference on the issue.

AGI is AI …
