March Newsletter: Nuclear War Probability

Dear friends,

This month we are announcing a new paper, “A Model for the Probability of Nuclear War”, co-authored by Robert de Neufville, Tony Barrett, and me. The paper presents the most detailed accounting of the probability of nuclear war yet available.

The core of the paper is a model covering 14 scenarios for how nuclear war could occur. In 6 scenarios, a state intentionally starts nuclear war. In the other 8, a state mistakenly believes it is under nuclear attack by another state and starts nuclear …


A Model for the Probability of Nuclear War


The probability of nuclear war is a major factor in many important policy questions, but it has received little scholarly attention. This paper presents a model for calculating the total probability of nuclear war. The model is based on 14 interrelated scenarios for how nuclear war can break out, which may cover the entire range of nuclear war scenarios. Scenarios vary based on factors including whether a state intends to make a first strike attack, whether …


February Newsletter: Military AI – The View From DC

Dear friends,

This past month, GCRI has participated in two exclusive, invitation-only events in Washington, DC, discussing military and international security applications of artificial intelligence. First, GCRI Director of Research Tony Barrett attended a workshop hosted by the AI group at the think tank Center for a New American Security. Then I gave a talk on AI at a workshop on strategic stability hosted by the Federation of American Scientists.

These two events show that the DC international security community is quite interested in AI and its …


January Newsletter: Superintelligence & Hawaii False Alarm

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and me. This paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel. It shows how risk analysis can capture …


December Newsletter: Year in Review

Dear friends,

It has been another productive year for GCRI. Though we have a limited budget, we’ve made major contributions to global catastrophic risk research. Here are some highlights:

* GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within SRA.

* GCRI affiliates presented at numerous other events throughout the year, including dedicated catastrophic risk events at UCLA and Gothenburg.

* …


Modeling and Interpreting Expert Disagreement About Artificial Superintelligence


Artificial superintelligence (ASI) is artificial intelligence (AI) with capabilities that are significantly greater than human capabilities across a wide range of domains. A hallmark of the ASI issue is disagreement among experts. This paper demonstrates and discusses methodological options for modeling and interpreting expert disagreement about the risk of ASI catastrophe. Using a new model called ASI-PATH, the paper models a well-documented recent disagreement between Nick Bostrom and Ben Goertzel, two distinguished ASI experts. Three …


November Newsletter: Survey of AI Projects

Dear friends,

This month we are announcing a new paper, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. This is more than a typical research paper: it is a 99-page document pulling together several months of careful work. It documents and analyzes what is currently happening in artificial general intelligence (AGI) R&D, in terms that are useful for risk management, policy, and related purposes. Essentially, this is what we need to know about AGI R&D to make a difference on the issue.

AGI is AI …


A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy


Artificial general intelligence (AGI) is AI that can reason across a wide range of domains. While most AI research and development (R&D) is on narrow AI, not AGI, there is some dedicated AGI R&D. If AGI is built, its impacts could be profound. Depending on how it is designed and used, it could either help solve the world’s problems or cause catastrophe, possibly even human extinction.

This paper presents the first-ever survey …


On the Promotion of Safe and Socially Beneficial Artificial Intelligence


As AI becomes increasingly capable, its impact on society grows. AI is now being used in medicine, transportation (self-driving cars), the military (drones), and many other sectors. The impacts of AI on society depend largely on how the AI is designed. To improve AI design, two challenges must be met. There is the technical challenge of developing safe and beneficial technology designs, and there is the social …


Liability Law for Present and Future Robotics Technology


Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and a number of other sectors. While many of these changes are beneficial, there will inevitably be some harms. Who or what is liable when a robot causes harm? This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Already, …
