May Newsletter: Molecular Nanotechnology

Dear friends,

It has been a productive month for GCRI, with new papers by several of our affiliates. Here, I would like to highlight one by Steven Umbrello and myself, on the topic of molecular nanotechnology, also known as atomically precise manufacturing (APM).

At present, APM exists only in a crude form, such as the work recognized by the 2016 Nobel Prize in Chemistry. However, it may be able to revolutionize manufacturing, making it inexpensive and easy to produce a wide range of goods, resulting in what …


April Newsletter: Nuclear War Impacts

Dear friends,

This month we are announcing a new paper, “A Model for the Impacts of Nuclear War”, co-authored by Tony Barrett and myself. The paper presents a detailed and comprehensive accounting of the many ways that nuclear war can harm human beings and human society. It is a counterpart to the paper we announced last month, “A Model for the Probability of Nuclear War”.

The model has five main branches corresponding to the five main types of impacts of nuclear weapon detonations: thermal radiation, blast, ionizing …
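
One simple way to read that branching structure (a schematic sketch with assumed notation, not the paper's own formulation) is as an aggregation of the harm assessed along each of the five impact-type branches:

```latex
% Schematic only: I_k denotes the impact assessed along branch k of the five
% impact-type branches (e.g., k = 1 for thermal radiation, k = 2 for blast).
\[
I_{\text{total}} \;=\; \sum_{k=1}^{5} I_k
\]
```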


March Newsletter: Nuclear War Probability

Dear friends,

This month we are announcing a new paper, “A Model for the Probability of Nuclear War”, co-authored by Robert de Neufville, Tony Barrett, and myself. The paper presents the most detailed accounting of the probability of nuclear war yet available.

The core of the paper is a model covering 14 scenarios for how nuclear war could occur. In 6 scenarios, a state intentionally starts nuclear war. In the other 8, a state mistakenly believes it is under nuclear attack by another state and starts nuclear …
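
As a rough illustration of that structure (a sketch with assumed notation, not the paper's own model), treating the 14 initiation scenarios as mutually exclusive gives the aggregate probability of nuclear war as the sum of the per-scenario probabilities:

```latex
% Sketch with assumed notation (uses amsmath's \text): P(S_i) is the probability
% that scenario i initiates nuclear war; scenarios 1-6 are intentional,
% scenarios 7-14 are mistaken-attack scenarios.
\[
P(\text{nuclear war}) \;=\; \sum_{i=1}^{14} P(S_i)
  \;=\; \underbrace{\sum_{i=1}^{6} P(S_i)}_{\text{intentional}}
  \;+\; \underbrace{\sum_{i=7}^{14} P(S_i)}_{\text{mistaken attack}}
\]
```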


February Newsletter: Military AI – The View From DC

Dear friends,

This past month, GCRI participated in two invitation-only events in Washington, DC, discussing military and international security applications of artificial intelligence. First, GCRI Director of Research Tony Barrett attended a workshop hosted by the AI group at the think tank Center for a New American Security. Then I gave a talk on AI at a workshop on strategic stability hosted by the Federation of American Scientists.

These two events show that the DC international security community is quite interested in AI and its …


January Newsletter: Superintelligence & Hawaii False Alarm

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and myself. This paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel. It shows how risk analysis can capture …


December Newsletter: Year in Review

Dear friends,

It has been another productive year for GCRI. Though we have a limited budget, we’ve made major contributions to global catastrophic risk research. Here are some highlights:

* GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within SRA.

* GCRI affiliates presented at numerous other events throughout the year, including dedicated catastrophic risk events at UCLA and in Gothenburg.

* …


November Newsletter: Survey of AI Projects

Dear friends,

This month we are announcing a new paper, “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”. This is more than the usual research paper: it’s 99 pages pulling together several months of careful work. It documents and analyzes what’s going on right now in artificial general intelligence (AGI) R&D in terms that are useful for risk management, policy, and related purposes. Essentially, this is what we need to know about AGI R&D to make a difference on the issue.

AGI is AI …


October Newsletter: How To Reduce Risk

Dear friends,

As we speak, a group of researchers is meeting in Gothenburg, Sweden, on the theme of existential risk. I joined them earlier in September. My commendations to Olle Häggström and Anders Sandberg for hosting an excellent event.

My talk in Gothenburg focused on how to find the best opportunities to reduce risk. The best opportunities are often a few steps removed from academic risk and policy analysis. For example, there is a large research literature on climate change policy, much of which factors in catastrophic risk. However, the …


Towards an Integrated Assessment of Global Catastrophic Risk

View the paper “Towards an Integrated Assessment of Global Catastrophic Risk” 

Integrated assessment is an analysis of a topic that integrates multiple lines of research. Integrated assessments are thus inherently interdisciplinary. They are generally oriented toward practical problems, often in the context of public policy, and frequently concern topics in science and technology. This paper presents a concept for and some initial work towards an integrated assessment of global catastrophic risk (GCR). Generally speaking, GCR is the risk of significant harm to global human civilization. More …


September Newsletter: 2017 Society for Risk Analysis Meeting

Dear friends,

Each year, GCRI hosts sessions on global catastrophic risk at the annual meeting of the Society for Risk Analysis, which is the leading academic and professional society for all things risk. This year, we have had three full sessions accepted for the meeting, our most ever. SRA is competitive, and we are honored to have all three accepted.

For those of you who are interested in SRA but haven’t attended the meeting before, this would be a good year to come. SRA has a …
