October Newsletter: The Superintelligence Debate

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As …

August Newsletter: Long-Term Trajectories

Dear friends,

This month I am proud to announce a new paper, “Long-Term Trajectories of Human Civilization”. The paper calls for attention to the fate of human civilization over time scales of millions, billions, or trillions of years into the future. While most attention goes to nearer-term phenomena, the long-term can be profoundly important to present-day decision-making. For example, one major issue the paper examines is the fate of global catastrophe survivors. How well they fare is a central factor in whether people today should focus …

June Newsletter: Summer Talks

Artificial Intelligence

GCRI Associate Roman Yampolskiy gave a talk on AI safety at the Global Challenges Summit 2018 in Astana, Kazakhstan, May 17-19.

GCRI Executive Director Seth Baum and GCRI Associate Roman Yampolskiy participated in a workshop on “AI Coordination & Great Powers” hosted by Foresight Institute in San Francisco on June 7.

GCRI Executive Director Seth Baum gave a seminar on “AI Risk, Ethics, Social Science, and Policy” hosted by the University of California, Berkeley Center for Human-Compatible Artificial Intelligence (CHAI) on June 11.

Effective Altruism

GCRI Executive Director Seth Baum …

May Newsletter: Molecular Nanotechnology

Dear friends,

It has been a productive month for GCRI, with new papers by several of our affiliates. Here, I would like to highlight one by Steven Umbrello and myself, on the topic of molecular nanotechnology, also known as atomically precise manufacturing (APM).

At present, APM exists only in a crude form, such as the work recognized by the 2016 Nobel Prize in Chemistry. However, it could eventually revolutionize manufacturing, making a wide range of goods inexpensive and easy to produce, resulting in what …

March Newsletter: Nuclear War Probability

Dear friends,

This month we are announcing a new paper, “A Model for the Probability of Nuclear War”, co-authored by Robert de Neufville, Tony Barrett, and myself. The paper presents the most detailed accounting of the probability of nuclear war yet available.

The core of the paper is a model covering 14 scenarios for how nuclear war could occur. In 6 scenarios, a state intentionally starts nuclear war. In the other 8, a state mistakenly believes it is under nuclear attack by another state and starts nuclear …
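To give a feel for how a scenario decomposition like this can feed into a probability estimate, here is a minimal sketch. It is not the model from the paper; the scenario labels and numbers below are hypothetical, and it assumes the scenarios are roughly mutually exclusive with annualized probabilities.

```python
# Illustrative sketch only: aggregating an annual probability of nuclear war
# from per-scenario probabilities. Labels and numbers are hypothetical,
# not taken from the GCRI paper.

intentional = {  # a state deliberately initiates nuclear war
    "preemptive_strike": 0.0004,
    "crisis_escalation": 0.0006,
}
inadvertent = {  # a state responds to a falsely perceived attack
    "false_alarm_technical": 0.0005,
    "false_alarm_human_error": 0.0003,
}

# Treating scenarios as (approximately) mutually exclusive, the total
# annual probability is the sum over all scenarios.
p_total = sum(intentional.values()) + sum(inadvertent.values())
print(f"Illustrative annual probability of nuclear war: {p_total:.4f}")
```

The paper's model distinguishes 14 such scenarios (6 intentional, 8 mistaken attack); the sketch shows only the additive structure across scenario categories.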

February Newsletter: Military AI – The View From DC

Dear friends,

This past month, GCRI participated in two invitation-only events in Washington, DC, discussing military and international security applications of artificial intelligence. First, GCRI Director of Research Tony Barrett attended a workshop hosted by the AI group at the think tank Center for a New American Security. Then I gave a talk on AI at a workshop on strategic stability hosted by the Federation of American Scientists.

These two events show that the DC international security community is quite interested in AI and its …

January Newsletter: Superintelligence & Hawaii False Alarm

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and myself. This paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel. It shows how risk analysis can capture …

December Newsletter: Year in Review

Dear friends,

It has been another productive year for GCRI. Though we have a limited budget, we’ve made major contributions to global catastrophic risk research. Here are some highlights:

* GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within the society.

* GCRI affiliates presented at numerous other events throughout the year, including dedicated catastrophic risk events at UCLA and Gothenburg.

* …

November Newsletter: Survey of AI Projects

Dear friends,

This month we are announcing a new paper, “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”. This is more than the usual research paper: it’s 99 pages pulling together several months of careful work. It documents and analyzes what’s going on right now in artificial general intelligence (AGI) R&D in terms that are useful for risk management, policy, and related purposes. Essentially, this is what we need to know about AGI R&D to make a difference on the issue.

AGI is AI …

September Newsletter: 2017 Society for Risk Analysis Meeting

Dear friends,

Each year, GCRI hosts sessions on global catastrophic risk at the annual meeting of the Society for Risk Analysis, which is the leading academic and professional society for all things risk. This year, we had three full sessions accepted for the meeting, our most ever. SRA is competitive, and we are honored by the acceptance.

For those of you who are interested in SRA but haven’t attended the meeting before, this would be a good year to come. SRA has a …
