Countering Superintelligence Misinformation


On any public issue, having the right information helps us make the right decisions. This holds in particular for high-stakes issues like global catastrophic risks. Unfortunately, incorrect information, or misinformation, is sometimes spread. When this happens, it is important to set the record straight.

This paper studies misinformation about artificial superintelligence, which is AI that is much smarter than humans. Current AI is not superintelligent, but if superintelligence is built, it could have massive consequences. Misinformation about superintelligence could …


March Newsletter: Nuclear War Probability

Dear friends,

This month we are announcing a new paper, “A Model for the Probability of Nuclear War”, co-authored by Robert de Neufville, Tony Barrett, and me. The paper presents the most detailed accounting of the probability of nuclear war yet available.

The core of the paper is a model covering 14 scenarios for how nuclear war could occur. In 6 scenarios, a state intentionally starts nuclear war. In the other 8, a state mistakenly believes it is under nuclear attack by another state and starts nuclear …
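
To give a feel for how a scenario decomposition like this can turn into an overall probability, here is a minimal Python sketch. Only the scenario counts come from the description above; the per-scenario probabilities are invented placeholders, the independence assumption is mine, and the aggregation rule is a generic one rather than the paper's own method.

```python
# A minimal sketch of aggregating per-scenario risks, assuming the
# scenarios are mutually independent. The probabilities below are
# hypothetical placeholders, not figures from the paper.

# 6 scenarios where a state intentionally starts nuclear war
intentional = [1e-4] * 6
# 8 scenarios where a state mistakenly believes it is under attack
inadvertent = [2e-4] * 8

def prob_any(probs):
    """P(at least one scenario occurs) = 1 - prod(1 - p_i)."""
    complement = 1.0
    for p in probs:
        complement *= 1.0 - p
    return 1.0 - complement

annual = prob_any(intentional + inadvertent)
print(f"Illustrative annual probability of nuclear war: {annual:.4%}")
```

Under these assumptions the scenarios combine as 1 minus the product of their complements, which for small probabilities is close to their simple sum.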


January Newsletter: Superintelligence & Hawaii False Alarm

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and me. This paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel. It shows how risk analysis can capture …


December Newsletter: Year in Review

Dear friends,

It has been another productive year for GCRI. Though we have a limited budget, we’ve made major contributions to global catastrophic risk research. Here are some highlights:

* GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within the organization.

* GCRI affiliates presented at numerous other events throughout the year, including dedicated catastrophic risk events at UCLA and in Gothenburg.

* …


Modeling and Interpreting Expert Disagreement About Artificial Superintelligence


Artificial superintelligence (ASI) is artificial intelligence (AI) with capabilities that are significantly greater than human capabilities across a wide range of domains. A hallmark of the ASI issue is disagreement among experts. This paper demonstrates and discusses methodological options for modeling and interpreting expert disagreement about the risk of ASI catastrophe. Using a new model called ASI-PATH, the paper models a well-documented recent disagreement between Nick Bostrom and Ben Goertzel, two distinguished ASI experts. Three …
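
As a rough illustration of how a risk model can make expert disagreement concrete (a sketch of the general approach, not of ASI-PATH itself), consider a pathway model in which catastrophe requires a sequence of steps, so the conditional step probabilities multiply. The step names and the numbers assigned to each expert below are invented for illustration; they are not values from the paper or from Bostrom or Goertzel.

```python
# A minimal sketch of how disagreement over a pathway model's
# parameters propagates to the bottom-line risk estimate. Step names
# and per-expert probabilities are hypothetical illustrations, not
# values from ASI-PATH, Bostrom, or Goertzel.

from math import prod

STEPS = ["ASI is built", "ASI escapes human control", "ASI causes catastrophe"]

# Hypothetical conditional probabilities for each step of the pathway.
expert_a = {"ASI is built": 0.5, "ASI escapes human control": 0.8,
            "ASI causes catastrophe": 0.9}
expert_b = {"ASI is built": 0.5, "ASI escapes human control": 0.2,
            "ASI causes catastrophe": 0.3}

def pathway_risk(estimates):
    # Catastrophe requires every step to occur, so the conditional
    # step probabilities multiply along the pathway.
    return prod(estimates[step] for step in STEPS)

for name, est in (("Expert A", expert_a), ("Expert B", expert_b)):
    print(f"{name}: P(ASI catastrophe) = {pathway_risk(est):.3f}")
```

Even though the two hypothetical experts agree on the first step, their bottom lines differ by an order of magnitude, showing how disagreement on individual parameters propagates through a pathway model.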


May Newsletter: The Value of GCR Research

Dear friends,

People often ask me why we set GCRI up as a think tank instead of an organization that takes more direct action to reduce risk. The reason is that when it comes to global catastrophic risks, a little bit of well-designed research goes a long way. It helps us make better decisions about how to reduce the risks.

For example, last week I attended a political science workshop at Yale University on how to cost-effectively spend $10 billion to reduce the probability of war between …


Call for Papers: Informatica Special Issue on Superintelligence

GCRI associate Roman Yampolskiy and junior associate Matthijs Maas, along with Ryan Carey and Nell Watson, are guest-editing an upcoming Informatica special issue on superintelligence. The special issue will approach the topic in as multidisciplinary and visionary a manner as possible.

They are looking for original research, critical studies, and review articles on topics related to superintelligence, including in particular:

– Artificial Superintelligence
– Artificial General Intelligence
– Biological Superintelligence
– Brain-computer Interfaces
– Whole Brain Emulation
– Genetic Engineering
– Cognitive Enhancement
– Collective Superintelligence
– Neural Lace-Mediated Empathy
– Technological Singularity
– Intelligence Explosion
– Definition …


Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process


Already computers can outsmart humans in specific domains, like multiplication. But humans remain firmly in control… for now. Artificial superintelligence (ASI) is AI with intelligence that vastly exceeds humanity’s across a broad range of domains. Experts increasingly believe that ASI could be built sometime in the future, could take control of the planet away from humans, and could cause a global catastrophe. Alternatively, if ASI is built safely, it may …
