January Newsletter: Superintelligence & Hawaii False Alarm

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and myself. This paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel. It shows how risk analysis can capture key features of the debate to guide important AI risk management decisions.

This January also saw a nuclear war false alarm in Hawaii. The Hawaii Emergency Management Agency accidentally sent out text messages stating “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” GCRI Director of Communications Robert de Neufville lives in Honolulu and experienced the incident firsthand. You can read his vivid description of it here. It is a good reminder of the terror that nuclear weapons can bring. In the coming months, GCRI will release new research showing how events like this can be used to analyze the risk of nuclear war.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference in December, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within SRA.

GCRI executive director Seth Baum and director of research Tony Barrett have a paper titled “Towards an Integrated Assessment of Global Catastrophic Risk” in B.J. Garrick’s forthcoming edited volume, Proceedings of the First Colloquium on Catastrophic and Existential Risk.

GCRI associate Dave Denkenberger has a paper with Alexey Turchin titled “Global Catastrophic and Existential Risks Communications Scale”, forthcoming in Futures, which proposes a Torino Scale-style rating for catastrophic and existential risks.

Artificial Intelligence

GCRI associate Roman Yampolskiy and junior associate Matthijs Maas, along with Ryan Carey and Nell Watson, edited a special issue of Informatica on superintelligence. GCRI executive director Seth Baum, director of research Tony Barrett, and Yampolskiy contributed a paper titled “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”. GCRI associate Dave Denkenberger also contributed a paper with Mikhail Batin, Alexey Turchin, Sergey Markov, and Alisa Zhila titled “Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence”.

GCRI junior associate Matthijs Maas is presenting a paper titled “Regulating for ‘Normal AI Accidents’: Operational Lessons for the Responsible Governance of AI Deployment” at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in New Orleans in February.

Food Security

GCRI associate Dave Denkenberger gave a presentation titled “Feeding the Earth If There Is a Global Agricultural Catastrophe” at the International Food Policy Research Institute and at the Society for Risk Analysis (SRA) conference in December in Washington, DC.

Volcano Eruptions

GCRI associate Dave Denkenberger has a paper with Robert W. Blair, Jr. titled “Interventions That May Prevent or Mollify Supervolcanic Eruptions”, forthcoming in Futures.

This post was written by Robert de Neufville, Director of Communications of the Global Catastrophic Risk Institute.