October Newsletter: The Superintelligence Debate

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As someone with a background in both, I am helping transfer insights from global warming debates to the study of AI debates. This month, we are announcing two new papers that do exactly this. They both focus on “superintelligence”, which is a potential form of future AI that could significantly outsmart humans, with major implications for global catastrophic risk.

The first paper, “Superintelligence skepticism as a political tool,” examines the possibility that doubt about the prospect of superintelligence could be sown intentionally in order to advance political aims such as avoiding government regulation of industry or protecting research funding. The paper draws on the history of politicized skepticism about risky but profitable technologies such as tobacco and fossil fuels, where it has taken forms like “climate skepticism” and “climate denialism”. It finds small hints of politicized superintelligence skepticism in current debates and potential for much more, especially if government regulation becomes a serious prospect.

The second paper, “Countering superintelligence misinformation,” studies how to prevent debates about superintelligence from being dominated by bad information. Whereas the first paper lays out the problem, this paper is all about solutions. Extensive psychology research finds that misinformation is difficult to correct in the human mind. Therefore, this paper emphasizes strategies for preventing superintelligence misinformation from spreading in the first place. It also surveys strategies for correcting misinformation after it has spread.

Both papers are published in the open-access journal Information. You can read more about them in the GCRI blog here and here, or read the papers directly in the journal here and here.

Sincerely,
Seth Baum, Executive Director

Artificial Intelligence

GCRI Executive Director Seth Baum has a pair of new superintelligence papers in the open-access journal Information. “Superintelligence skepticism as a political tool” examines the possibility that doubt about the prospect of superintelligence could be sown intentionally in order to advance political aims such as avoiding government regulation of industry or protecting research funding. “Countering superintelligence misinformation” studies how to prevent debates about superintelligence from being dominated by bad information.

GCRI Associate Roman Yampolskiy gave a talk on AI governance at the Joint Multi-Conference on Human Level Artificial Intelligence, held August 22-25 in Prague. Yampolskiy also did an interview about AI safety on the Super Data Science podcast.

GCRI Associate Roman Yampolskiy’s edited volume on the challenges of constructing safe and secure advanced machine intelligence, Artificial Intelligence Safety & Security, came out in August.

Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.

This post was written by Robert de Neufville, Director of Communications of the Global Catastrophic Risk Institute.