Lessons for Artificial Intelligence from Other Global Risks

View the paper “Lessons for Artificial Intelligence from Other Global Risks”

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can potentially learn a lot from the study of similar, better-studied risks. GCRI's new paper applies lessons from four other risks to the study of AI risk: biotechnology, nuclear weapons, global warming, and asteroids. The paper is co-authored by GCRI's Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior Advisor Gary Ackerman. It will be published in a new CRC Press collection edited by Maurizio Tinnirello titled The Global Politics of Artificial Intelligence.

Each of the four other risks offers valuable insights for the study of AI risk. Biotechnology and AI are both risky technologies with many beneficial applications. Episodes like the 1975 Asilomar Conference on Recombinant DNA Molecules and the ongoing debate over gain-of-function research show how controversies about the development and use of risky technologies can play out. Nuclear weapons and AI are both potentially of paramount strategic importance to major military powers. The initial race to build nuclear weapons shows what a race to build AI could be like. Global warming and AI risk are both in part the product of the profit-seeking of powerful global corporations. The fossil fuel industry's attempts to downplay the dangers of global warming show one path corporate AI development could take. Finally, asteroid risk and AI risk are both risks of the highest severity. The history of asteroid risk management shows that policymakers can learn to take seriously even risks with a high "giggle factor".

The paper draws several important overarching lessons for AI from the four global risks it surveys. First, the extreme severity of global risks may not be sufficient to motivate action to reduce them. Second, how people perceive global risks is influenced by both their incentives and their cultural and intellectual orientations. These influences may be especially strong when the size of the risk is uncertain. Third, the success of efforts to address global risks often depends on whether those efforts have the support of people who stand to lose from them. Fourth, the risks themselves and efforts to address them are often heavily shaped by broader social and political conditions.

The paper also demonstrates the value of learning lessons for global catastrophic risk from other risks. This is one reason why GCRI has always emphasized studying multiple global catastrophic risks. Another reason is that studying multiple risks allows cross-risk evaluation and prioritization.

Academic citation:
Baum, Seth D., Robert de Neufville, Anthony M. Barrett, and Gary Ackerman, 2022. Lessons for artificial intelligence from other global risks. In Maurizio Tinnirello (editor), The Global Politics of Artificial Intelligence. Boca Raton: CRC Press, pages 103-131.

Image credits:
Computer chip: Aler Kiv
Influenza virus: US Centers for Disease Control and Prevention
Nuclear weapon explosion: US National Nuclear Security Administration Nevada Field Office
Asteroid: NASA
Smoke stacks: Frank J. Aleksandrowicz

This post was written by Robert de Neufville, Director of Communications of the Global Catastrophic Risk Institute.