Superintelligence Skepticism As A Political Tool

View the paper “Superintelligence Skepticism as a Political Tool”

For decades, uncertainty about science and technology has been exploited for political purposes. This practice traces back to the tobacco industry's efforts to sow doubt about the link between tobacco and cancer, and it can be seen today in skepticism about climate change and other major risks. This paper analyzes the possibility that the same could happen for superintelligence, a potential future artificial intelligence technology.

Artificial superintelligence is AI that is much smarter than humans. Current AI is not superintelligent. Some people believe that superintelligence can be built and that, if built, it would have extreme consequences, either good or bad depending on its design. Other people are skeptical of these claims, and of the claim that the issue is important enough to merit attention today. That skepticism could become politicized, as skepticism has on other issues.

The paper examines current superintelligence skepticism and finds that it is sometimes used politically, though not nearly to the extent found for issues like climate change. Some AI researchers appear to profess superintelligence skepticism in order to protect the reputation and funding of their field. Some AI technology corporations show hints of politicized skepticism, but not to any significant extent. However, if superintelligence skepticism were politicized, it could prove highly effective, in part because of the difficulty of resolving uncertainty about this possible future technology.

The paper is part of an ongoing effort by the Global Catastrophic Risk Institute to accelerate the study of the social and policy dimensions of AI by leveraging insights from other fields. Other examples include the paper On the promotion of safe and socially beneficial artificial intelligence, which draws on environmental psychology to study how to motivate AI researchers to pursue socially beneficial AI designs, and ongoing research modeling the risk of artificial superintelligence (see this, this, and this), which leverages risk analysis techniques that GCRI previously used for the risk of nuclear war. This capacity to leverage insights from other fields speaks to the value of GCRI's cross-risk approach to the study of global catastrophic risk.

Academic citation:
Baum, Seth D., 2018. Superintelligence skepticism as a political tool. Information, vol. 9, no. 9, article 209, DOI 10.3390/info9090209.

Image credit: Melissa Thomas Baum

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.