Countering Superintelligence Misinformation


On any public issue, having the right information helps us make the right decisions. This holds especially for high-stakes issues like global catastrophic risks. Unfortunately, incorrect information, or misinformation, is sometimes spread. When this happens, it is important to set the record straight.

This paper studies misinformation about artificial superintelligence, meaning AI that is much smarter than humans. Current AI is not superintelligent, but if superintelligence is built, it could have massive consequences. Misinformation about superintelligence could lead to the wrong superintelligence being built, resulting in catastrophe. It could also lead to the right superintelligence not being built, leaving humanity exposed to a catastrophe that the superintelligence could have helped prevent.

This paper focuses less on superintelligence misinformation itself and more on what to do about it. The paper draws heavily on the extensive study of misinformation in psychology, political science, and related fields, including misinformation about other high-stakes issues such as global warming. It is a follow-up to the paper Superintelligence skepticism as a political tool, which discusses the prospect of doubt about superintelligence being intentionally sown in order to advance political objectives such as avoiding government regulation of the AI industry or protecting research funding.

A central theme of Countering superintelligence misinformation is that misinformation is difficult to correct in the human mind. Whereas computer memory can be neatly overwritten, human memory lingers even after we receive and accept corrections. This persistence is advantageous in some respects, but it makes misinformation hard to dislodge. It is therefore often more effective to prevent misinformation from spreading in the first place. For misinformation, the cliché “an ounce of prevention is worth a pound of cure” may well be an understatement.

Unfortunately, the research literature on misinformation focuses mainly on correcting it and has little to say about preventing its spread. The paper therefore had to be somewhat more creative in developing prevention strategies. It proposes the following:

  • Educate prominent voices about superintelligence, so that they spread the correct information
  • Create reputational costs for people who spread superintelligence misinformation, to further encourage them not to do so
  • Mobilize against institutional sources of misinformation, such as corporations with a financial incentive to promote the idea that their technologies are safe and good for the world
  • Focus media attention on constructive debates about superintelligence, instead of debates on whether misinformation might actually be correct
  • Establish legal requirements against misinformation, such as was done with the tobacco industry in the court case United States v. Philip Morris

In the event that superintelligence misinformation cannot be prevented, the paper proposes the following for correcting it:

  • Build expert consensus about superintelligence, and then raise awareness about the existence of consensus, because people are often more likely to believe information when it is backed by expert consensus
  • Address the pre-existing motivations that people may have for believing superintelligence misinformation, such as when the correct information conflicts with pre-existing worldviews or threatens their sense of self-worth
  • “Inoculate” people with advance warnings about misinformation, so they are prepared when they are exposed to it
  • When possible, provide detailed explanations of superintelligence misinformation, the correct information, and why one is correct while the other isn’t, so that people can more successfully update their mental conceptions of superintelligence

To a large extent, these ideas apply to misinformation in general, not just misinformation about superintelligence. The paper tailors them to the superintelligence context.

For example, an important group of prominent voices about superintelligence is AI researchers, including those whose specialty is something other than superintelligence. Efforts to improve their understanding of superintelligence can learn from efforts to help broadcast meteorologists better understand climate change. For much of the public, broadcast meteorologists are the most visible and respected voices on climate change, but their expertise is in short-term weather, not long-term climate. This is directly analogous to AI researchers whose expertise is in some part of AI other than superintelligence.

Another example concerns how superintelligence can threaten people’s sense of self-worth. Because superintelligence is a prospective technology that could fundamentally outsmart and outperform humans, it could pose an especially profound threat to self-worth. This gives people a subconscious reason to doubt claims that superintelligence is likely, even when those claims are logically sound. To counter this effect, such claims could be paired with messages that affirm people’s sense of self-worth, making them more receptive to the claims.

The paper is part of an ongoing effort by the Global Catastrophic Risk Institute to accelerate the study of the social and policy dimensions of AI by leveraging insights from other fields. Other examples include Superintelligence skepticism as a political tool, the precursor to this paper; On the promotion of safe and socially beneficial artificial intelligence, which draws on environmental psychology to study how to motivate AI researchers to pursue socially beneficial AI designs; and ongoing research modeling the risk of artificial superintelligence, which applies risk analysis techniques that GCRI previously used for the risk of nuclear war. This capacity to leverage insights from other fields speaks to the value of GCRI’s cross-risk approach to the study of global catastrophic risk.

Academic citation:
Baum, Seth D., 2018. Countering superintelligence misinformation. Information, vol. 9, no. 10, article 244, DOI 10.3390/info9100244.

Image credit: Melissa Thomas Baum

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.