Working Paper: On the Promotion of Safe and Socially Beneficial Artificial Intelligence

GCRI is launching a working paper series with a new paper, "On the Promotion of Safe and Socially Beneficial Artificial Intelligence," by Seth Baum.

Abstract
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints or incentives on AI researchers to induce them to pursue beneficial AI even if they do not want to. Intrinsic measures encourage AI researchers to want to pursue beneficial AI. Prior research focuses on extrinsic measures, but intrinsic measures are at least as important. Indeed, intrinsic factors can determine the success of extrinsic measures. Efforts to promote beneficial AI must consider intrinsic factors by studying the social psychology of AI research communities.

Working papers are published to share ideas and promote discussion. They have not necessarily gone through peer review. The views therein are the authors’ and are not necessarily the views of the Global Catastrophic Risk Institute.

Academic citation:
Baum, Seth D., 2016. On the Promotion of Safe and Socially Beneficial Artificial Intelligence. Global Catastrophic Risk Institute Working Paper 16-1.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.