Artificial Intelligence

GCRI studies the human process of developing and governing AI, using risk analysis, social science, and the extensive knowledge we have gained from the study of other risks.

AI is already having a major effect on many aspects of society, and it may prove even more transformative in the long term. One concern is that future AI could become too capable for humans to control, in which case it may be impossible to prevent the AI from causing global catastrophe. This concern has prompted interest in ensuring that AI will be safe and beneficial to humanity. Accomplishing this requires technical design solutions as well as social and governance solutions to ensure that the right technology is used.

GCRI works mainly on the social and governance side, while ensuring that our work is sensitive to the details of the technology. AI has a long history in computer science, but it is a relatively new societal issue, so there is quite a lot of work that simply hasn’t been done yet. We are helping build the field by importing insights from other fields and applying them to the unique characteristics of AI. This work includes foundational risk models as well as insights from climate change, nuclear weapons, and more.

Featured Publications

2020 survey of artificial general intelligence projects for ethics, risk, and policy

McKenna Fitzgerald, Aaron Boddy, and Seth D. Baum, 2020. 2020 survey of artificial general intelligence projects for ethics, risk, and policy. Global Catastrophic Risk Institute Technical Report 20-1.

This technical report presents a detailed survey of 72 projects across 37 countries that are working to develop artificial general intelligence (AGI). The report updates and improves on GCRI’s 2017 survey of artificial general intelligence projects. The 2020 survey finds three major clusters of projects working on AGI: (1) corporate projects that are active on safety issues and that state that their goal is to benefit humanity; (2) academic projects that are not active on safety issues and that state that their goal is to advance knowledge; and (3) small private corporations that are not active on safety issues and that vary in their stated goals.

Moral consideration of nonhumans in the ethics of artificial intelligence

Andrea Owe and Seth D. Baum, 2021. Moral consideration of nonhumans in the ethics of artificial intelligence. AI & Ethics, vol. 1, no. 4 (November), pages 517-528, DOI 10.1007/s43681-021-00065-0.

This paper presents foundational analysis of the role of nonhumans within AI ethics. The paper outlines ethical theory about nonhumans, documents the state of attention to nonhumans in AI ethics, presents an argument for greater attention to nonhumans, and discusses implications for AI ethics.

A model of pathways to artificial superintelligence catastrophe for risk and decision analysis

Anthony M. Barrett and Seth D. Baum, 2017. A model of pathways to artificial superintelligence catastrophe for risk and decision analysis. Journal of Experimental & Theoretical Artificial Intelligence, vol. 29, no. 2, pages 397-414, DOI 10.1080/0952813X.2016.1186228.

This paper presents a detailed risk model for artificial superintelligence. The model covers the events and conditions that would need to occur for artificial superintelligence to be built and for its actions to be catastrophic.

Additional Publications

Andrea Owe and Seth D. Baum, forthcoming. The ethics of sustainability for artificial intelligence. Proceedings of AI for People: Towards Sustainable AI, CAIP’21.

Victor Galaz, Miguel A. Centeno, Peter W. Callahan, Amar Causevic, Thayer Patterson, Irina Brass, Seth Baum, Darryl Farber, Joern Fischer, David Garcia, Timon McPhearson, Daniel Jimenez, Brian King, Paul Larcey, and Karen Levy, 2021. Artificial intelligence, systemic risks, and sustainability. Technology in Society, vol. 67 (November), article 101741, DOI 10.1016/j.techsoc.2021.101741.

Peter Cihon, Moritz J. Kleinaltenkamp, Jonas Schuett, and Seth D. Baum, 2021. AI certification: Advancing ethical practice by reducing information asymmetries. IEEE Transactions on Technology and Society, vol. 2, no. 4 (December), pages 200-209, DOI 10.1109/TTS.2021.3077595.

Peter Cihon, Jonas Schuett, and Seth D. Baum, 2021. Corporate governance of artificial intelligence in the public interest. Information, vol. 12, article 275, DOI 10.3390/info12070275.

Robert de Neufville and Seth D. Baum, 2021. Collective action on artificial intelligence: A primer and review. Technology in Society, vol. 66 (August), article 101649, DOI 10.1016/j.techsoc.2021.101649.

Seth D. Baum, 2020. Artificial interdisciplinarity: Artificial intelligence for research on complex societal problems. Philosophy & Technology, DOI 10.1007/s13347-020-00416-5.

Seth D. Baum, 2020. Medium-term artificial intelligence and society. Information, vol. 11, no. 6, article 290, DOI 10.3390/info11060290.

Seth D. Baum, Robert de Neufville, Anthony M. Barrett, and Gary Ackerman, forthcoming. Lessons for artificial intelligence from other global risks. In Maurizio Tinnirello (Editor), The Global Politics of Artificial Intelligence. Boca Raton: CRC Press.

Seth D. Baum, 2018. Countering superintelligence misinformation. Information, vol. 9, no. 10 (September), article 244, DOI 10.3390/info9100244.

Seth D. Baum, 2018. Superintelligence skepticism as a political tool. Information, vol. 9, no. 9 (August), article 209, DOI 10.3390/info9090209.

Seth D. Baum, 2018. Social choice ethics in artificial intelligence. AI & Society, DOI 10.1007/s00146-017-0760-1.

Seth D. Baum, 2018. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI & Society, vol. 33, no. 4 (November), DOI 10.1007/s00146-017-0734-3.

Trevor N. White and Seth D. Baum, 2017. Liability law for present and future robotics technology. In Patrick Lin, Keith Abney, and Ryan Jenkins (editors), Robot Ethics 2.0, Oxford: Oxford University Press, pages 66-79.

Seth D. Baum, 2017. A survey of artificial general intelligence projects for ethics, risk, and policy. Global Catastrophic Risk Institute Working Paper 17-1.

Seth D. Baum, Anthony M. Barrett, and Roman V. Yampolskiy, 2017. Modeling and interpreting expert disagreement about artificial superintelligence. Informatica, vol. 41, no. 7 (December), pages 419-428.

Anthony M. Barrett and Seth D. Baum, 2017. Risk analysis and risk management for the artificial superintelligence research and development process. In Victor Callaghan, James Miller, Roman Yampolskiy, and Stuart Armstrong (editors), The Technological Singularity: Managing the Journey. Berlin: Springer, pages 127-140.