Emerging Technologies Project

PROJECT LEAD: Tony Barrett

Emerging technologies are one of the most important categories of global catastrophic risk (GCR). They are also difficult to study: there is deep uncertainty about which technologies will exist in the future and what their impacts will be. GCRI is working to reduce this uncertainty and to develop effective ways of reducing the risks. Our work focuses on artificial intelligence and geoengineering, with some attention to biotechnology. Much of our work also applies to emerging technologies in general.

Risk Modeling

At the heart of GCRI’s work on emerging technologies is an effort, led by GCRI Director of Research Tony Barrett, to model the pathways through which people can develop dangerous new technologies. Modeling the development of emerging technologies allows us to characterize and quantify both the risk they pose and the effectiveness of interventions to lower that risk. We use risk modeling methods such as fault trees and event trees, which we have previously used to model nuclear war risk [1]. The project began as “SEER: System for Evaluation of Emerging-Technology Risks”, funded in part by the United States Department of Homeland Security through the Center for Risk and Economic Analysis of Terrorism Events (CREATE) at the University of Southern California. The DHS-CREATE funding has supported modeling of synthetic biology and cybersecurity risks. GCRI is also modeling artificial intelligence risk with support from the Future of Life Institute, leading to the ASI-PATH model [2]. The emerging technology risk modeling is an important part of our flagship Integrated Assessment project.
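To give a concrete, deliberately simplified flavor of how fault trees quantify risk, the sketch below computes a top-event probability from basic events. The event names, probabilities, and independence assumptions are illustrative inventions, not figures from SEER or ASI-PATH.

    # Toy fault tree sketch. All event names and probabilities are
    # hypothetical illustrations, not values from GCRI's models.
    from math import prod

    def and_gate(*probs):
        # Output event occurs only if all inputs occur (independence assumed).
        return prod(probs)

    def or_gate(*probs):
        # Output event occurs if at least one input occurs (independence assumed).
        return 1 - prod(1 - p for p in probs)

    # Hypothetical basic events on a pathway to a dangerous technology.
    p_capability = or_gate(0.10, 0.05)  # capability developed by group A or group B
    p_safeguards_fail = 0.20            # safety measures fail to contain it
    p_catastrophe = and_gate(p_capability, p_safeguards_fail)

    print(f"P(catastrophe) = {p_catastrophe:.4f}")  # 0.0290

In a real analysis, each basic event would itself be decomposed further, and the probabilities would carry uncertainty ranges rather than point values.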

Ethics & Policy

GCRI has done extensive research on the ethical and policy issues surrounding emerging technologies. GCRI Deputy Director Grant Wilson has published innovative work on the use of existing international treaties to regulate emerging technologies, as well as on the design of new treaties [3]. Wilson and GCRI Executive Director Seth Baum also presented a paper on ethical issues in dual-use biotechnologies at the International Conference on Ethical Issues in Biomedical Engineering [4]. In addition, Baum has written about what he calls “the great downside dilemma” for emerging technologies such as artificial intelligence and geoengineering: whether to use a technology that promises great benefits for humanity but also has the potential to cause a catastrophe [5]. The downside dilemma work was developed for a symposium on Emerging Technologies and the Future of Humanity hosted by the Royal Swedish Academy of Sciences.
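The logical structure of the dilemma can be made concrete with a toy expected-value comparison. All of the numbers below are invented for illustration; the published analysis stresses that these quantities are deeply uncertain, which is precisely what makes the dilemma hard.

    # Toy expected-value framing of the great downside dilemma.
    # All numbers are hypothetical illustrations.
    p_catastrophe = 0.01          # chance deployment causes a global catastrophe
    v_benefit = 100.0             # value if the technology works as hoped
    v_catastrophe = -1_000_000.0  # value if it causes a global catastrophe
    v_forgo = 0.0                 # value of not deploying the technology

    ev_deploy = (1 - p_catastrophe) * v_benefit + p_catastrophe * v_catastrophe
    print(ev_deploy)            # -9901.0
    print(ev_deploy > v_forgo)  # False: under these numbers, forgo deployment

Even a small catastrophe probability can dominate the calculation when the downside is large enough, which is why technologies promising great benefits can still pose a serious dilemma.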

Artificial Intelligence

Artificial intelligence is increasingly ubiquitous in the world today, from search engines to self-driving cars. This kind of AI does not by itself pose a significant GCR. But a growing community of people worries that some future forms of AI could cause a major catastrophe. Specifically, if AI were to become “superintelligent”, outsmarting humanity across a wide range of domains, then humanity could lose control of the planet to the AI [6]. GCRI’s main contributions to the study of AI risk are the ASI-PATH risk modeling and the ethics and policy research, both described above. Before he co-founded GCRI, Seth Baum published an early study of expert projections of AI [7]. Baum has written several other discussions of AI risk, including reviews of the book Our Final Invention and the film Transcendence, as well as an analysis of how to promote AI that is safe and beneficial for society [8].

Geoengineering

Geoengineering is the intentional manipulation of the global environment. It is most commonly discussed today as part of proposals to fight global warming: geoengineering technologies could lower global surface temperatures by pulling greenhouse gases from the atmosphere or by blocking incoming sunlight. However, these technologies pose major risks and raise difficult moral dilemmas. Much of GCRI’s geoengineering work has been led by Associate Jacob Haqq-Misra, a climate modeler, astrobiologist, and ethicist with the Blue Marble Space Institute of Science. Haqq-Misra has written several papers on the ethics of geoengineering and the related technology of terraforming, which roughly means geoengineering on other planets [9]. Haqq-Misra also published a paper with GCRI colleagues Seth Baum and Tim Maher on the risk of a geoengineering double catastrophe, in which an initial catastrophe such as a pandemic or war halts ongoing geoengineering, causing rapid rebound warming. Such a double catastrophe could potentially cause human extinction [10]. Finally, Baum has also published a review of the geoengineering-themed film Snowpiercer [11].
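As a rough illustration of how the double catastrophe scenario can be decomposed, the event-tree-style sketch below chains hypothetical conditional probabilities. None of the numbers come from the paper; they are placeholders for the kinds of quantities such an analysis would estimate.

    # Toy event-tree sketch of the geoengineering double catastrophe.
    # All probabilities are hypothetical, not estimates from [10].
    p_initial = 0.05             # an initial catastrophe (e.g., pandemic or war)
    p_halt_given_initial = 0.50  # geoengineering halts, given the catastrophe
    p_severe_given_halt = 0.30   # rebound warming is severe, given the halt

    p_double = p_initial * p_halt_given_initial * p_severe_given_halt
    print(f"P(double catastrophe) = {p_double:.4f}")  # 0.0075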

References

[1] Anthony M. Barrett, Seth D. Baum, and Kelly R. Hostetler, 2013. Analyzing and reducing the risks of inadvertent nuclear war between the United States and Russia. Science and Global Security 21(2), 106-133.

[2] Anthony M. Barrett and Seth D. Baum, 2017. A model of pathways to artificial superintelligence catastrophe for risk and decision analysis. Journal of Experimental & Theoretical Artificial Intelligence 29(2), 397-414, DOI 10.1080/0952813X.2016.1186228.

[3] Grant S. Wilson, 2013. Minimizing global catastrophic and existential risks from emerging technologies through international law. Virginia Environmental Law Journal 31(2), 307-364. See also Grant Wilson, 2012. Emerging technologies: Should they be internationally regulated? Institute for Ethics and Emerging Technologies, 21 December. Seth Baum, 2013. Seven reasons for integrated emerging technologies governance. Institute for Ethics and Emerging Technologies, 23 January. Seth Baum and Grant Wilson, 2013. How to create an international treaty for emerging technologies. Institute for Ethics and Emerging Technologies, 21 February.

[4] Seth D. Baum and Grant S. Wilson, 2013. The ethics of global catastrophic risk from dual-use bioengineering. Ethics in Biology, Engineering and Medicine 4(1), 59-72.

[5] Seth D. Baum, 2014. The great downside dilemma for risky emerging technologies. Physica Scripta 89(12), article 128004, DOI 10.1088/0031-8949/89/12/128004.

[6] Amnon H. Eden, Johnny H. Soraker, James H. Moor, and Eric Steinhart (eds.), 2013. Singularity Hypotheses: A Scientific and Philosophical Assessment. Berlin: Springer. Nick Bostrom, 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

[7] Seth D. Baum, Ben Goertzel, and Ted G. Goertzel, 2011. How long until human-level AI? Results from an expert assessment. Technological Forecasting & Social Change 78(1), 185-195.

[8] Seth Baum, 2013. Our Final Invention: Is AI the defining issue for humanity? Scientific American Blogs, 11 October. Seth D. Baum, 2014. Film review: Transcendence. Journal of Evolution and Technology 24(2), 79-84. Seth D. Baum. On the promotion of safe and socially beneficial artificial intelligence. AI & Society, forthcoming, DOI 10.1007/s00146-016-0677-0.

[9] Jacob Haqq-Misra, 2012. An ecological compass for planetary engineering. Astrobiology 12(10), 985-997. Jacob Haqq-Misra, 2012. It’s a question of ethics: Should we study geoengineering? Earth 57(10), 8.

[10] Seth D. Baum, Timothy M. Maher, Jr., and Jacob Haqq-Misra, 2013. Double catastrophe: Intermittent stratospheric geoengineering induced by societal collapse. Environment, Systems and Decisions 33(1), 168-180. Seth Baum, 2013. When global catastrophes collide: The climate engineering double catastrophe. Scientific American Blogs, 6 February.

[11] Seth D. Baum, 2014. Film review: Snowpiercer. Journal of Sustainability Education 7, December issue (online).