Miles Brundage Gives Online Lecture on Artificial General Intelligence

On Thursday 25 July, GCRI hosted an online lecture by Miles Brundage entitled ‘A Social Science Perspective on Global Catastrophic Risk Debates: The Case of Artificial General Intelligence’. (See the pre-lecture announcement.) Brundage is a PhD student in Arizona State University’s Human and Social Dimensions of Science and Technology program, where he is affiliated with the Consortium for Science, Policy, and Outcomes (CSPO). He also spent two years at the Advanced Research Projects Agency-Energy (ARPA-E), which funds promising early-stage energy technologies.

Brundage’s lecture described how the field of “science, technology, and society” (STS) helps us understand the impact of artificial intelligence (AI) and artificial general intelligence (AGI) (i.e., “strong AI,” meaning AI at least as intelligent as humans). Rather than viewing science as simply driving technology, Brundage looks at science, technology, and society as having an interdependent, dynamic relationship. For example, Brundage discussed the Collingridge dilemma: while a technology is easy to shape in its nascent form, we do not understand its effects on society until later, by which point the technology has become difficult to change due to path dependence. Path dependence is the tendency to follow past decisions even when they no longer make sense. An example is the QWERTY keyboard layout, which was originally designed to stop typebars from jamming in typewriters, a problem that does not exist with computers. But there is a sweet spot in midstream development when we can foresee social consequences and still influence the technology. Applied to AI, we should continuously monitor how AI will affect society and strive to create beneficial outcomes.

While AGI has the potential to benefit society immensely, it may also present a global catastrophic risk. For example, AGI could choose to oppose humans, or a country could use AGI for dangerous military purposes, which is why some research groups, such as the Machine Intelligence Research Institute (MIRI), work to create “friendly” AI. Brundage described several social science frameworks that apply to these aspects of AI. For example, “boundary work” is a sociological perspective on the boundaries, or demarcations, between different fields. Demarcating the boundary between “AGI” and “narrow AI” has social implications, since it may shape decisions about whether to address AI research broadly or AGI research in particular. “Visioneering,” the imagining and advancing of speculative technologies, also affects the interplay between science, technology, and society. For example, AI researchers can speak of future technologies in a manner that society perceives as either favorable or unsettling, which influences social acceptance and thus affects downstream technological development. Brundage also discussed “plausibility,” which focuses on technically feasible outcomes of technology regardless of their probability, providing a useful framework for envisioning scenarios and evaluating responses to them. Finally, he described current research on “responsible innovation,” which assesses how to derive positive social impacts from science and technology research.

One topic participants discussed was that AI researchers are often not the same people who create AI safeguards or work on policy issues related to AI. One participant observed that some AI researchers look at challenges posed in the next few years, while others look at challenges many decades ahead. Some AI researchers may not care about the broad implications of their work at all. Brundage suggested that one possible solution is for educators to integrate social and ethical considerations into their curricula, and he noted that researchers should reflect on what goals drive their research when making high-level decisions about research directions. Similarly, a paper by Seth Baum describes how to reform the National Science Foundation’s Broader Impacts Criterion to consider a wider range of ethical and social issues. CSPO, with which Brundage is affiliated, has published extensively on related topics. Finally, participants agreed that reducing risks from AI should be done with a combination of narrow interventions (e.g., affecting individual behavior) and broad interventions (e.g., developing international research standards).

Here is the full abstract of the talk:

Researchers at institutions such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI) have suggested that artificial general intelligence (AGI) may pose catastrophic risks to humanity. This talk will contextualize such concerns using theories and frameworks drawn from science and technology studies (STS) and other social science fields. In particular, I will seek to answer the question: what are the conceptual and practical tools available to a non-technical scholar, citizen, or policy-maker seeking to address global catastrophic risks from particular technologies? Using AGI as a case study, I will illustrate relevant concepts that could inform future work on global catastrophic risks, such as boundary work (the rhetorical techniques scientists use to demarcate their own work from the speculations of futurists and journalists and thereby cement their own credibility while distancing themselves from potential catastrophic consequences of their disciplines), visioneering (the articulation of and attempt to bring about speculative technological futures, such as those of Eric Drexler in the case of nanotechnology and Ray Kurzweil in the case of AGI), plausibility (a useful framework for assessing future outcomes from technology, as opposed to probability), and responsible innovation (a rapidly growing field of inquiry assessing the various ways in which the public and policy-makers can positively influence the social impacts of scientific and technological research).

The presentation was hosted online via Skype, with slides shown on Prezi. The audience included Daniel Dewey, a Research Fellow at the Oxford Future of Humanity Institute and a Research Associate at MIRI; Megan Worman, a Cyberlaw and Policy Consultant; and GCRI’s Tony Barrett, Seth Baum, Mark Fusco, and Grant Wilson.


2 Comments on "Miles Brundage Gives Online Lecture on Artificial General Intelligence"

  • Alex says

    This was worth reading for the concept of “Collingridge dilemma” alone. Thanks.

  • eldras says

Superintelligence is likely either to emerge from advancing technologies or to be specifically built.

If it emerges, it may be impossible to contain under the Collingridge dilemma.

If it is specifically built, which is increasingly possible, then unless it is safely contained it will be dominant over mankind and powerful enough to wipe it out.

    I urge people to lobby their government institutions to examine and prepare for this risk, by contacting named individuals in them, rather than general departments.

    eldras

    https://sites.google.com/site/hawkingprotocolofsafety/