Deep Learning and the Sociology of Human-Level Artificial Intelligence

View the paper “Deep Learning and the Sociology of Human-Level Artificial Intelligence”

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all the more so because social science perspectives are underrepresented in the study of AI relative to perspectives from computer science, cognitive science, and philosophy. On the other hand, the book's treatment of the AI techniques themselves is weak. A better book for that is Rebooting AI by Gary Marcus and Ernest Davis; I would recommend it to readers outside the field of computer science who would like to understand the computer science of AI.

Artifictional Intelligence argues that deep learning, the currently dominant AI technique, cannot master human language because it is based on statistical pattern recognition over large datasets, whereas language often addresses novel situations for which data is scarce or absent. (Rebooting AI makes this argument as well.) Artifictional Intelligence shows this via some clever and entertaining experiments, such as using Google Translate to translate certain phrases from English to another language and then back into English. For example, “I field at short leg”, an expression from cricket, is more successfully translated to and from Afrikaans (“I field on short leg”) than Chinese (“I am in the short leg field”), which makes sense given the geography of cricket. (The translations listed here are from the time of writing this blog post; they change over time as the Google Translate algorithm is updated and processes more data.)
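The round-trip experiment described above can be sketched in a few lines of code. This is only an illustration, not the book's method: `translate` here is a stand-in for any machine-translation API call, the stub returns canned outputs, and the bracketed intermediate strings are placeholders rather than real translations. Only the English round-trip results are quoted from this post.

```python
def round_trip(phrase, pivot_lang, translate):
    """Translate an English phrase into pivot_lang and back,
    returning the round-tripped English for comparison."""
    forward = translate(phrase, source="en", target=pivot_lang)
    return translate(forward, source=pivot_lang, target="en")

# Stub translator with canned outputs. The bracketed intermediates are
# placeholders; a real experiment would call a translation service.
CANNED = {
    ("I field at short leg", "af"): "[Afrikaans rendering]",
    ("[Afrikaans rendering]", "en"): "I field on short leg",
    ("I field at short leg", "zh"): "[Chinese rendering]",
    ("[Chinese rendering]", "en"): "I am in the short leg field",
}

def stub_translate(text, source, target):
    # Fall back to the input unchanged if no canned translation exists.
    return CANNED.get((text, target), text)

print(round_trip("I field at short leg", "af", stub_translate))
print(round_trip("I field at short leg", "zh", stub_translate))
```

The point of the comparison is that a round trip through a language spoken where cricket is played preserves the idiom better than one through a language where it is not.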

The book further argues that for an AI to achieve human-level language ability, it would need to be embedded in human society. Only then would it master the nuances of human language. The book draws on Collins’s experience as a sociologist studying communities of gravitational wave physicists. Collins participated in imitation games in which he tried to pass himself off as a gravitational wave physicist, analogous to the well-known Turing test for AI. Collins attributes his own success at these games to his extensive time embedded in gravitational wave physics communities. This experience, as well as his understanding of the relevant sociology, prompts Collins to conclude that an AI would need to be similarly embedded in order to reach human-level ability in language.

One serious problem with the book is that it consistently treats human-level AI as a scientific endeavor without considering its ethical and societal implications. Collins wishes the field of AI were more like the field of gravitational physics in its narrow focus on big scientific breakthroughs. That is bad advice. Given its many current and potential future applications, AI has profound ethical and societal implications, and the field needs more attention to them, not less. AI experts need to participate in efforts to address these matters in order to ensure that these efforts are based on a sound understanding of the technology.

Academic citation:
Baum, Seth D., 2020. Deep learning and the sociology of human-level artificial intelligence. Metascience, vol. 29, no. 2 (July), pages 313-317, DOI 10.1007/s11016-020-00510-6.

Image credit: Wiley

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.