Collective Action on Artificial Intelligence: A Primer and Review

View the paper “Collective Action on Artificial Intelligence: A Primer and Review”

The development of safe and socially beneficial artificial intelligence (AI) will require collective action: outcomes will depend on the actions that many different people take. In recent years, a sizable but disparate literature has looked at the challenges posed by collective action on AI, but this literature is generally not well grounded in the broader social science literature on collective action. This paper advances the study of collective action on AI by providing a primer on the topic and a review of existing literature. It is intended to get an interdisciplinary readership up to speed on the topic, including social scientists, computer scientists, policy analysts, government officials, and other interested people.

The primer describes the theory of collective action and relates it to different types of AI collective action situations. A primary distinction is between situations in which individual and collective interests diverge, as in the prisoner's dilemma or adversarial AI competition, and situations in which they converge, as in coordination problems such as establishing common platforms for AI. In general, collective action is easier to achieve when interests converge; when interests diverge, individual actors' pursuit of their own self-interest can lead to outcomes that are worse for the group as a whole. The primer also explains how AI collective action situations depend both on whether the goods involved are excludable or rivalrous and on whether they hinge on the action of a single actor or on some combination of actors.
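The divergence case can be made concrete with a toy prisoner's dilemma. The sketch below frames a two-actor AI race in those terms; the payoff numbers are hypothetical illustrations (chosen only to satisfy the standard prisoner's dilemma ordering), not values from the paper.

```python
# Illustrative payoff matrix for a two-actor "AI race" framed as a
# prisoner's dilemma. All payoff numbers are hypothetical, chosen only
# to satisfy the standard ordering T > R > P > S.
# Actions: "cooperate" = invest in safety; "defect" = race ahead.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # R: mutual safety investment
    ("cooperate", "defect"):    (0, 5),  # S, T: the racer gains an edge
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # P: a risky race, worse for both
}

def best_response(opponent_action):
    """Return the action that maximizes one actor's own payoff,
    holding the other actor's action fixed."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defecting is a dominant strategy: it is the best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves both actors worse off than mutual cooperation.
assert sum(PAYOFFS[("defect", "defect")]) < sum(PAYOFFS[("cooperate", "cooperate")])
```

The point of the sketch is that each actor's individually rational choice (defect) produces a collective outcome inferior to the one both would reach by cooperating, which is exactly the structure the primer attributes to divergent-interest AI situations.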

One major focus of the AI collective action literature identified in this paper is potentially dangerous AI race scenarios. AI races are not necessarily dangerous, and might even hasten the arrival of socially beneficial forms of AI, but they can be dangerous if individual actors' interest in developing AI quickly diverges from the collective interest in ensuring that AI is safe and socially beneficial. The literature identified in this paper looks in particular at near-term races to develop military applications and at long-term races to develop advanced forms of AI such as artificial general intelligence and superintelligence. The two types of races are potentially related, since near-term races could affect the long-term development of AI.

Finally, the paper evaluates different types of potential solutions to collective action problems. The collective action literature identifies three major types of solution: government regulation, private markets, and community self-organizing. All three types of solution can advance AI collective action, but no single type is likely to address the entire range of AI collective action problems. Instead of looking for narrow, silver-bullet solutions, it may be better to pursue a mix of solutions that address AI collective action issues in different ways and at different scales. Governance regimes should also account for other factors that could affect collective action, such as the extent to which AI developers are transparent about their technology.

AI collective action issues are increasingly pressing. Collective action will be necessary to ensure that AI serves the public interest rather than just the narrow private interests of those who develop it. Collective action will also be necessary to ensure that AI is developed with adequate safety measures and risk management protocols. Further work could provide more detailed analysis and support practical progress on AI collective action issues.

This paper has also been summarized in the AI Ethics Brief #71 of the Montreal AI Ethics Institute.

This paper extends GCRI’s interdisciplinary research on AI. It builds on GCRI’s prior work on the governance of AI, particularly the papers “On the promotion of safe and socially beneficial artificial intelligence” and “Lessons for artificial intelligence from other global risks”.

Academic citation:
de Neufville, Robert and Seth D. Baum, 2021. Collective action on artificial intelligence: A primer and review. Technology in Society, vol. 66 (August), article 101649, DOI 10.1016/j.techsoc.2021.101649.

Image credit: Volodymyr Goinyk

This post was written by Robert de Neufville, Director of Communications of the Global Catastrophic Risk Institute.