Corporate Governance of Artificial Intelligence in the Public Interest


OCTOBER 28, 2021: This post has been corrected to fix an error.

Private industry is at the forefront of AI technology. About half of artificial general intelligence (AGI) R&D projects, including some of the largest ones, are based in corporations, which is far more than in any other institution type. Given the importance of AI technology, including for global catastrophic risk, it is essential to ensure that corporations develop and use AI in appropriate ways. This paper surveys opportunities to improve corporate governance of AI in ways that advance the public interest. It will be of use to researchers looking for an overview of corporate governance at leading AI companies, levers of influence in corporate AI development, and opportunities to improve corporate governance with an eye towards long-term AI development.

The opportunities to improve AI corporate governance are diverse. The paper surveys opportunities for nine different types of actors:

  • Management can establish policies, translate policies into practice, and create structures such as oversight committees.
  • Workers can directly affect the design and use of AI systems, and can have indirect effects by influencing management.
  • Investors can voice concerns to management, vote on shareholder resolutions, replace a corporation’s board of directors, sell off their investments to signal disapproval, and file lawsuits against the corporation.
  • Corporate partners can use their business-to-business market power and relationships to influence companies, while corporate competitors can push each other in pursuit of market share and reputation.
  • Industry consortia can identify and promote best practices, formalize best practices as standards, and pool resources to advance industry interests, such as by lobbying governments.
  • Nonprofit organizations can conduct research, advocate for change, organize coalitions, and raise awareness.
  • The public can choose which corporate AI products and services to use, and can support specific public policies on AI.
  • The media can research, document, analyze, and generate attention to corporate governance activities and related matters.
  • Governments can create binding or market-based incentives, procure AI products, support more informal “soft law” standards, provide information about corporate practices, and support public education.

In many cases, the best results will accrue when multiple types of actors work together. The paper shows this via extended discussion of three running examples. First, workers and the media collaborated to pressure Google’s management to withdraw from Project Maven, a drone video classification project of the US Department of Defense. Workers initially leaked information about Maven to the media, and then signed an open letter against Maven following media reports. Second, nonprofit research and advocacy on law enforcement use of facial recognition technology fueled worker and investor activism and public pressure (especially the 2020 protests against racism and police brutality) that ultimately pushed multiple competing AI corporations to change their practices. (See also the GCRI Statement on Racism.) Third, workers, management, and industry consortia have interacted to develop and promote best practices concerning the publication of potentially harmful research.

In summary, there are many ways in which different actors can contribute constructively to AI corporate governance to advance the public interest. This is not to say that all opportunities are equally important. To the contrary, some are clearly more important than others. Management in particular can often be especially influential. That said, the paper does not attempt to evaluate the effectiveness of specific intervention opportunities; that could be a direction for future research. The paper’s value lies in mapping out the terrain of opportunities.

This paper has also been summarized in the AI Ethics Brief #68 of the Montreal AI Ethics Institute.

This paper extends GCRI’s research on AI. It builds on GCRI’s prior work on AI governance, especially the 2017 and 2020 surveys of AGI R&D projects and papers on superintelligence skepticism and misinformation.

CORRECTION: This paper as published contains—and the original version of this post also contained—the incorrect claim that corporate AI researchers publish more than their academic counterparts. In fact, academic AI researchers publish substantially more than corporate AI researchers.

Academic citation:
Cihon, Peter, Jonas Schuett, and Seth D. Baum, 2021. Corporate governance of artificial intelligence in the public interest. Information, vol. 12, article 275, DOI 10.3390/info12070275.


This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.