Corporate Governance of Artificial Intelligence in the Public Interest

OCTOBER 28, 2021: This post has been corrected to fix an error.

Private industry is at the forefront of AI technology. About half of artificial general intelligence (AGI) R&D projects, including some of the largest ones, are based in corporations, which is far more than in any other institution type. Given the importance of AI technology, including for global catastrophic risk, it is essential to ensure that corporations develop and use AI in appropriate ways. …

June Newsletter: AI Ethics & Governance

Dear friends,

This month GCRI announces two new research papers. First, Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence, led by GCRI Research Associate Andrea Owe, examines how nonhumans are treated across the field of AI ethics. The paper finds limited existing attention and calls for more. The paper speaks to major themes in AI ethics, such as the project of aligning AI to human (or nonhuman) values. Given the profound current and potential future impacts of AI technology, how nonhumans …

Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence

In the ethics of artificial intelligence, a major theme is the challenge of aligning AI to human values. This raises the question of the role of nonhumans. Indeed, AI can profoundly affect the nonhuman world, including nonhuman animals, the natural environment, and the AI itself. Given that large parts of the nonhuman world are already under immense threats from human affairs, there is reason to fear potentially catastrophic consequences should AI R&D fail …

AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

Certification is widely used to convey that an entity has met some sort of performance standard. It includes everything from the certificate that people receive for completing a university degree to certificates for energy efficiency in consumer appliances and quality management in organizations. As AI technology becomes increasingly impactful across society, there can be a role for certification to improve AI governance. This paper presents an overview of AI certification, applying insights from prior …

GCRI Receives $200,000 for 2021 Work on AI

I am delighted to announce that GCRI has received a new $200,000 donation from Gordon Irlam to fund work on AI in 2021. Irlam had previously made donations funding AI project work conducted in 2019 and 2020.

Irlam explains in his own words why he chose to support our work:

“It isn’t enough that we research technical AGI alignment. Any such technical AGI alignment scheme must then be implemented. This is the domain of AGI policy. GCRI is one of the leading U.S. organizations working on AGI …

In Memory of John Garrick

Late last year, the field of risk analysis lost a pioneer and longtime leader, B. John Garrick. Garrick helped develop the field, first in the nuclear power industry and later across a wide range of other domains, including global catastrophic risk. He was also a colleague and friend of GCRI who contributed to our work as one of our senior advisors. He will be dearly missed by many, including all of us at GCRI.

As histories of risk analysis document (e.g. this and this), …

GCRI Receives $209,000 for General Support

I am delighted to announce that GCRI has received $209,000 in a new grant from Jaan Tallinn via the Survival and Flourishing Fund. The grant is for general support for GCRI. We at GCRI are grateful for this donation. We look forward to using it to advance our mission of developing the best ways to confront humanity’s gravest threats.

GCRI Statement on the January 6 US Capitol Insurrection

We at the Global Catastrophic Risk Institute were appalled and disgusted to watch as right-wing domestic violent extremists stormed the US Capitol on January 6 to threaten Congress and disrupt certification of the Electoral College vote [1].

Though shocking in its own right, the insurrection fits a broader pattern. Empirical research shows that right-wing domestic violent extremism has been the main source of terrorism in the United States since the attacks of September 11, 2001. We unequivocally oppose all forms of terrorism and political violence. We hope that …

2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy

In 2017, GCRI published the first-ever survey of artificial general intelligence (AGI) research and development (R&D) projects for ethics, risk, and policy. This paper updates the 2017 survey. The 2020 survey features improved methodology, enabling it to find more projects than the 2017 survey and characterize them more precisely. The 2020 survey also evaluates how the landscape of AGI R&D projects has changed from 2017 to 2020.

AGI is AI that can …

2020 Annual Report

2020 has been a challenging year for GCRI, as it has been for so many other organizations. The pandemic we are currently living through is, by some definitions, a global catastrophe. COVID-19 has already killed more than a million people worldwide and has disrupted the work and lives of many others. At the same time, political turmoil in the US and around the world has demanded our attention and created both new risks and new opportunities.

Fortunately, GCRI is relatively well-positioned to operate under these conditions. …
