AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

View the paper “AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries”

Certification is widely used to convey that an entity has met a performance standard. It includes everything from the certificates people receive for completing a university degree to certificates for energy efficiency in consumer appliances and for quality management in organizations. As AI technology becomes increasingly impactful across society, certification can play a role in improving AI governance. This paper presents an overview of AI certification, applying insights from prior …

Read More »

GCRI Receives $200,000 for 2021 Work on AI

I am delighted to announce that GCRI has received a new $200,000 donation from Gordon Irlam to fund work on AI in 2021. Irlam had previously made donations funding AI project work conducted in 2019 and 2020.

Irlam explains in his own words why he chose to support our work:

“It isn’t enough that we research technical AGI alignment. Any such technical AGI alignment scheme must then be implemented. This is the domain of AGI policy. GCRI is one of the leading U.S. organizations working on AGI …

Read More »

In Memory of John Garrick

Late last year, the field of risk analysis lost a pioneer and longtime leader, B. John Garrick. Garrick helped develop the field, first in the nuclear power industry and later across a wide range of other domains, including global catastrophic risk. He was also a colleague and friend of GCRI who contributed to our work as one of our senior advisors. He will be dearly missed by many, including all of us at GCRI.

As histories of risk analysis document (e.g. this and this), …

Read More »

GCRI Receives $209,000 for General Support

I am delighted to announce that GCRI has received $209,000 in a new grant from Jaan Tallinn via the Survival and Flourishing Fund. The grant is for general support for GCRI. We at GCRI are grateful for this donation. We look forward to using it to advance our mission of developing the best ways to confront humanity’s gravest threats.

Read More »

GCRI Statement on the January 6 US Capitol Insurrection

We at the Global Catastrophic Risk Institute were appalled and disgusted to watch as right-wing domestic violent extremists stormed the US Capitol on January 6 to threaten Congress and disrupt certification of the Electoral College vote [1].

Though shocking in its own right, the insurrection fits a broader pattern. Empirical research shows that right-wing domestic violent extremism has been the main source of terrorism in the United States since the attacks of September 11, 2001. We unequivocally oppose all forms of terrorism and political violence. We hope that …

Read More »

2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy

View the paper “2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”

In 2017, GCRI published the first-ever survey of artificial general intelligence (AGI) research and development (R&D) projects for ethics, risk, and policy. This paper updates the 2017 survey. The 2020 survey features improved methodology, enabling it to find more projects than the 2017 survey and characterize them more precisely. The 2020 survey also evaluates how the landscape of AGI R&D projects has changed from 2017 to 2020.

AGI is AI that can …

Read More »

2020 Annual Report

2020 has been a challenging year for GCRI, as it has been for so many other organizations. The pandemic we are currently living through is, by some definitions, a global catastrophe. COVID-19 has already killed more than a million people worldwide, and has disrupted the work and lives of many others. At the same time, political turmoil in the US and around the world has demanded our attention and created both new risks and new opportunities.

Fortunately, GCRI is relatively well-positioned to operate under these conditions. …

Read More »

Call For Papers: Governance of Artificial Intelligence

GCRI Executive Director Seth Baum is editor of a new special issue of the journal Information on Governance of Artificial Intelligence. The special issue welcomes manuscripts on all aspects of the governance of AI. Details below.

Note that Information is an open access journal with an article processing charge. GCRI is able to cover the article processing charge for a limited number of submissions. Interested authors should contact Baum directly about this.

Artificial intelligence (AI) technology is playing an increasingly important role in human affairs and in …

Read More »

Accounting for Violent Conflict Risk in Planetary Defense Decisions

View the paper “Accounting For Violent Conflict Risk In Planetary Defense Decisions”

Planetary defense is the defense of planet Earth against collisions with near-Earth objects (NEOs), which include asteroids, comets, and meteoroids. A central objective of planetary defense is to reduce risks to Earth and its inhabitants. Whereas planetary defense is mainly focused on risks from NEOs, this paper argues that planetary defense decisions should also account for other risks, especially risks from violent conflict. The paper is based on a talk I gave at the 2019 Planetary …

Read More »

August Newsletter: Quantifying Probability

Dear friends,

One of the great features of being part of a wider community of scholars and professionals working on global catastrophic risk is the chance to learn from and build on work that other groups are doing. This month, we announce a paper of mine that builds on an excellent recent paper by Simon Beard, Thomas Rowe, and James Fox. The Beard et al. paper surveys the methods used to quantify the probability of existential catastrophe (which is roughly synonymous with global catastrophe).

My paper comments …

Read More »