Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk

GCRI runs a recurring Advising and Collaboration Program in which we connect with people at all career points who are interested in getting more involved in global catastrophic risk. Through that program, I have had the privilege of speaking with many people to share my experience in the field and to help them find opportunities to advance their careers in it. It has been an enriching experience, and I thank all of our program participants.

Many of the people in our program are students and early-career professionals. …

Artificial Intelligence, Systemic Risks, and Sustainability

View the paper “Artificial Intelligence, Systemic Risks, and Sustainability”

Artificial intelligence is increasingly a factor in a range of global catastrophic risks. This paper studies the role of AI in two closely related domains. The first is systemic risks, meaning risks involving interconnected networks, such as supply chains and critical infrastructure systems. The second is environmental sustainability, meaning risks related to the natural environment’s ability to sustain human civilization on an ongoing basis. The paper is led by Victor Galaz, Deputy Director of the Stockholm Resilience Centre, …

Corporate Governance of Artificial Intelligence in the Public Interest

View the paper “Corporate Governance of Artificial Intelligence in the Public Interest”

OCTOBER 28, 2021: This post has been corrected to fix an error.

Private industry is at the forefront of AI technology. About half of artificial general intelligence (AGI) R&D projects, including some of the largest ones, are based in corporations, which is far more than in any other institution type. Given the importance of AI technology, including for global catastrophic risk, it is essential to ensure that corporations develop and use AI in appropriate ways. …

June Newsletter: AI Ethics & Governance

Dear friends,

This month GCRI announces two new research papers. First, “Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence”, led by GCRI Research Associate Andrea Owe, addresses how nonhumans are currently treated across the field of AI ethics. It finds limited existing attention and calls for more. The paper speaks to major themes in AI ethics, such as the project of aligning AI to human (or nonhuman) values. Given the profound current and potential future impacts of AI technology, how nonhumans …

Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence

View the paper “Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence”

In the ethics of artificial intelligence, a major theme is the challenge of aligning AI to human values. This raises the question of the role of nonhumans. Indeed, AI can profoundly affect the nonhuman world, including nonhuman animals, the natural environment, and the AI itself. Given that large parts of the nonhuman world are already under immense threat from human affairs, there is reason to fear potentially catastrophic consequences should AI R&D fail …

AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

View the paper “AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries”

Certification is widely used to convey that an entity has met some sort of performance standard. It includes everything from the certificate that people receive for completing a university degree to certificates for energy efficiency in consumer appliances and quality management in organizations. As AI technology becomes increasingly impactful across society, certification can play a role in improving AI governance. This paper presents an overview of AI certification, applying insights from prior …

GCRI Receives $200,000 for 2021 Work on AI

I am delighted to announce that GCRI has received a new $200,000 donation from Gordon Irlam to fund work on AI in 2021. Irlam previously made donations funding AI project work conducted in 2019 and 2020.

Irlam explains in his own words why he chose to support our work:

“It isn’t enough that we research technical AGI alignment. Any such technical AGI alignment scheme must then be implemented. This is the domain of AGI policy. GCRI is one of the leading U.S. organizations working on AGI …

In Memory of John Garrick

Late last year, the field of risk analysis lost a pioneer and longtime leader, B. John Garrick. Garrick helped develop the field, first in the nuclear power industry and later across a wide range of other domains, including global catastrophic risk. He was also a colleague and friend of GCRI, contributing to our work as one of our senior advisors. He will be dearly missed by many, including all of us at GCRI.

As histories of risk analysis document, …

GCRI Receives $209,000 for General Support

I am delighted to announce that GCRI has received $209,000 in a new grant from Jaan Tallinn via the Survival and Flourishing Fund. The grant provides general support for GCRI. We at GCRI are grateful for this grant, and we look forward to using it to advance our mission of developing the best ways to confront humanity’s gravest threats.

GCRI Statement on the January 6 US Capitol Insurrection

We at the Global Catastrophic Risk Institute were appalled and disgusted to watch as right-wing domestic violent extremists stormed the US Capitol on January 6 to threaten Congress and disrupt certification of the Electoral College vote [1].

Though shocking in its own right, the insurrection fits a broader pattern. Empirical research shows that right-wing domestic violent extremism has been the main source of terrorism in the United States since the attacks of September 11, 2001. We unequivocally oppose all forms of terrorism and political violence. We hope that …
