June Newsletter: Racism and Global Catastrophic Risk

Dear friends,

The ongoing wave of anti-racism protests is prompting long-overdue conversation and action to establish a more equitable society in the US and worldwide. We at GCRI are saddened by the tragic deaths that have sparked these protests and hopeful that something good can come of this moment.

For our part, we have contributed to the conversation by publishing a statement on racism. To summarize: We see moral and practical links between the problems of racism and global catastrophic risk. They are both large-scale issues whose …

Read More »

Deep Learning and the Sociology of Human-Level Artificial Intelligence

View the paper “Deep Learning and the Sociology of Human-Level Artificial Intelligence”

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all …

Read More »

Medium-Term Artificial Intelligence and Society

View the paper “Medium-Term Artificial Intelligence and Society”

Discussion of artificial intelligence tends to focus on either near-term or long-term AI. That includes some contentious debate between “presentists” who favor attention to the near term and “futurists” who favor attention to the long term. Largely absent from the conversation is any attention to the medium term. This paper provides dedicated discussion of medium-term AI and its accompanying societal issues. It focuses on how medium-term AI can be defined and how it relates to the presentist-futurist debate. It builds on …

Read More »

GCRI Receives $250,000 for 2020 Work on AI

I am delighted to announce that GCRI has received a new $250,000 donation from Gordon Irlam that will fund GCRI’s work on AI in 2020. Irlam made a donation of the same amount a year ago, also for AI work.

Irlam explains why he made the donation as follows:

“Technical AGI safety is an emerging field. Whether its recommendations to make AGI safe are heeded or ignored carries tremendous weight for the future. Having these recommendations heeded is fundamentally a social, political, and …

Read More »

November Newsletter: A Year of Growth

Dear friends,

2019 has been a year of growth for GCRI. One year ago, we described a turning point for the organization and announced our goal of scaling up to increase our impact on global catastrophic risk. Over the past year, we have made considerable progress toward this goal. We have expanded our team, published work in top journals such as Science and Risk Analysis, and hosted a tremendously successful advising and collaboration program in support of talented people around the world. All this and more …

Read More »

Lessons for Artificial Intelligence from Other Global Risks

View the paper “Lessons for Artificial Intelligence from Other Global Risks”

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can potentially learn a lot from the study of similar, better-studied risks. GCRI’s new paper applies lessons from four other risks to the study of AI risk: biotechnology, nuclear weapons, global warming, and asteroids. The paper is co-authored by GCRI’s Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior …

Read More »

July Newsletter: Asteroid-Nuclear Risk Analysis

Dear friends,

One reason it is important to analyze global catastrophic risks quantitatively is that some decisions involve tradeoffs between them. An action may reduce one risk while increasing another. It is important to know whether the decrease in one risk is large enough to offset the increase in the other.
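As a rough illustration of this tradeoff arithmetic, here is a minimal sketch in Python. The numbers are entirely hypothetical placeholders, not figures from GCRI's analysis; the point is only that the comparison reduces to the net change in total expected harm across both risks.

```python
# Minimal sketch of a two-risk tradeoff, with hypothetical numbers
# (not GCRI estimates). Expected harm = probability * harm if it occurs.

def expected_harm(probability: float, harm: float) -> float:
    return probability * harm

# Hypothetical annual probabilities before and after taking the action.
asteroid_before, asteroid_after = 1e-6, 1e-7   # the action reduces asteroid risk
nuclear_before, nuclear_after = 1e-5, 1.2e-5   # but slightly increases nuclear risk
HARM = 1.0  # treat both catastrophes as equally harmful, for simplicity

net_change = (
    expected_harm(asteroid_after, HARM) - expected_harm(asteroid_before, HARM)
    + expected_harm(nuclear_after, HARM) - expected_harm(nuclear_before, HARM)
)

# A negative net change means the action reduces total risk on balance.
print(f"Net change in expected harm: {net_change:+.2e}")
```

Under these made-up numbers, the increase in nuclear risk outweighs the asteroid-risk reduction, so the action would not be worthwhile; the paper's actual analysis is, of course, far more detailed.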

This month, we announce a new paper that presents a detailed analysis of one such decision: the use of nuclear explosives to deflect Earthbound asteroids. Nuclear deflection is an option under active consideration by …

Read More »

February Newsletter: Nuclear War Risk Analysis

Dear friends,

To reduce the risk of global catastrophe most effectively, it is often essential to have a quantitative understanding of the risk. This is especially true when we face decisions that involve tradeoffs between different risks or that require prioritizing among multiple risks. For this reason, GCRI has long been at the forefront of the risk and decision analysis of global catastrophic risk. This month, we announce a new paper, “Reflections on the Risk Analysis of Nuclear War”. This paper summarizes the …

Read More »

GCRI Receives $250,000 Donation For AI Research And Outreach

I am delighted to announce that GCRI has received a $250,000 donation from Gordon Irlam, to be used for GCRI’s research and outreach on artificial intelligence. The donation will be used mainly to fund Robert de Neufville and me during 2019.

The donation is a major first step toward GCRI’s goal of raising $1.5 million to enable the organization to start scaling up. Our next fundraising priority is to bring GCRI Director of Research Tony Barrett on full-time, and possibly also one other senior hire whom …

Read More »

December Newsletter: A Turning Point For GCRI

Dear friends,

We believe that GCRI may now be at a turning point. Having established ourselves as leaders in the field of global catastrophic risk, we now seek to scale up the organization so that we can do correspondingly more to address global catastrophic risk. To that end, we have published detailed records of our accomplishments, plans for future work, and financial needs. An overview of this information is contained in our new blog post, Summary of GCRI’s 2018-2019 Accomplishments, Plans, and Fundraising.

To begin scaling up, we …

Read More »