Quantifying the Probability of Existential Catastrophe: A Reply to Beard et al.

View the paper “Quantifying the Probability of Existential Catastrophe: A Reply to Beard et al.”

A major challenge for work on global catastrophic risk and existential risk is that the risks are very difficult to quantify. Global catastrophes rarely occur, and the most severe ones have never happened before, so the risk cannot be quantified using past event data. An excellent recent article by Simon Beard, Thomas Rowe, and James Fox surveys ten methods used in prior research on quantifying the probability of existential catastrophe. My new paper expands on the …

Read More »

July Newsletter: Artificial Interdisciplinarity

Dear friends, 

A major impediment to addressing global catastrophic risk is the cognitive challenge posed by the complex, interdisciplinary nature of the risks. Identifying practical, effective solutions for reducing the risk requires a command of a wide range of subjects. That is not easy for anyone, including those of us who work on it full time. 

This month, we announce a new paper on the use of artificial intelligence to ease the cognitive burdens of interdisciplinary research and better address complex societal problems like global catastrophic risk. …

Read More »

Summary of January-July 2020 Advising and Collaboration Program

In January, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of last year’s successful advising and collaboration program. We anticipate conducting a second round of the program later in 2020. The 2020 programs are made possible by generous support from Gordon Irlam.

This first 2020 program focused on a number of AI projects that are also supported by Irlam. Program participants were mostly people interested in AI risk, ethics, and policy. …

Read More »

Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems

View the paper “Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems”

One major challenge in making progress on global catastrophic risk is its interdisciplinarity. Understanding how best to address the risk requires input from risk analysis, public policy, social science, ethics, and a variety of other fields pertaining to specific risks, such as astronomy for asteroid risk and computer science for artificial intelligence (AI) risk. Working across all these disparate fields is a very difficult challenge for human minds. This paper explores the use …

Read More »

GCRI Receives $140,000 for General Support

I am delighted to announce that GCRI has received $140,000 in new grants via the Survival and Flourishing Fund. $90,000 is from Jaan Tallinn and $50,000 is from Jed McCaleb. Both grants are for general support for GCRI. We at GCRI are grateful for these donations. We look forward to using them to advance our mission of developing the best ways to confront humanity’s gravest threats.

Read More »

June Newsletter: Racism and Global Catastrophic Risk

Dear friends,

The ongoing wave of anti-racism protests is prompting long-overdue conversation and action to establish a more equitable society in the US and worldwide. We at GCRI are saddened by the tragic deaths that have sparked these protests and hopeful that something good can come of them.

For our part, we have contributed to the conversation by publishing a statement on racism. To summarize: We see moral and practical links between the problems of racism and global catastrophic risk. They are both large-scale issues whose …

Read More »

Deep Learning and the Sociology of Human-Level Artificial Intelligence

View the paper “Deep Learning and the Sociology of Human-Level Artificial Intelligence”

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all …

Read More »

GCRI Statement on Racism

Like many of you, we at GCRI are appalled by the tragic killings of Ahmaud Arbery, George Floyd, and Breonna Taylor over the past few months. We are awed by the massive response to these events in the US and around the world, and we are hopeful about their potential to spur meaningful change to address long-standing issues caused by racism in the US and other countries. This is a rare moment in history, and so we feel compelled to comment in our capacity as …

Read More »

Medium-Term Artificial Intelligence and Society

View the paper “Medium-Term Artificial Intelligence and Society”

Discussion of artificial intelligence tends to focus on either near-term or long-term AI. That includes some contentious debate between “presentists” who favor attention to the near-term and “futurists” who favor attention to the long-term. Largely absent from the conversation is any attention to the medium-term. This paper provides dedicated discussion of medium-term AI and its accompanying societal issues. It focuses on how medium-term AI can be defined and how it relates to the presentist-futurist debate. It builds on …

Read More »

May Newsletter: Pandemic and New Hire

Dear friends,

We are in the midst of the most severe global event in decades. The COVID-19 pandemic is unfortunately showing all too clearly how certain threats can devastate human society and individual lives around the world. We at GCRI offer our condolences to those who have lost loved ones and stand with those working to pull through this tragic ordeal. We are not pandemic experts, but we are pursuing opportunities to apply our background in catastrophic risk to the ongoing pandemic response. For more on our perspective on …

Read More »