Call For Papers: Governance of Artificial Intelligence

GCRI Executive Director Seth Baum is editor of a new special issue of the journal Information on Governance of Artificial Intelligence. The special issue welcomes manuscripts on all aspects of the governance of AI. Details below.

Note that Information is an open access journal with an article processing charge. GCRI is able to cover the article processing charge for a limited number of submissions. Interested authors should contact Baum directly about this.

Artificial intelligence (AI) technology is playing an increasingly important role in human affairs and in …

Read More »

Accounting for Violent Conflict Risk in Planetary Defense Decisions

View the paper “Accounting for Violent Conflict Risk in Planetary Defense Decisions”

Planetary defense is the defense of planet Earth against collisions with near-Earth objects (NEOs), which include asteroids, comets, and meteoroids. A central objective of planetary defense is to reduce risks to Earth and its inhabitants. Whereas planetary defense is mainly focused on risks from NEOs, this paper argues that planetary defense decisions should also account for other risks, especially risks from violent conflict. The paper is based on a talk I gave at the 2019 Planetary …

Read More »

August Newsletter: Quantifying Probability

Dear friends,

One of the great features of being part of a wider community of scholars and professionals working on global catastrophic risk is the chance to learn from and build on work that other groups are doing. This month, we announce a paper of mine that builds on an excellent recent paper by Simon Beard, Thomas Rowe, and James Fox. The Beard et al. paper surveys the methods used to quantify the probability of existential catastrophe (which is roughly synonymous with global catastrophe).

My paper comments …

Read More »

Quantifying the Probability of Existential Catastrophe: A Reply to Beard et al.

View the paper “Quantifying the Probability of Existential Catastrophe: A Reply to Beard et al.”

A major challenge for work on global catastrophic risk and existential risk is that the risks are very difficult to quantify. Global catastrophes rarely occur, and the most severe ones have never happened before, so the risk cannot be quantified using past event data. An excellent recent article by Simon Beard, Thomas Rowe, and James Fox surveys ten methods used in prior research on quantifying the probability of existential catastrophe. My new paper expands on the …

Read More »

Summary of January-July 2020 Advising and Collaboration Program

In January, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of last year’s successful advising and collaboration program. We anticipate conducting a second round of the program later in 2020. The 2020 programs are made possible by generous support from Gordon Irlam.

This first 2020 program focused on a number of AI projects that are also supported by Irlam. Program participants were mostly people interested in AI risk, ethics, and policy. …

Read More »

Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems

View the paper “Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems”

One major challenge in making progress on global catastrophic risk is its interdisciplinarity. Understanding how best to address the risk requires input from risk analysis, public policy, social science, ethics, and a variety of other fields pertaining to specific risks, such as astronomy for asteroid risk and computer science for artificial intelligence (AI) risk. Working across all these disparate fields is a very difficult challenge for human minds. This paper explores the use …

Read More »

GCRI Receives $140,000 for General Support

I am delighted to announce that GCRI has received $140,000 in new grants via the Survival and Flourishing Fund. $90,000 is from Jaan Tallinn and $50,000 is from Jed McCaleb. Both grants are for general support for GCRI. We at GCRI are grateful for these donations. We look forward to using them to advance our mission of developing the best ways to confront humanity’s gravest threats.

Read More »

Deep Learning and the Sociology of Human-Level Artificial Intelligence

View the paper “Deep Learning and the Sociology of Human-Level Artificial Intelligence”

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all …

Read More »

GCRI Statement on Racism

Like many of you, we at GCRI are appalled by the tragic killings of Ahmaud Arbery, George Floyd, and Breonna Taylor over the past few months. We are awed by the massive response to these events in the US and around the world, and we are hopeful about their potential to spur meaningful change to address long-standing issues caused by racism in the US and other countries. This is a rare moment in history, and so we feel compelled to comment in our capacity as …

Read More »

Medium-Term Artificial Intelligence and Society

View the paper “Medium-Term Artificial Intelligence and Society”

Discussion of artificial intelligence tends to focus on either near-term or long-term AI. That includes some contentious debate between “presentists” who favor attention to the near-term and “futurists” who favor attention to the long-term. Largely absent from the conversation is any attention to the medium-term. This paper provides dedicated discussion of medium-term AI and its accompanying societal issues. It focuses on how medium-term AI can be defined and how it relates to the presentist-futurist debate. It builds on …

Read More »