November Newsletter: Organization Updates

Dear friends,

We are in the process of implementing some major organization updates. These updates bring the organization more in line with our ongoing work and future plans.

First, we’ve published new pages for the seven topics that our work focuses on:

Aftermath of Global Catastrophe
Artificial Intelligence
Cross-Risk Evaluation & Prioritization
Nanotechnology
Nuclear War
Risk & Decision Analysis
Solutions & Strategy

Second, we’ve overhauled our affiliates, removing Associates and Junior Associates and creating a new Senior Advisory Council. We are delighted to welcome Gary Ackerman, John Garrick, and Seán Ó hÉigeartaigh as GCRI’s …

Read More »

GCRI Affiliates Overhaul

GCRI has made several major changes to our roster of affiliates, as reflected on our People page. These changes make our listing of affiliates more consistent with how GCRI is actually operating at this time and prepare us for future directions we hope to pursue.

First, the GCRI leadership team now consists only of Tony Barrett (Director of Research), Robert de Neufville (Director of Communications), and me (Executive Director). Grant Wilson (Deputy Director) is no longer on the team. Grant has made excellent contributions since the early days of …

Read More »

New Topics Pages

GCRI is in the process of making several general organization updates. The first is a new collection of topics pages published on our website. They cover the major topics that GCRI currently works on and replace our previous collection of projects pages, which had fallen out of date. Each new topics page briefly summarizes the topic and GCRI’s work on it, and then lists GCRI’s publications on the topic.

The topics are as follows:

Aftermath of Global Catastrophe. How well would survivors of global catastrophe fare? This …

Read More »

October Newsletter: The Superintelligence Debate

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As …

Read More »

Countering Superintelligence Misinformation

View the paper “Countering Superintelligence Misinformation”

In any public issue, having the right information can help us make the right decisions. This holds in particular for high-stakes issues like global catastrophic risks. Unfortunately, incorrect information, or misinformation, is sometimes spread. When this happens, it is important to set the record straight.

This paper studies misinformation about artificial superintelligence, which is AI that is much smarter than humans. Current AI is not superintelligent, but if superintelligence is built, it could have massive consequences. Misinformation about superintelligence could …

Read More »

Superintelligence Skepticism as a Political Tool

View the paper “Superintelligence Skepticism as a Political Tool”

For decades, there have been efforts to exploit uncertainty about science and technology for political purposes. This practice traces to the tobacco industry’s effort to sow doubt about the link between tobacco and cancer, and it can be seen today in skepticism about climate change and other major risks. This paper analyzes the possibility that the same could happen for the potential future artificial intelligence technology known as superintelligence.

Artificial superintelligence is AI that is much smarter than …

Read More »

Uncertain Human Consequences in Asteroid Risk Analysis and the Global Catastrophe Threshold

View the paper “Uncertain Human Consequences in Asteroid Risk Analysis and the Global Catastrophe Threshold”

Asteroid collision is probably the best-understood global catastrophic risk. This paper shows that it’s not so well understood after all, due to uncertainty in the human consequences. This finding matters both for asteroid risk and for the wider study of global catastrophic risk. If asteroid risk is not well understood, then neither are other risks such as nuclear war and pandemics.

In addition to our understanding of the risks, two other …

Read More »

August Newsletter: Long-Term Trajectories

Dear friends,

This month I am proud to announce a new paper, “Long-Term Trajectories of Human Civilization”. The paper calls for attention to the fate of human civilization over time scales of millions, billions, or trillions of years into the future. While most attention goes to nearer-term phenomena, the long term can be profoundly important to present-day decision-making. For example, one major issue the paper examines is the fate of global catastrophe survivors. How well they fare is a central factor in whether people today should focus …

Read More »

Long-Term Trajectories of Human Civilization

View the paper “Long-Term Trajectories of Human Civilization”

Society today needs to pay greater attention to the long-term fate of human civilization. Important present-day decisions can affect what happens millions, billions, or trillions of years into the future. The long-term effects may be the most important factor for present-day decisions and must be taken into account. An international group of 14 scholars calls for the dedicated study of “long-term trajectories of human civilization” in order to understand long-term outcomes and inform decision-making. This new approach is presented in …

Read More »

June Newsletter: Summer Talks

Artificial Intelligence

GCRI Associate Roman Yampolskiy gave a talk on AI safety at the Global Challenges Summit 2018 in Astana, Kazakhstan, May 17-19.

GCRI Executive Director Seth Baum and GCRI Associate Roman Yampolskiy participated in a workshop on “AI Coordination & Great Powers” hosted by Foresight Institute in San Francisco on June 7.

GCRI Executive Director Seth Baum gave a seminar on “AI Risk, Ethics, Social Science, and Policy” hosted by the Center for Human-Compatible Artificial Intelligence (CHAI) at the University of California, Berkeley, on June 11.

Effective Altruism

GCRI Executive Director Seth Baum …

Read More »