Greening the Universe: The Case for Ecocentric Space Expansion

View the paper “Greening the Universe: The Case for Ecocentric Space Expansion”

One reason for focusing on global catastrophic risk is that a global catastrophe could prevent human civilization from accomplishing great things in the future. Arguably, some of the greatest things it could accomplish involve expansion into outer space. This paper presents an ecocentric vision for future space expansion, in which human civilization spreads flourishing ecosystems across the cosmos. The paper is part of a broader collection of visions for space exploration …

Read More »

December Newsletter: Thank You & Happy New Year

Dear friends,

As this year comes to a close, we at GCRI would like to formally express our gratitude for your continued support. Support comes in many forms, and we recognize that not everyone has the ability to support us financially. However, we are lucky enough to receive a variety of other helpful forms of support, such as when someone shares our work, reads our research papers, collaborates with us on projects, introduces us to their colleagues, or just finds time to connect with us. We’ve …

Read More »

GCRI Receives $200,000 for 2022 Work on AI

I am delighted to announce that GCRI has received a new $200,000 donation from Gordon Irlam to fund work on AI in 2022. Irlam had previously made donations funding AI project work conducted in 2019, 2020, and 2021.

All of us at GCRI are grateful for this donation. We are excited to continue our work addressing AI risk.

Our projects for 2022 cover the following topics:

Continuation of prior projects: We will continue work on select projects from previous years.

Further support for the AI and global catastrophic risk talent pools: This project extends …

Read More »

From AI for People to AI for the World and the Universe

View the paper “From AI for People to AI for the World and the Universe”

Work on the ethics of artificial intelligence often focuses on the value of AI to human populations. This is seen, for example, in initiatives on AI for People. These initiatives do well to identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for …

Read More »

November Newsletter: Year in Review

Dear friends,

2021 has been a year of overcoming challenges, of making the most of difficult circumstances. The Delta variant dashed hopes for a smooth recovery from the COVID-19 pandemic. Outbreaks surged even in places with high vaccination rates, raising questions of when, or even whether, the pandemic will end. As we at GCRI are abundantly aware, it could be a lot worse. But it has still been bad, and we send our condolences to those who have lost loved ones. Despite the circumstances, …

Read More »

2021 Annual Report

2021 has been a good year for GCRI. Our productivity is up relative to previous years, boosted by a growing team and rich network of outside collaborators. Our work over the past year is broadly consistent with the plans we outlined one year ago. We have adjusted well to the new realities of the COVID-19 pandemic, aided by the fact that GCRI was designed for remote collaboration from the start. Because of the pandemic, there is a sense in which no one has had a truly great …

Read More »

Summary of the 2021 Advising and Collaboration Program

In May, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of our successful 2019 and 2020 Advising and Collaboration Programs. We anticipate conducting future iterations of the program in 2022 and beyond. The 2021 Program was made possible by generous support from Gordon Irlam and the Survival and Flourishing Fund.

The GCRI Advising and Collaboration Program is an opportunity for anyone interested in global catastrophic risk to get more involved in the field. There is practically no barrier to entry …

Read More »

2021 Advising and Collaboration Program Testimonials

The GCRI Advising and Collaboration Program welcomes people from all backgrounds and career points to connect with GCRI, get advice, and collaborate on projects. The program is personalized to each participant's needs and circumstances, so every participant gets something different out of it. Below are testimonials from six participants describing their experiences in the 2021 Advising and Collaboration Program.

Uliana Certan, International relations scholar
Manon Gouiran, Research intern at the Swiss Center for Affective Sciences
Aaron Martin, PhD …

Read More »

Collaborative Publishing with GCRI

Global catastrophic risk is a highly complex, interdisciplinary topic. It benefits from contributions from many people with a variety of backgrounds. For this reason, GCRI emphasizes collaborative publishing. We publish extensively with outside scholars at all career points, including early-career scholars who are relatively new to the field, as well as mid-career and senior scholars at other organizations who bring complementary expertise.

This post describes our approach to collaborative publishing and documents our collaborative publications. Researchers interested in publishing with GCRI should visit our get involved page. The …

Read More »

The Ethics of Sustainability for Artificial Intelligence

View the paper “The Ethics of Sustainability for Artificial Intelligence”

Access the data used in the paper. 

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis to clarify what is meant by sustainability and the ways in which sustainability in the context of AI might or might not …

Read More »