New Website At gcri.org
GCRI has launched a new website at gcri.org. Please go there for the latest in GCRI activities.
This website, gcrinstitute.org, is being transitioned out. We expect it will soon no longer be active.
Letter from the Executive Director
GCRI ends 2024 with our smallest team ever, but it’s not for lack of risks in the world. The war in Ukraine rages on, and tensions among nuclear-armed states are far from resolved. H5N1 influenza has spread widely among birds, with some human cases reported. Climate change continues to worsen, causing an array of harms. AI technology has proliferated, prompting highly divergent reactions among experts and observers. Those of us concerned about global catastrophic risk certainly have our work cut out …
GCRI co-founder Tony Barrett has left his position as Director of Research and now serves as Senior Advisor. The move comes as Dr. Barrett begins a new role in the US government, which precluded his continuing as Director of Research. Dr. Barrett’s new government role is as a Senior Technical Advisor at the National Risk Management Center of the Cybersecurity and Infrastructure Security Agency, part of the United States Department of Homeland Security.
The idea for GCRI began as a conversation between Dr. …
GCRI is pleased to announce the 2024 Fellowship Program. The Fellowship Program aims to highlight exceptional collaborators GCRI had the opportunity to partner with over the course of the year.
This year, we have three 2024 Fellows. One is collaborating with GCRI on projects focused on international peace and East Asia. One is collaborating on connections with the insurance industry and on undergraduate education. And one is collaborating on the field of global catastrophic risk in Africa.
Congratulations to our 2024 GCRI Fellows.
Orlanda Gill, London
Orlanda Gill is …
In September, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of our successful 2019, 2020, 2021, 2022, and 2023 Advising and Collaboration Programs.
The GCRI Advising and Collaboration Program is an opportunity for anyone interested in global catastrophic risk to get more involved in the field. There is practically no barrier to entry in the program: the only thing people need to do is to send us a short email expressing …
UPDATE: The open call for advisees and collaborators is now closed. Thank you to everyone who applied. However, anyone interested in seeking our advice and/or collaborating with us is still welcome to contact us per the instructions below, and we will include them in our next Advising and Collaboration Program.
————————————————————————————————
GCRI is currently welcoming inquiries from people who are interested in seeking our advice and/or collaborating with us as part of our sixth annual Advising and Collaboration Program. Inquiries may cover any aspect of global …
View the paper “Climate Change, Uncertainty, and Global Catastrophic Risk”
Is climate change a global catastrophic risk? Warming temperatures are already causing a variety of harms around the world, some quite severe, and they are projected to worsen as temperatures increase. However, despite the massive body of research on climate change, the potential for extreme global harms remains highly uncertain and controversial. This paper addresses the question by examining the theoretical definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. …
View the paper “Assessing the Risk of Takeover Catastrophe from Large Language Models”
Recent large language models (LLMs) have shown some impressive capabilities, raising concerns about their potential to cause harm. One concern is that LLMs could take over the world and cause catastrophic harm, potentially even killing everyone on the planet. However, this concern has been questioned and hotly debated. Therefore, this paper presents a careful analysis of LLM takeover catastrophe risk.
Concern about LLM takeover is noteworthy across the entire history of …
View the paper “On the Intrinsic Value of Diversity”
Diversity is an important ethical concept. It’s also relevant to global catastrophic risk in at least two ways: the risk of catastrophic biodiversity loss and the need for diversity among people working on global catastrophic risk. It’s additionally relevant to scenarios involving extreme good, such as in well-designed advanced AI. However, the ethics of diversity has been remarkably understudied. To help address the full range of issues involving diversity, this paper presents a foundational study of the …
View the paper “Manipulating Aggregate Societal Values to Bias AI Social Choice Ethics”
Vote suppression, disinformation, sham elections that give authoritarians the veneer of democracy, and even genocide: all of these are means of manipulating the outcomes of elections. (Shown above: a ballot from the sham 1938 referendum for the annexation of Austria by Nazi Germany; notice the larger circle for Ja/Yes.) Countering these manipulations is an ongoing challenge. Meanwhile, work on AI ethics often proposes that AI systems use something similar to democracy. Therefore, this …