Letter from the Executive Director
GCRI ends 2024 with our smallest team ever, but it’s not for lack of risks in the world. The war in Ukraine rages on, and tensions among nuclear-armed states are far from resolved. H5N1 influenza has spread widely among birds, with some human cases reported. Climate change continues to worsen, causing an array of harms. AI technology has proliferated, prompting highly divergent reactions among experts and observers. Those of us concerned about global catastrophic risk certainly have our work cut out for us.
In late 2024, GCRI co-founder Tony Barrett began a new position in the US Department of Homeland Security. He will be working at the National Risk Management Center of the DHS Cybersecurity and Infrastructure Security Agency. In taking this position, Dr. Barrett needed to step down from his role on the core GCRI team, ending his tenure as Director of Research that began with GCRI’s founding in 2011. While his contributions to the team will certainly be missed, we are grateful that he is remaining involved in GCRI as a Senior Advisor.
Following Dr. Barrett’s departure, I am now the only member of the core GCRI team. We had previously peaked at five team members in 2021-2022, which is still a rather small size for an organization that had been active for a full decade. I take full responsibility for this. It is true that GCRI has faced a siloed environment more interested in targeted work on specific risks than our cross-risk agenda. However, there was still a lot that we could have done differently.
Being small imposes profound limitations, but it does come with heightened agility. Given the current state of the risks around the world, how should GCRI proceed? We have considered many possibilities, including disbanding the organization. However, given the myriad risks the world faces, we still see a clear role for an independent institute working across the global catastrophic risks. Such an institute is needed to assess the size and urgency of the various risks, identify their interconnections, and, above all, develop, evaluate, and advance solutions that address the full landscape of risks together with the numerous adjacent issues and the social, economic, and political contexts in which those solutions play out. Siloed work only goes so far, and indeed it can even be counterproductive by increasing one risk while reducing another.
And so, much of my own work over the past year has gone to positioning GCRI to be more successful as an organization. Frankly, this includes a lot of things that I should have done a long time ago. Much of this should become visible during 2025. I look forward to sharing it with you as it comes online. There is no guarantee that this will bring success to the organization or safety to the world, but I believe this is our best path forward.
2024 Outputs
Research
GCRI has five new publications to report:
The origin and implications of the COVID-19 pandemic: An expert survey, a GCRI Technical Report. This report presents a survey of 168 virologists, infectious disease epidemiologists, and other scientists from around the world. It addresses whether COVID-19 came from wild animals or scientific research, plus implications for future pandemics.
Climate change, uncertainty, and global catastrophic risk, in Futures. This paper addresses the status of climate change as a global catastrophic risk. It critiques prior studies for using an overly restrictive conception of global catastrophic risk and argues that the uncertainty about climate change means that it should indeed be classified as a global catastrophic risk.
Manipulating aggregate societal values to bias AI social choice ethics, in AI and Ethics. This paper addresses a major limitation of certain approaches to AI ethics. These approaches seek to align AI systems with human values similar to how democracy seeks to align governments with citizen values, but both are vulnerable to manipulation.
On the intrinsic value of diversity, in Inquiry. This paper presents a fundamental philosophical analysis of the concept of diversity as something that may be good in its own right. Diversity relates to global catastrophic risk in several ways and has gone remarkably understudied in moral philosophy despite its social importance.
Assessing the risk of takeover catastrophe from large language models, in Risk Analysis. This paper develops a framework for evaluating the potential for large language models to take over the world and cause catastrophic harm, potentially even killing everyone. Current models fall short, but future models should be monitored according to the paper’s criteria.
Community Support
As a senior member of a growing field, GCRI is active in supporting the broader community of people and organizations working on global catastrophic risk.
Our sixth annual Advising and Collaboration Program went well. We spoke with 23 people from around the world, providing them with feedback and guidance on how they can get more involved in global catastrophic risk. We also made 16 introductions between program participants and people in our wider professional networks, providing further leads for their involvement in the field.
Our third annual Fellowship Program features three Fellows. One of them is collaborating with GCRI on projects focused on international peace and East Asia. One is collaborating on connections with the insurance industry and undergraduate education. And, one is collaborating on the field of global catastrophic risk in Africa.