GCRI Receives $250,000 Donation For AI Research And Outreach

I am delighted to announce that GCRI has received a $250,000 donation from Gordon Irlam, to be used for GCRI’s research and outreach on artificial intelligence. The donation will be used mainly to fund Robert de Neufville and me during 2019.

The donation is a major first step toward GCRI’s goal of raising $1.5 million to enable the organization to start scaling up. Our next fundraising priority is to bring GCRI Director of Research Tony Barrett on full-time, and possibly also one other senior hire whom I can only discuss privately.

Regarding his donation, Irlam states:

“GCRI does solid and important work on vitally important topics and is one of the only US organizations working on these issues. They have done this work in the past on a very small budget. Advanced AI will have a profound effect on society. It is important that this effect be beneficial. My giving to GCRI is in the hope that they can scale up their research, and scale up their research outreach, so that societal and corporate policies and responses to artificial general intelligence are shaped appropriately.”

All of us at GCRI are grateful for this donation and excited for the work it will enable us to do.

Here is a summary of the specific research and outreach projects funded by this donation:

Corporate governance of AI: Following GCRI’s recent publications on AI skepticism and misinformation, this project seeks to improve how the for-profit sector handles AI risks. It will begin with outreach to people at AI companies and may include further research on strategies for improving corporate governance of AI.

National security dimensions of AI: This project conducts research and outreach on the risks associated with national security and military involvement in AI. The project builds on GCRI’s recent success in outreach to the US national security community on AI, as well as our backgrounds in AI and national security.

Anthropocentrism in AI ethics: This project evaluates the extent to which AI ethics favors humans, develops proposals for how AI ethics should handle questions of human favoritism, and conducts outreach to improve the state of AI ethics conversations. The project extends recent GCRI research on social choice ethics in AI.

Prospects for collective action on AI: This project assesses how to promote positive interactions between different AI groups and avoid dangerous forms of competition, such as races in which groups cut corners on safety in order to build AI first. The project applies GCRI’s expertise on social science topics such as the governance of common-pool resources.

Governance of AI and global catastrophic risk: This project draws on prior scholarship and experience on risk governance to develop general insights and strategies for the governance of global catastrophic risk, with emphasis on AI risk.

Support for the AI and global catastrophic risk talent pools: Finally, this project involves identifying, training, and mentoring people seeking to become more active in work on AI and global catastrophic risk. The project will support GCRI’s efforts to scale up and will also support the wider AI and global catastrophic risk community.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.