FLI Artificial Superintelligence Project

I am writing to announce that GCRI has received a grant from the Future of Life Institute, with funding provided by Elon Musk and the Open Philanthropy Project. The official announcement is here and the full list of awardees is here.

GCRI’s project team includes Tony Barrett, Roman Yampolskiy, and myself. Here is the project title and summary:

Evaluation of Safe Development Pathways for Artificial Superintelligence

Some experts believe that computers could eventually become a lot smarter than humans are. They call this artificial superintelligence, or ASI. If people build ASI, it could be either very good or very bad for humanity. However, ASI is not well understood, which makes it difficult for people to act to enable good ASI and avoid bad ASI. Our project studies the ways people could build ASI, in order to help people act more effectively. We will model the different steps that need to occur for people to build ASI. We will estimate how likely it is that these steps will occur, and when they might occur. We will also model the actions people can take, and we will calculate how much those actions will help. For example, governments may be able to require that ASI researchers build in safety measures. Our models will include both the government action and the ASI safety measures, to learn how well it all works. This project is an important step towards making sure that humanity avoids bad ASI and, if it wishes, creates good ASI.
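To give a rough sense of how such a model might combine pathway probabilities with the effect of an intervention, here is a minimal illustrative sketch in Python. It is not the project's actual model: the development steps, the independence assumption, and all probability values are hypothetical placeholders chosen only to show the style of calculation the summary describes.

```python
# Illustrative sketch only. Step structure and all numbers below are hypothetical
# placeholders, not parameters or results from the GCRI/FLI project.

def pathway_risk(p_steps, p_unsafe_given_built):
    """Probability that ASI is built (all steps occur) and turns out badly."""
    p_built = 1.0
    for p in p_steps:          # steps are treated as independent for simplicity
        p_built *= p
    return p_built * p_unsafe_given_built

# Hypothetical baseline: three development steps and a chance the result is unsafe.
baseline_steps = [0.5, 0.4, 0.6]   # e.g. key research and engineering milestones
p_unsafe = 0.5                     # chance the built ASI is harmful

# Hypothetical intervention: a government requirement for safety measures
# that lowers the chance an unsafe design is deployed.
p_unsafe_with_requirement = 0.2

risk_baseline = pathway_risk(baseline_steps, p_unsafe)
risk_with_policy = pathway_risk(baseline_steps, p_unsafe_with_requirement)

print(f"Baseline risk:         {risk_baseline:.3f}")
print(f"Risk with requirement: {risk_with_policy:.3f}")
print(f"Risk reduction:        {risk_baseline - risk_with_policy:.3f}")
```

In this toy version, the intervention's value is simply the difference between the two risk numbers; the project's actual models would need to represent the steps, their dependencies, and the timing and cost of interventions in far more detail.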

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.