November Newsletter: Survey of AI Projects

Dear friends,

This month we are announcing a new paper, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. This is more than the usual research paper: at 99 pages, it pulls together several months of careful work. It documents and analyzes what is going on right now in artificial general intelligence (AGI) R&D in terms that are useful for risk management, policy, and related purposes. In short, it lays out what we need to know about AGI R&D in order to make a difference on the issue.

AGI is AI that can reason across a wide range of domains. It’s also the type of AI that could most readily cause a global catastrophe, due to its potential to outsmart humanity and gain control of the planet. There’s a lot of research about AGI risk and related topics in ethics and policy, including work by us at GCRI. However, the work has been largely disconnected from the actual state of affairs in AGI R&D. This new survey changes that. We think it will be a valuable resource for AGI risk analysis and risk management.

You can download the paper here.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI Associate Jacob Haqq-Misra is guest editing a special issue of Futures on the detectability of future Earths and terraformed worlds. This special issue is looking for papers that consider the future evolution of the Earth system from an astrobiological perspective as well as how humanity or other technological civilizations could artificially create sustainable ecosystems on lifeless planets. The deadline for submissions is November 30.

Artificial Intelligence

GCRI Associate Roman Yampolskiy recently gave several talks on AI safety and AI security: a talk titled “Taxonomy of Pathways to Dangerous Artificial Intelligence” at the Tech, Security and Democracy colloquium at Laval University on October 5; a talk on AI safety and security for the Society for Information Management at Bellarmine University on October 10; and a talk on AI safety as part of the AI With the Best (#AIWTB) online conference on October 14.

Food Security

GCRI Associate Dave Denkenberger wrote in the Effective Altruism Forum about research, conducted as part of a Centre for Effective Altruism grant, showing that investing in alternate foods may be as important as investing in AI safety.

Popular Media

GCRI Executive Director Seth Baum is featured in a Tech2025 podcast on “Evil AI, Killer Robots, Dragon Kings and Cupcakes”.

This post was written by Robert de Neufville, Director of Communications of the Global Catastrophic Risk Institute.