A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy

View the paper “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”

Artificial general intelligence (AGI) is AI that can reason across a wide range of domains. While most AI research and development (R&D) is on narrow AI, not AGI, there is some dedicated AGI R&D. If AGI is built, its impacts could be profound. Depending on how it is designed and used, it could either help solve the world’s problems or cause catastrophe, possibly even human extinction.

This paper presents the first-ever survey …

On the Promotion of Safe and Socially Beneficial Artificial Intelligence

View the paper “On the Promotion of Safe and Socially Beneficial Artificial Intelligence”

As AI becomes more capable, its impacts on society grow larger. AI is now used in medicine, transportation (self-driving cars), the military (drones), and many other sectors. How AI affects society depends largely on how the AI is designed. To improve AI design, two challenges must be met. There is the technical challenge of developing safe and beneficial technology designs, and there is the social …

October Newsletter: How To Reduce Risk

Dear friends,

As we speak, a group of researchers is meeting in Gothenburg, Sweden, on the theme of existential risk; I joined them earlier in September. My commendations to Olle Häggström and Anders Sandberg for hosting an excellent event.

My talk in Gothenburg focused on how to find the best opportunities to reduce risk. The best opportunities are often a few steps removed from academic risk and policy analysis. For example, there is a large research literature on climate change policy, much of which factors in catastrophic risk. However, the …

Social Choice Ethics in Artificial Intelligence

View the paper “Social Choice Ethics in Artificial Intelligence”

A major approach to the ethics of artificial intelligence (AI) is social choice, in which the AI is designed to act according to the aggregate views of society. This approach appears in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because society has no single aggregate ethical view. Instead, the design of …

September Newsletter: 2017 Society for Risk Analysis Meeting

Dear friends,

Each year, GCRI hosts sessions on global catastrophic risk at the annual meeting of the Society for Risk Analysis, the leading academic and professional society for all things risk. This year, we had three full sessions accepted for the meeting, our most ever. SRA is competitive, and we are honored by the result.

For those of you who are interested in SRA but haven’t attended the meeting before, this would be a good year to join us. SRA has a …

July Newsletter: Summer Talks and Presentations

Integrated Assessment

GCRI Executive Director Seth Baum gave a talk on “Integrated Assessment of Global Catastrophic Risk and Artificial Intelligence” at the Cambridge University Centre for the Study of Existential Risk (CSER) on June 28. Dr. Baum will also participate in a Tech2025 workshop on future AI risk on July 11 in New York City.

GCRI Director of Research Tony Barrett gave a talk on integrated assessment, nuclear war, AI, and risk reduction opportunities at an Effective Altruism DC event on global catastrophic risks on June 17.

Artificial Intelligence

GCRI Associate …

June Newsletter: Nuclear Weapons Ban Treaty

Dear friends,

This past May, a draft treaty to ban nuclear weapons was released at the United Nations. Nuclear weapons are a major global catastrophic risk, one that GCRI has worked on extensively. At first glance, the nuclear ban treaty seems like something to support wholeheartedly. Upon closer inspection, however, its merits are ambiguous.

The treaty is not expected to eliminate nuclear weapons because the nuclear-armed countries won’t sign it. Instead, it seeks to strengthen the norm against nuclear weapons and increase pressure for disarmament. …

Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence

View the paper “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence”

AI experts are divided into two factions. A “presentist” faction focuses on near-term AI, meaning AI that either already exists or could be built within a few years. A “futurist” faction focuses on long-term AI, especially advanced AI that could equal or exceed human cognition. Each faction argues that its focus is the more important one, and the dispute between the two sometimes gets heated. This paper argues …

May Newsletter: The Value of GCR Research

Dear friends,

People often ask me why we set GCRI up as a think tank instead of an organization that takes more direct action to reduce the risks. The reason is that, when it comes to global catastrophic risks, a little well-designed research goes a long way. It helps us make better decisions about how to reduce the risks.

For example, last week I attended a political science workshop at Yale University on how to cost-effectively spend $10 billion to reduce the probability of war between …

Call for Papers: Informatica Special Issue on Superintelligence

GCRI associate Roman Yampolskiy and junior associate Matthijs Maas, along with Ryan Carey and Nell Watson, are guest-editing an upcoming Informatica special issue on superintelligence. The special issue will approach superintelligence in as multidisciplinary and visionary a manner as possible.

They are looking for original research, critical studies, and review articles on topics related to superintelligence, including in particular:

– Artificial Superintelligence
– Artificial General Intelligence
– Biological Superintelligence
– Brain-computer Interfaces
– Whole Brain Emulation
– Genetic Engineering
– Cognitive Enhancement
– Collective Superintelligence
– Neural Lace-Mediated Empathy
– Technological Singularity
– Intelligence Explosion
– Definition …
