Social Choice Ethics in Artificial Intelligence

View the paper “Social Choice Ethics in Artificial Intelligence”

A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because society has no single aggregate ethical view. Instead, the design of …

Read More »

Value of GCR Information: Cost Effectiveness-Based Approach for Global Catastrophic Risk (GCR) Reduction

View the paper “Value of GCR Information: Cost Effectiveness-Based Approach for Global Catastrophic Risk (GCR) Reduction”

In this paper, we develop and illustrate a framework for determining the potential value of global catastrophic risk (GCR) research in reducing uncertainties in the assessment of GCR risk levels and the effectiveness of risk-reduction options. The framework uses the decision-analysis concept of the expected value of perfect information (EVPI) in terms of the cost-effectiveness of GCR reduction. We illustrate these concepts using available information on impact risks from two types …
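As background on the decision-analysis concept the abstract invokes, EVPI is standardly defined as the gap between the expected payoff when the uncertain parameter is resolved before choosing and the expected payoff of the best choice under current uncertainty. The symbols below are illustrative rather than taken from the paper, whose cost-effectiveness formulation may differ:

\mathrm{EVPI} = \mathbb{E}_{\theta}\left[\max_{a} u(a,\theta)\right] - \max_{a} \mathbb{E}_{\theta}\left[u(a,\theta)\right]

where a would range over GCR risk-reduction options, θ over the uncertain risk parameters, and u over a payoff measure such as risk reduced per unit cost.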

Read More »

Book Review: The Age of Em

View the paper “Book Review: The Age of Em”

Book Review of The Age of Em: Work, Love, and Life When Robots Rule the Earth, by Robin Hanson, Oxford University Press, 2016.

A new book by Robin Hanson, The Age of Em: Work, Love, and Life When Robots Rule the Earth, is reviewed. The Age of Em describes a future scenario in which human minds are uploaded into computers, becoming emulations or “ems”. In the scenario, ems take over the global economy by running on fast computers and copying themselves to multitask. …

Read More »

Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence

View the paper “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence”

AI experts are divided into two factions. A “presentist” faction focuses on near-term AI, meaning the AI that either already exists or could be built within a small number of years. A “futurist” faction focuses on long-term AI, especially advanced AI that could equal or exceed human cognition. Each faction argues that its AI focus is the more important one, and the dispute between the two factions sometimes gets heated. This paper argues …

Read More »

December Newsletter: The US Election & Global Catastrophic Risk

Dear friends,

The recent US election offers a vivid reminder of how large and seemingly unlikely events can and do sometimes occur. Just as we cannot assume that elections will continue to be won by normal politicians, we also cannot assume that humanity will continue to avoid global catastrophe.

The outcome of this election has many implications for global catastrophic risk, which I outline in a new article in the Bulletin of the Atomic Scientists. To my eyes, the election increases the importance of nuclear weapons risk …

Read More »

Working Paper: On the Promotion of Safe and Socially Beneficial Artificial Intelligence

GCRI is launching a working paper series with a new paper, “On the promotion of safe and socially beneficial artificial intelligence”, by Seth Baum.

Abstract
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to …

Read More »

The Ethics of Outer Space: A Consequentialist Perspective

View the paper “The Ethics of Outer Space: A Consequentialist Perspective”

Outer space is of major interest to consequentialist ethics for two basic reasons. First, the vast expanses of outer space offer opportunities for achieving vastly more good or bad consequences than can be achieved on Earth alone. If consequences are valued equally regardless of where they occur, then achieving good consequences in space is of paramount importance. For human civilization, this can mean the building of space colonies or even the macroengineering of structures like Dyson …

Read More »

Alternative Foods as a Solution to Global Food Supply Catastrophes

View the paper “Alternative Foods as a Solution to Global Food Supply Catastrophes”

Analysis of future food security typically focuses on managing gradual trends such as population growth, natural resource depletion, and environmental degradation. However, several risks threaten to cause large and abrupt declines in food security. For example, nuclear war, volcanic eruptions, and asteroid impact events can block sunlight, causing abrupt global cooling. In extreme but entirely possible cases, these events could make agriculture infeasible worldwide for several years, creating a food supply catastrophe of …

Read More »

A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis

View the paper “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”

This paper analyzes the risk of a catastrophe scenario involving self-improving artificial intelligence. A self-improving AI is one that makes itself smarter and more capable. In this scenario, the self-improvement is recursive, meaning that the improved AI makes an even more improved AI, and so on. This causes a takeoff of successively more intelligent AIs. The result is an artificial superintelligence (ASI), which is an AI that is significantly more …

Read More »

February Newsletter: The Year Ahead

Dear friends,

One year ago, GCRI announced a new direction focused on research to develop the best ways to confront humanity’s gravest threats. Over the past year, we’ve delivered:

* An edited collection, Confronting Future Catastrophic Threats to Humanity, containing ten original research papers including five by GCRI affiliates
* Six additional research papers, making for a total of nine peer-reviewed journal articles and two book chapters
* 19 popular articles in publications such as the Guardian and the Bulletin of the Atomic Scientists
* Two symposia at the Society …

Read More »