Superintelligence Skepticism As A Political Tool

View the paper “Superintelligence Skepticism as a Political Tool”

For decades, there have been efforts to exploit uncertainty about science and technology for political purposes. This practice traces back to the tobacco industry's campaign to sow doubt about the link between smoking and cancer, and it can be seen today in skepticism about climate change and other major risks. This paper analyzes the possibility that the same could happen with the potential future artificial intelligence technology known as superintelligence.

Artificial superintelligence is AI that is much smarter than …

Read More »

Modeling and Interpreting Expert Disagreement About Artificial Superintelligence

View the paper “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”

Artificial superintelligence (ASI) is artificial intelligence (AI) with capabilities that are significantly greater than human capabilities across a wide range of domains. A hallmark of the ASI issue is disagreement among experts. This paper demonstrates and discusses methodological options for modeling and interpreting expert disagreement about the risk of ASI catastrophe. Using a new model called ASI-PATH, the paper models a well-documented recent disagreement between Nick Bostrom and Ben Goertzel, two distinguished ASI experts. Three …

Read More »

A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy

View the paper “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”

Artificial general intelligence (AGI) is AI that can reason across a wide range of domains. While most AI research and development (R&D) is on narrow AI, not AGI, there is some dedicated AGI R&D. If AGI is built, its impacts could be profound. Depending on how it is designed and used, it could either help solve the world’s problems or cause catastrophe, possibly even human extinction.

This paper presents the first-ever survey …

Read More »

On the Promotion of Safe and Socially Beneficial Artificial Intelligence

View the paper “On the Promotion of Safe and Socially Beneficial Artificial Intelligence”

As AI becomes more capable, its impacts on society grow larger. AI is now used in medicine, transportation (self-driving cars), the military (drones), and many other sectors. The impacts of AI on society depend heavily on how the AI is designed. To improve AI design, two challenges must be met. There is the technical challenge of developing safe and beneficial technology designs, and there is the social …

Read More »

Liability Law for Present and Future Robotics Technology

View the paper “Liability Law for Present and Future Robotics Technology”

Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and a number of other sectors. While many of these changes are beneficial, there will inevitably be some harms. Who or what is liable when a robot causes harm? This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Already, …

Read More »

Social Choice Ethics in Artificial Intelligence

View the paper “Social Choice Ethics in Artificial Intelligence”

A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This approach is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of …

Read More »

Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence

View the paper “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence”

AI experts are divided into two factions. A “presentist” faction focuses on near-term AI, meaning AI that either already exists or could be built within a small number of years. A “futurist” faction focuses on long-term AI, especially advanced AI that could equal or exceed human cognition. Each faction argues that its AI focus is the more important one, and the dispute between the two factions sometimes becomes heated. This paper argues …

Read More »

Working Paper: On the Promotion of Safe and Socially Beneficial Artificial Intelligence

GCRI is launching a working paper series with a new paper, “On the Promotion of Safe and Socially Beneficial Artificial Intelligence,” by Seth Baum.

Abstract
This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to …

Read More »

A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis

View the paper “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”

This paper analyzes the risk of a catastrophe scenario involving self-improving artificial intelligence. A self-improving AI is one that makes itself smarter and more capable. In this scenario, the self-improvement is recursive, meaning that the improved AI makes an even more improved AI, and so on. This causes a takeoff of successively more intelligent AIs. The result is an artificial superintelligence (ASI), which is an AI that is significantly more …

Read More »

Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process

View the paper “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process”

Already computers can outsmart humans in specific domains, like multiplication. But humans remain firmly in control… for now. Artificial superintelligence (ASI) is AI with intelligence that vastly exceeds humanity’s across a broad range of domains. Experts increasingly believe that ASI could be built sometime in the future, could take control of the planet away from humans, and could cause a global catastrophe. Alternatively, if ASI is built safely, it may …

Read More »