A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis

View the paper “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”

This paper analyzes the risk of a catastrophe scenario involving self-improving artificial intelligence. A self-improving AI is one that makes itself smarter and more capable. In this scenario, the self-improvement is recursive, meaning that the improved AI makes an even more improved AI, and so on, causing a takeoff of successively more intelligent AIs. The result is an artificial superintelligence (ASI), which is an AI that is significantly more …

Read More »

Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process

View the paper “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process”

Computers can already outsmart humans in specific domains, like multiplication. But humans remain firmly in control… for now. Artificial superintelligence (ASI) is AI with intelligence that vastly exceeds humanity’s across a broad range of domains. Experts increasingly believe that ASI could be built sometime in the future, could take control of the planet away from humans, and could cause a global catastrophe. Alternatively, if ASI is built safely, it may …

Read More »