A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis
View the paper “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”
This paper analyzes the risk of a catastrophe scenario involving self-improving artificial intelligence. A self-improving AI is one that makes itself smarter and more capable. In this scenario, the self-improvement is recursive, meaning that the improved AI makes an even more improved AI, and so on. This causes a takeoff of successively more intelligent AIs. The result is an artificial superintelligence (ASI), which is an AI that is significantly more …