The Ethics of Outer Space: A Consequentialist Perspective

View the paper “The Ethics of Outer Space: A Consequentialist Perspective”

Outer space is of major interest to consequentialist ethics for two basic reasons. First, the vast expanses of outer space offer opportunities for achieving vastly more good or bad consequences than can be achieved on Earth alone. If consequences are valued equally regardless of where they occur, then achieving good consequences in space is of paramount importance. For human civilization, this can mean the building of space colonies or even the macroengineering of structures like Dyson …
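As a rough illustration of this scale argument only, here is a back-of-the-envelope comparison in Python. The population sizes and durations are hypothetical placeholders, not figures from the paper; the point is just that equal weighting of consequences regardless of location makes the space-faring totals dominate.

```python
# Back-of-the-envelope comparison of aggregate consequences on Earth vs. in space.
# All numbers are hypothetical placeholders, chosen only to illustrate the scale argument.

PERSON_YEAR_VALUE = 1.0  # value of one person-year; location-independent by assumption

def total_value(population, years):
    """Aggregate value = population * duration, weighting all locations equally."""
    return PERSON_YEAR_VALUE * population * years

earth_only = total_value(population=1e10, years=1e6)     # Earth-bound civilization
space_faring = total_value(population=1e20, years=1e9)   # hypothetical space-faring civilization

print(f"Earth-only aggregate value:   {earth_only:.1e}")
print(f"Space-faring aggregate value: {space_faring:.1e}")
print(f"Ratio: {space_faring / earth_only:.1e}")
```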

Read More »

Alternative Foods as a Solution to Global Food Supply Catastrophes

View the paper “Alternative Foods as a Solution to Global Food Supply Catastrophes”

Analysis of future food security typically focuses on managing gradual trends such as population growth, natural resource depletion, and environmental degradation. However, several risks threaten to cause large and abrupt declines in food security. For example, nuclear war, volcanic eruptions, and asteroid impact events can block sunlight, causing abrupt global cooling. In extreme but entirely possible cases, these events could make agriculture infeasible worldwide for several years, creating a food supply catastrophe of …
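To get a rough sense of the scale involved, here is an illustrative calculation (not taken from the paper) of the food energy that alternative foods would need to supply if conventional agriculture were infeasible worldwide for several years. The population figure and per-person requirement are round approximations.

```python
# Rough estimate of the food energy needed if conventional agriculture were
# infeasible worldwide for several years. Round, illustrative numbers only.

population = 8e9             # approximate world population
kcal_per_person_day = 2100   # approximate minimum dietary energy requirement
years_without_agriculture = 5

per_year_kcal = population * kcal_per_person_day * 365
total_kcal = per_year_kcal * years_without_agriculture

print(f"Roughly {per_year_kcal:.1e} kcal per year, "
      f"or {total_kcal:.1e} kcal over {years_without_agriculture} years")
```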

Read More »

A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis

View the paper “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis”

This paper analyzes the risk of a catastrophe scenario involving self-improving artificial intelligence. A self-improving AI is one that makes itself smarter and more capable. In this scenario, the self-improvement is recursive, meaning that the improved AI makes an even more improved AI, and so on. This causes a takeoff of successively more intelligent AIs. The result is an artificial superintelligence (ASI), which is an AI that is significantly more …
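A minimal sketch of the recursive self-improvement dynamic described here, purely for intuition; the growth model and parameter values are hypothetical and not taken from the paper.

```python
# Toy model of recursive AI self-improvement. Each generation, the AI builds a successor;
# if the returns on improvement exceed 1, capability takes off, otherwise the process
# fizzles out. Illustrative only.

def takeoff(initial_capability=1.0, improvement_return=1.5, generations=20):
    capability = initial_capability
    trajectory = [capability]
    for _ in range(generations):
        capability *= improvement_return  # each AI builds a more capable successor
        trajectory.append(capability)
    return trajectory

print("Takeoff (returns > 1):", [round(c, 1) for c in takeoff(improvement_return=1.5)][:6], "...")
print("Fizzle  (returns < 1):", [round(c, 2) for c in takeoff(improvement_return=0.8)][:6], "...")
```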

Read More »

False Alarms, True Dangers? Current and Future Risks of Inadvertent U.S.-Russian Nuclear War

View the paper “False Alarms, True Dangers? Current and Future Risks of Inadvertent U.S.-Russian Nuclear War”

In the post–Cold War era, it is tempting to see the threat of nuclear war between the United States and Russia as remote: Both nations’ nuclear arsenals have shrunk since their Cold War peaks, and neither nation is actively threatening the other with war. A number of analysts, however, warn of the risk of an inadvertent nuclear conflict between the United States and Russia — that is, a conflict that …
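The qualitative concern can be framed probabilistically. The sketch below is a generic false-alarm model with hypothetical parameter values; it is not the model or the estimates from the paper.

```python
# Generic sketch: annual probability of inadvertent nuclear war as a function of the
# false-alarm rate and the chance that any single alarm escalates to a launch decision.
# Parameter values are hypothetical placeholders, not estimates from the paper.

def annual_inadvertent_war_probability(false_alarms_per_year, p_escalation_per_alarm):
    """P(at least one alarm escalates in a year), treating alarms as independent."""
    p_no_escalation = (1 - p_escalation_per_alarm) ** false_alarms_per_year
    return 1 - p_no_escalation

for alarms, p_esc in [(1, 1e-4), (5, 1e-4), (5, 1e-3)]:
    p = annual_inadvertent_war_probability(alarms, p_esc)
    print(f"{alarms} alarms/year, escalation prob {p_esc}: annual risk of about {p:.2e}")
```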

Read More »

The Far Future Argument for Confronting Catastrophic Threats to Humanity: Practical Significance and Alternatives

View the paper “The Far Future Argument for Confronting Catastrophic Threats to Humanity: Practical Significance and Alternatives”

Certain major global catastrophes could cause permanent harm to humanity. A large body of scholarship makes a moral argument for confronting the threat of these catastrophes based on a concern for far future generations. The far future can be defined as anything beyond the next several millennia, including millions or billions of years from now, or even longer. Given the moral principle of caring about everyone equally, including people …

Read More »

Confronting the Threat of Nuclear Winter

View the paper “Confronting the Threat of Nuclear Winter”

The fires ignited by nuclear weapon explosions send large quantities of smoke high into the atmosphere. The smoke blocks incoming sunlight and destroys ozone, causing major environmental harms worldwide, including cold temperatures, reduced precipitation, and increased ultraviolet radiation. In technical terms, nuclear winter refers to cooling such that winter-like temperatures occur during summer, as caused by nuclear war. This paper uses the term nuclear winter more generally to refer to the full set of global environmental harms from nuclear war. The …

Read More »

Isolated Refuges for Surviving Global Catastrophes

View the paper “Isolated Refuges for Surviving Global Catastrophes”

The long-term success of human civilization is of immense importance because of the huge number of lives at stake, in particular the lives of countless future generations. A catastrophe that causes permanent harm to human civilization would be a similarly immense loss. Some measures taken pre-catastrophe could help people survive and carry humanity into the future. This paper analyzes how refuges could keep a small population alive through a range of global catastrophe scenarios. The paper considers …

Read More »

Introduction: Confronting Future Catastrophic Threats to Humanity

View the paper “Introduction: Confronting Future Catastrophic Threats to Humanity”

Humanity faces a range of threats to its viability as a civilization and its very survival. These catastrophic threats include natural disasters such as supervolcano eruptions and large asteroid collisions as well as disasters caused by human activity such as nuclear war and global warming. The threats are diverse, but their would-be result is the same: the collapse of global human civilization or even human extinction.

These diverse threats are increasingly studied as one integrated field, using …

Read More »

Confronting Future Catastrophic Threats to Humanity

Confronting Future Catastrophic Threats to Humanity is a special issue of the journal Futures co-edited by Bruce Tonn and me. It contains 11 original articles discussing a range of issues on catastrophic threats. It is part of ongoing attention to catastrophic threats in Futures.

Relative to prior collections, such as the 2008 book Global Catastrophic Risks and the 2009 special issue of Futures, “Human Extinction,” this special issue aims to contribute more of a practical focus to the study of catastrophic threats to humanity, in order to better guide humanity’s …

Read More »

Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process

View the paper “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process”

Already computers can outsmart humans in specific domains, like multiplication. But humans remain firmly in control… for now. Artificial superintelligence (ASI) is AI with intelligence that vastly exceeds humanity’s across a broad range of domains. Experts increasingly believe that ASI could be built sometime in the future, could take control of the planet away from humans, and could cause a global catastrophe. Alternatively, if ASI is built safely, it may …
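One common way to structure such a risk analysis is a multiplicative decomposition of the catastrophe probability into conditional steps, so that risk-management options can be mapped to the step they affect. The decomposition and numbers below are an illustrative sketch, not results from the paper.

```python
# Illustrative decomposition of ASI catastrophe risk into conditional steps.
# Probabilities are placeholders for illustration, not estimates from the paper.

steps = {
    "ASI is eventually built": 0.5,
    "ASI is uncontrolled, given it is built": 0.3,
    "Catastrophe, given uncontrolled ASI": 0.8,
}

p_catastrophe = 1.0
for step, p in steps.items():
    p_catastrophe *= p
    print(f"{step}: {p}")

print(f"Combined catastrophe probability (toy numbers): {p_catastrophe:.2f}")
```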

Read More »