Dear friends,
People often ask me why we set GCRI up as a think tank instead of an organization working more directly to reduce the risks. The reason is that when it comes to global catastrophic risks, a little well-designed research goes a long way. It helps us make better decisions about how to reduce the risks.
For example, last week I attended a political science workshop at Yale University on how to cost-effectively spend $10 billion to reduce the probability of war between the great powers. We discussed many great ideas for things like reducing misperceptions and avoiding conflicts over other important countries. But the best idea may be a good research agenda. If that much money is to be spent, then we should first spend at least a few hundred thousand dollars to gain more confidence in how best to spend the rest.
A new paper by GCRI’s Tony Barrett develops this idea further. The paper, forthcoming in the journal Decision Analysis, applies the concept of value of information (VOI) to global catastrophic risk. Research has high VOI to the extent that it improves decision making. For example, if research shows you can get a 1% risk reduction via a $20 million project instead of a $25 million project, then the research is worth $5 million, even if the research itself costs much less. The paper outlines how an integrated assessment of global catastrophic risk could yield especially high value information by producing decision-relevant information across the full breadth of the risk.
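The arithmetic behind that example can be sketched in a few lines. This is purely illustrative, using the hypothetical numbers from the paragraph above rather than anything from the Barrett paper itself:

```python
# Illustrative value-of-information (VOI) calculation.
# Numbers are the hypothetical ones from the example above,
# not figures from the Barrett paper.

def value_of_information(cost_without_research, cost_with_research):
    """VOI = savings from the better-informed decision: the cost of the
    project you would have funded minus the cost of the cheaper project
    the research reveals, for the same risk reduction."""
    return cost_without_research - cost_with_research

# $25M project vs. $20M project for the same 1% risk reduction:
voi = value_of_information(25_000_000, 20_000_000)
print(voi)  # 5000000 -- the research is worth $5 million
```

As long as the research itself costs less than this amount, it pays for itself.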
The VOI perspective speaks to why GCRI is a think tank and how our research agenda is designed: we see great opportunities for select research to improve decision making on global catastrophic risk.
Sincerely,
Seth Baum, Executive Director
General Risk
GCRI Director of Research Tony Barrett’s new paper on the “Value of GCR Information: Cost Effectiveness-Based Approach for Global Catastrophic Risk (GCR) Reduction” is forthcoming in Decision Analysis (a non-technical summary of the paper is available here). The paper uses a value-of-information (VOI) approach to argue that a comprehensive, integrated assessment of global catastrophic risks and risk-reduction options would greatly help in evaluating both GCR reduction efforts and decisions about GCR research.
Artificial Intelligence
GCRI Associate Roman Yampolskiy gave a talk titled “Towards Good AI” at the Machine Learning Prague conference on the pathways that could lead to the development of dangerous artificial general intelligence (AGI).
GCRI Director of Research Tony Barrett will give a talk on superintelligence risk and policy analysis at the 2017 Governance of Emerging Technology conference at Arizona State.
Calls for Papers
GCRI Associate Roman Yampolskiy and GCRI Junior Associate Matthijs Maas are among the guest editors of a special issue of Informatica on superintelligence. They are looking for original research, critical studies, and review articles on topics related to superintelligence. The deadline for submitting papers is August 31, 2017. Final manuscripts are due November 30, 2017.
GCRI Associate Jacob Haqq-Misra is guest-editing a special issue of Futures on the detectability of the future Earth and of terraformed worlds. He is looking for papers that consider the future evolution of the Earth system from an astrobiological perspective, as well as how humanity or other technological civilizations could artificially create sustainable ecosystems on lifeless planets. Abstracts of 200-300 words should be sent to Jacob Haqq-Misra by May 31, 2017.
Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.