October Newsletter: The Superintelligence Debate

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As …


August Newsletter: Long-Term Trajectories

Dear friends,

This month I am proud to announce a new paper, “Long-Term Trajectories of Human Civilization”. The paper calls for attention to the fate of human civilization over time scales of millions, billions, or trillions of years into the future. While most attention goes to nearer-term phenomena, the long-term can be profoundly important to present-day decision-making. For example, one major issue the paper examines is the fate of global catastrophe survivors. How well they fare is a central factor in whether people today should focus …


December Newsletter: Year in Review

Dear friends,

It has been another productive year for GCRI. Though we have a limited budget, we’ve made major contributions to global catastrophic risk research. Here are some highlights:

* GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within SRA.

* GCRI affiliates presented at numerous other events throughout the year, including dedicated catastrophic risk events at UCLA and Gothenburg.

* …


November Newsletter: Survey of AI Projects

Dear friends,

This month we are announcing a new paper, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. This is more than the usual research paper: it’s 99 pages pulling together several months of careful work. It documents and analyzes what’s going on right now in artificial general intelligence (AGI) R&D in terms that are useful for risk management, policy, and related purposes. Essentially, this is what we need to know about AGI R&D to make a difference on the issue.

AGI is AI …


On the Promotion of Safe and Socially Beneficial Artificial Intelligence

View the paper “On the Promotion of Safe and Socially Beneficial Artificial Intelligence”

As AI becomes increasingly capable, its impacts on society grow larger. AI is now being used in medicine, transportation (self-driving cars), the military (drones), and many other sectors. The impacts of AI on society depend greatly on how the AI is designed. To improve AI design, two challenges must be met. There is the technical challenge of developing safe and beneficial technology designs, and there is the social …


October Newsletter: How To Reduce Risk

Dear friends,

As I write this, a group of researchers is meeting in Gothenburg, Sweden, on the theme of existential risk. I joined them earlier in September. My commendations to Olle Häggström and Anders Sandberg for hosting an excellent event.

My talk in Gothenburg focused on how to find the best opportunities to reduce risk. The best opportunities are often a few steps removed from academic risk and policy analysis. For example, there is a large research literature on climate change policy, much of which factors in catastrophic risk. However, the …


May Newsletter: The Value of GCR Research

Dear friends,

People often ask me why we set GCRI up as a think tank instead of an organization focused on more direct action to reduce the risks. The reason is that when it comes to global catastrophic risks, a little bit of well-designed research goes a long way. It helps us make better decisions about how to reduce the risks.

For example, last week I attended a political science workshop at Yale University on how to cost-effectively spend $10 billion to reduce the probability of war between …


April Newsletter

Centre for the Study of Existential Risk

GCRI executive director Seth Baum has joined the Cambridge Centre for the Study of Existential Risk (CSER) as a research affiliate. The affiliation is in recognition of the contribution Baum has made to CSER.

Colloquium on Catastrophic and Existential Threats

GCRI executive director Seth Baum gave a talk titled “Integrated Assessment of Global Catastrophic Risk” and GCRI director of research Tony Barrett gave a talk titled “Towards Integrated, Comprehensive Assessment of Global Catastrophic Risks to Inform Risk Reduction” at the Garrick Institute …


October Newsletter: Society for Risk Analysis Symposium

2016 Society for Risk Analysis Meeting

GCRI will lead a symposium on global catastrophic risk at the 2016 meeting of the Society for Risk Analysis (SRA), December 11-15 in San Diego. SRA is the premier academic and professional society for risk analysis. GCRI has led symposia at SRA meetings since 2010. The 2016 GCRI symposium features five talks focused on risks from artificial intelligence and nuclear weapons.

Artificial Intelligence

GCRI executive director Seth Baum’s paper, “On the promotion of safe and socially beneficial artificial intelligence,” has been accepted for publication …


September Newsletter: Media Internship Opportunity

Dear friends,

GCRI is currently recruiting for a volunteer/intern position on media engagement. We aim both to improve the quality of global catastrophic risk media coverage—including coverage of GCRI—and to help someone launch their career. The ideal fit would be a college student or recent graduate who seeks a media-related career in global catastrophic risk, such as in journalism or social science. Details can be found on GCRI’s website. We would be grateful if you would share the announcement with anyone who might be interested or apply …
