GCR News Summary July 2014


Image of a boy with smallpox, courtesy of the CDC Public Health Image Library

Workers found vials containing the smallpox virus in a storage room at a Food and Drug Administration (FDA) laboratory at the National Institutes of Health (NIH) campus in Bethesda, Maryland. The vials appear to have been in the laboratory since the 1950s. They were moved to a Centers for Disease Control (CDC) containment laboratory shortly after they were discovered. The CDC said that there was no sign the vials had been compromised and that there was no reason to think anyone had been exposed to the virus. The variola virus that causes smallpox has been considered extinct in the wild since 1980, but forgotten samples of the virus have been found on several other occasions. Experts suspect that more samples may still be kept in secret labs or preserved in the tissue of people who had the virus. Before the discovery, the only known samples of smallpox were held in official repositories designated by the World Health Organization (WHO) in Atlanta, Georgia and Novosibirsk, Russia. “Experience tells us that scientists working in laboratories with the highest biosafety standards are still caught off guard by technical breakdowns, that their staffs make mistakes and break rules, and that a predictable institutional reflex is to cover up blunders,” Jeanne Guillemin wrote in the Bulletin of the Atomic Scientists. “The best protection for both scientists and the public is to load the autoclave and bid a final goodbye to the variola virus.”

The Independent reported that University of Wisconsin-Madison professor Yoshihiro Kawaoka genetically manipulated the 2009 pandemic strain of the H1N1 swine flu virus so that it could circumvent the antibodies that make people resistant to the disease. The virus probably killed close to 300,000 people the year it emerged but is less dangerous now, since many people have some immunity. Kawaoka deliberately engineered the virus to get around our immune response in order to study the genetic changes that might make the virus more dangerous. Kawaoka has conducted this kind of “gain-of-function” research before; in 2012, he was involved in a controversial study making a strain of the H5N1 bird flu virus more transmissible in ferrets. Kawaoka’s latest research was cleared by Wisconsin’s Institutional Biosafety Committee, even though it was conducted in a facility two biosafety levels lower than the laboratories where the most dangerous pathogens are generally studied. Marc Lipsitch and Alison Galvani pointed out in a CIDRAP commentary that the 1977 H1N1 Russian flu epidemic may have originated in a laboratory accident. Lipsitch and Galvani argued that there is “no unique public health benefit of [gain-of-function] experiments, unachievable by safer means, that outweighs their risk”.

The Ebola outbreak in West Africa continues to spread. More than twice as many people have now died than in any previous outbreak. Liberia announced that it is closing all of its schools and most of its borders in an effort to control the disease. Nigeria confirmed its first case of the disease when a Liberian man developed symptoms while on a plane to Lagos. Nigeria quarantined the hospital where the man was treated and where he ultimately died. Nigerian authorities said they are monitoring 59 people who came into contact with the man. Arik Air, Nigeria’s largest airline, suspended flights to Liberia and Sierra Leone. The CDC’s Stephan Monroe said that because the disease can be spread only through direct contact with the body fluids of someone who is showing symptoms, the disease is unlikely to spread out of West Africa. Scott Dowell, another CDC official, said that part of the reason the disease continues to spread in Africa is that recent budget cuts have reduced WHO’s ability to respond to the outbreak. “If there are poor areas of the world where pathogens can get a head start,” Dowell said, “we’re all vulnerable.”

The US formally accused Russia of violating the 1987 Intermediate-Range Nuclear Forces (INF) Treaty eliminating medium-range ground-launched missiles. The US alleged that Russia has tested the prohibited missiles since 2008. In a letter to Russian President Vladimir Putin, President Obama called for high-level talks to address the breach of the treaty. Some Russian officials have argued that the INF Treaty hamstrings Russia’s ability to respond to possible threats from countries like China and Pakistan. The US accusation came as the US and the EU expanded sanctions against Russia after 298 people were killed when a civilian airliner was shot down, apparently by Russian-backed Ukrainian separatists. Administration officials said that the US had decided not to retaliate by deploying medium-range ground-launched missiles of its own, but that it might deploy other systems that are allowed under the accord. “This is a very serious matter we have attempted to address with Russia for some time now,” a US administration official told CBS News.

The US Council of Economic Advisers (CEA), which advises President Obama, released a report arguing that coping with climate change will be much more expensive the longer we delay action. The CEA estimated that the cost of limiting climate change will increase roughly 40% with each decade of delay. The report also said that we should think of climate policy as insurance against the possibility of a catastrophe caused by something like the sudden melting of ice sheets. “Unlike conventional insurance policies,” the report added, “climate policy that serves as climate insurance also leads to cleaner air, energy security, and benefits that are difficult to monetize like biological diversity.”
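To see how quickly a 40% per-decade increase compounds, here is a minimal sketch. The compounding model and the function name are illustrative assumptions; the CEA report supplies only the roughly-40%-per-decade figure.

```python
# Illustrative sketch of the CEA estimate that mitigation costs rise
# roughly 40% with each decade of delay. Treating the increase as
# compounding is an assumption made here for illustration.

def delayed_cost_multiplier(decades_delayed, growth_per_decade=0.40):
    """Relative cost of climate action after a given number of decades of delay."""
    return (1 + growth_per_decade) ** decades_delayed

for d in range(4):
    print(f"{d} decade(s) of delay -> cost multiplier {delayed_cost_multiplier(d):.2f}")
```

On this assumption, three decades of delay would nearly triple the cost of action relative to acting now.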

The US National Aeronautics and Space Administration (NASA) said that Earth was nearly hit by a potentially devastating solar storm two years ago. On July 23, 2012, an unusually powerful coronal mass ejection (CME)—essentially, a plume of matter and electromagnetic radiation ejected from the Sun’s surface—crossed Earth’s orbit. According to a paper published in December 2013 in Space Weather, if the 2012 storm had come just one week earlier, it would have hit Earth. The paper’s authors estimate the storm would have been at least as strong as the 1859 “Carrington Event”, which caused telegraph equipment across Europe and North America to spark and catch fire. If the storm had hit Earth, it would have severely damaged the global electrical and communications infrastructure, as well as disrupted food, water, and fuel supplies. A separate paper in Space Weather published in February estimated, on the basis of recent solar activity, that there is a 12% chance of another Carrington-class event in the next 10 years. And a 2008 National Academy of Sciences report found that a storm of that magnitude could cause $1-2 trillion worth of immediate damages. Daniel Baker, one of the authors of the December paper, told NASA that “we need to be prepared.”
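A 12% chance per decade adds up over longer horizons. The sketch below assumes independence between decades, which is a simplifying assumption made here for illustration, not a claim from the Space Weather paper.

```python
# Hedged sketch: cumulative probability of at least one Carrington-class
# storm over several decades, starting from the 12%-per-decade estimate.
# Independence between decades is an assumption, not part of the estimate.

def cumulative_storm_risk(decades, per_decade_risk=0.12):
    """Probability of at least one event in the given number of decades."""
    return 1 - (1 - per_decade_risk) ** decades

for n in (1, 3, 5):
    print(f"{n} decade(s): {cumulative_storm_risk(n):.0%}")
```

Under that assumption, the chance of at least one such storm over half a century would be nearly one in two.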

Google, which recently purchased two leading robotics companies, pulled out of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge as part of its commitment not to take military funding. The DARPA Robotics Challenge aims to develop robots with the skills necessary to help rescue people in the aftermath of a disaster. But DARPA is a defense agency, and the technology that allows robots to navigate a disaster area could also be used to allow them to maneuver through a battlefield.

In the Journal of Experimental and Theoretical Artificial Intelligence, Steve Omohundro argued that autonomous systems are “likely to behave in anti-social and harmful ways unless they are very carefully designed”. Autonomous systems that are programmed to protect themselves, acquire resources, reproduce, and operate efficiently will tend to pursue their ends even at the expense of their designers’ interests. Omohundro calls for designers to follow a “safe-AI scaffolding strategy” in which more sophisticated autonomous systems are built on a base of simpler, provably safe systems.

Oxford philosopher Nick Bostrom argued in his new book Superintelligence that unless we manage the development of artificial intelligence (AI) properly, we risk our own extinction. Bostrom wrote that a human-level machine intelligence could rapidly improve itself to the point of superintelligence far beyond our own individual or collective intelligence. Such a machine superintelligence would likely be impossible to contain or control. Unless we are extremely careful about how we initially go about developing AI, a superintelligent machine might simply decide to dispense with humanity in pursuit of its own purposes. “Before the prospect of an intelligence explosion,” Bostrom wrote, “we humans are like small children playing with a bomb.”

This news summary was put together in collaboration with Anthropocene. Thanks to Seth Baum, Kaitlin Butler, and Grant Wilson for help compiling the news.

For last month’s news summary, please see GCR News Summary June 2014.

You can help us compile future news posts by posting any GCR news you see in the comment thread of this blog post or by sending it via email to Grant Wilson (grant [at] gcrinstitute.org).

This post was written by Robert de Neufville, Director of Communications of the Global Catastrophic Risk Institute.
