GCR News Summary August/September 2016


EU Parliament in Strasbourg, image courtesy of David Iliff under a CC BY-SA 3.0 license

By Matthijs Maas

In the early hours of September 9, North Korea carried out its fifth nuclear test, its biggest ever at an estimated 10 kilotons. The test, which was first detected as a magnitude-5.3 earthquake, was condemned by the UN Security Council, as well as by leaders from across the world. In a statement, North Korea said it had tested a “nuclear warhead… standardized to be able to be mounted on strategic ballistic rockets”. Coming shortly after the North’s successful launch of a ballistic missile from a submarine, the incident has highlighted the continued progress of the North Korean military nuclear program. Analysts at 38 North have estimated that by 2020 Pyongyang might possess the capability to produce a reliable nuclear ICBM, as well as sufficient fissile material for 100 warheads. “They’ve greatly increased the tempo of their testing in a way, showing off their capabilities, showing us images of ground tests they could have kept hidden,” noted John Schilling, an expert on North Korea’s missile program and one of the study’s authors. In response to the test, the US Air Force flew B-1B bombers over South Korea, as South Korea’s Defense Ministry expressed concern that the North may conduct additional tests at any time. There were also renewed calls for South Korea to pursue its own nuclear weapons program.

Tensions flared in South Asia as India claimed to have launched “surgical strikes” into Pakistan-administered Kashmir against militants suspected of preparing attacks on major cities. While Pakistan officially denies that India carried out any strikes, claiming instead that two of its soldiers were killed by Indian shelling across the Line of Control (LoC), the move threatens the 2003 Kashmir ceasefire and risks escalating the frozen conflict between the nuclear-armed rivals. The raids came less than a day after Pakistan’s Defense Minister Khawaja Asif threatened to use nuclear weapons in a war with India, and anonymous Pakistani security officials have since warned that Pakistan would use tactical nuclear weapons in self-defense. The development marks a pronounced deterioration in relations between India and Pakistan since August, when Pakistan declared it was ready to enter into a bilateral moratorium on nuclear testing with India.

In August, the UN Open-Ended Working Group taking forward multilateral nuclear disarmament negotiations (OEWG) voted in favor of a final report recommending that states begin negotiations in 2017 on an international ban on nuclear weapons. The report is the culmination of an extended public campaign highlighting the humanitarian impacts of nuclear weapons. The OEWG’s recommendations are set to be submitted to the UN General Assembly in October.

The UN Security Council adopted Resolution 2310, calling for a complete ban on all nuclear weapons testing. The resolution’s passage coincided with the 20th anniversary of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The new, non-binding resolution calls on all states that have not yet done so to ratify the treaty, specifically addressing the 44 “nuclear-capable” countries whose ratification is required for the treaty to enter into force. The resolution was submitted by the US, even though the US is itself among the countries that have not ratified the CTBT. Republican senators opposed the resolution as an attempt by President Obama to sidestep the requirement that a two-thirds supermajority of the Senate approve the treaty before ratification.

Facing strong opposition from the Pentagon as well as from US allies abroad, President Obama appears to be backing down from a series of major nuclear weapons reform proposals, such as an unequivocal no-first-use pledge; “de-alerting” nuclear missiles that currently remain ready to fire on short notice; or eliminating one leg of the nuclear triad of land-, air-, and submarine-launched missiles. Reports over the summer had suggested that Mr. Obama was seeking to enact major changes to the US nuclear force posture during his final months in office, as part of a push to secure the nuclear legacy he first articulated in Prague in 2009 and reiterated at Hiroshima last spring. The most far-reaching option still under active consideration is a one-third cut to the US deployed strategic arsenal, from the 1,550 warheads permitted under the New START treaty to just over 1,000; other options on the table include reductions in reserve warheads or military stores of highly enriched uranium (HEU), or delaying parts of the ambitious US nuclear modernization program, which is slated to cost up to $1 trillion in the coming decades. Mr. Obama is due to make a final decision on the reforms in October. Arms control advocate Joseph Cirincione of the Ploughshares Fund expressed pessimism, however, saying that far-reaching changes might have been possible last year, but that “[the administration] took their eye off the ball…. it’s not outside elements that are the only problem, but the politics during the presidential election.”

Citing “unfriendly actions” by the US towards Russia, President Vladimir Putin unilaterally suspended a landmark agreement with the US on the disposal of surplus weapons-grade plutonium, marking a further deterioration in relations between Moscow and Washington. Under the 2000 Plutonium Management and Disposition Agreement (PMDA), each country committed to destroying 34 metric tons of plutonium by burning it in reactors, jointly disposing of enough fissile material for 17,000 nuclear weapons. However, in a new decree and bill (in Russian), President Putin claimed that while Russia had abided by these terms, the US had not, and he issued a number of conditions for resuming the agreement, including reducing the American military presence in NATO countries bordering Russia and canceling sanctions against Russia.

Five major technology companies (Google DeepMind, Amazon, Facebook, IBM, and Microsoft) announced a joint Partnership on Artificial Intelligence in order “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influence on people and society.” In a press release, the companies said they will conduct joint research on AI ethics, interoperability, and reliability, and publish it under an open license. The Partnership is the first industry-wide effort to coordinate on the ethics of AI. “I think this is exactly the right time,” Dr. Seán Ó hÉigeartaigh, of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), said in an interview; “The issues around data privacy, automation in employment, algorithmic fairness, are not only being raised by academics in ivory towers but the people who are at the forefront of the research and understand where it is going to go in five, ten or fifteen years.”

The Stanford One Hundred Year Study on Artificial Intelligence (AI100) published its 2016 report, “Artificial Intelligence and Life in 2030”. The report argues that “attempts to regulate AI in general would be misguided, since there is no clear definition of AI… and the risks and considerations are very different in different domains.” The study dismissed concerns that artificial intelligence could be a threat to humankind on the grounds that “[n]o machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.” One of the report’s authors, Professor Oren Etzioni of the Allen Institute for Artificial Intelligence, reiterated this claim in the MIT Technology Review, arguing that fears of an artificial superintelligence may be overblown, since the large majority of Association for the Advancement of Artificial Intelligence (AAAI) members believe artificial superintelligence to be more than 25 years off, and therefore “beyond the foreseeable horizon”.

The White House Office of Science and Technology Policy (OSTP) published the first public responses to its open “Request for Information on the Future of Artificial Intelligence,” which it issued last spring as part of a broader effort to solicit the views of citizens, academic and industry researchers, private companies, and foundations. The responses, which include entries by leading AI-safety organizations such as the Future of Life Institute, the Machine Intelligence Research Institute, and the Future of Humanity Institute, as well as a submission by DeepMind, cover a broad range of issues, from the legal and governance implications of AI and the use of AI for public good to safety approaches and research gaps.

The COP21 Paris climate accord is set to enter into force as soon as next month, after the European Union voted overwhelmingly to ratify the deal. The vote, coming on the heels of 31 new ratifications at the UN General Assembly in September, is sufficient to fulfill both of the accord’s thresholds for entry into force: ratification by at least 55 countries representing at least 55% of global emissions. The rapid EU decision comes as a surprise; while seven member states had already ratified the agreement independently, the bloc had previously estimated that it would not be able to ratify the agreement until 2017. The landmark climate agreement aims to limit global temperature rise to 2.0°C (3.6°F) above pre-industrial levels. However, temperatures have already risen almost half that much. At the same time, US scientists at the National Oceanic and Atmospheric Administration (NOAA) have reported increasing and persistent flooding of US coasts as a result of sea level rise, and measurements by the Scripps Institution of Oceanography in September suggested that atmospheric carbon dioxide levels may have permanently passed the key threshold of 400 ppm.

The internet hosting provider OVH was hit by the largest DDoS (distributed denial-of-service) attack ever recorded. According to OVH CTO Octave Klaba, most of the attack traffic originated from infected “internet-of-things” devices, such as digital video recorders and cameras. The attack comes in the wake of a recent warning by cybersecurity expert Bruce Schneier that several major internet infrastructure companies have seen a marked increase in systematic attacks against them. “Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services,” Schneier wrote, arguing that this systematic pattern of probes is beyond the capabilities of activists and criminals, and instead appears suggestive of “a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar.”

Thanks to Tony Barrett, Seth Baum, Kaitlin Butler, Robert de Neufville, and Grant Wilson for help compiling the news.

For the last news summary, please see GCR News Summary June/July 2016.
