A 100-page report written by 26 authors from 14 institutions, spanning academia, civil society, and industry, builds on a two-day workshop held in Oxford, UK, in February 2017, and addresses the potential threats posed by artificial intelligence. The groups involved include the Future of Humanity Institute, University of Oxford; the Centre for the Study of Existential Risk, University of Cambridge; the Center for a New American Security; the Electronic Frontier Foundation; and OpenAI.
According to the report, artificial intelligence and machine learning capabilities are growing
at an unprecedented rate. While there are countless beneficial applications, less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. The report analyzes, but does not resolve the question of what the long-term equilibrium between attackers and defenders will or should be. It focuses instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.
In response to the changing threat landscape, the authors make four high-level recommendations:
1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
4. The range of stakeholders and domain experts involved in discussions of these challenges should be actively expanded.