Unethical Research: How to Create a Malevolent Artificial Intelligence

Sapience Technical Report STR 2016-03

Author: Roman V. Yampolskiy

Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools that protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared toward the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine.

Availability of such information would be of great value, particularly to computer scientists, mathematicians, and others who have an interest in AI safety and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which could negatively affect human activities and, in the worst case, cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).

Contents

  • Why a paper on this topic?
  • Hazardous Intelligent Software
  • Who might be interested in creating Malevolent AI?
  • How to create a Malevolent AI
  • Societal Impact

Cite as: Roman V. Yampolskiy. "Unethical Research: How to Create a Malevolent Artificial Intelligence". arXiv:1605.02817 [cs.AI], Sapience Project, Technical Report STR 2016-03 (May 2016).

Full Report

