The Singularity Controversy, Part I

Sapience Technical Report STR 2016-1

Author: Amnon H. Eden

“The Singularity Controversy, Part I: Lessons Learned and Open Questions: Conclusions from the Battle on the Legitimacy of the Debate” informs policy makers about the nature and merit of the arguments for and against the concerns associated with a potential technological singularity.

Part I describes the lessons learned from our investigation of the subject, separating the arguments of merit from the fallacies and misconceptions that confuse the debate and undermine its rational resolution.

Cite As: Amnon H. Eden. “The Singularity Controversy, Part I: Lessons Learned and Open Questions”. arXiv:1601.05977 [cs.AI], Sapience Project, Technical Report STR 2016-1 (January 2016), DOI 10.13140/RG.2.1.3416.6809


  • Studying the Singularity
  • Conclusions drawn
    • Singularity = Acceleration + Discontinuity + Superintelligence
    • Which singularity do you mean? AI or IA?
    • Some ‘singularities’ are implausible, incoherent, or not singular
    • Pulp “singularities” are not scientific hypotheses
    • The risks of AI arise from indifference, not malevolence
    • The risk of AI is essentially like the risk of any powerful technology
    • We’re not clear what “artificial intelligence” means
    • The debate hasn’t ended; it has barely begun
  • Open Questions
    • Can AI be controlled?
    • AI or IA?
    • Can we prove AIs are becoming more intelligent?
 Full Report


Related posts:

Energetics of the Brain and AI

Sapience Technical Report STR 2016-02 Author: Anders Sandberg Do the energy requirements of the human brain impose constraints that give reason to doubt the feasibility of artificial intelligence? In Energetics of the Brain & AI I review some relevant estimates of brain bioenergetics and analyze some of the methods of estimating brain emulation energy requirements.

Unethical Research: How to Create a Malevolent Artificial Intelligence

Sapience Technical Report STR 2016-03 Author: Roman V. Yampolskiy Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine.
