
The Singularity Controversy, Part I

Sapience Technical Report STR 2016-1

Author: Amnon H. Eden

‘The Singularity Controversy, Part I: Lessons Learned and Open Questions: Conclusions from the Battle on the Legitimacy of the Debate’ informs policy makers on the nature and the merit of the arguments for and against the concerns associated with a potential technological singularity.

Part I describes the lessons learned from our investigation of the subject, separating the arguments of merit from the fallacies and misconceptions that confuse the debate and undermine its rational resolution.

Cite As: Amnon H. Eden. “The Singularity Controversy, Part I: Lessons Learned and Open Questions”. arXiv:1601.05977 [cs.AI], Sapience Project, Technical Report STR 2016-1 (January 2016), DOI 10.13140/RG.2.1.3416.6809

Contents

  • Studying the Singularity
  • Conclusions drawn
    • Singularity = Acceleration + Discontinuity + Superintelligence
    • Which singularity do you mean? AI or IA?
    • Some ‘singularities’ are implausible, incoherent, or not singular
    • Pulp “singularities” are not scientific hypotheses
    • The risks of AI arise from indifference, not malevolence
    • The risk of AI is essentially like the risk of any powerful technology
    • We’re not clear what “artificial intelligence” means
    • The debate hasn’t ended; it has barely begun
  • Open Questions
    • Can AI be controlled?
    • AI or IA?
    • Can we prove AIs are becoming more intelligent?
  • Full Report

 

Related posts:

Five More Lessons about Documentation: A follow-up to Mark Birch’s “Developer Documentation”

Last week’s DevBizOps blog entry (“Developer Documentation: Developers don’t like writing docs, what’s the alternative?”) asked: how can programmers get their answers from documentation, and are there alternatives? As ever, Mark’s post speaks to questions every developer has to ask when working to understand software. Open source or proprietary, documentation is necessary for using, extending, or changing software. The costs of searching for answers are well known to developers and project managers, and the empirical literature and decades of research in software comprehension and reverse engineering show that the costs of understanding software are significant. What, then, can be done? We offer five more lessons in addition to Mark’s post, presented as “formulas”:

Unethical Research: How to Create a Malevolent Artificial Intelligence

Sapience Technical Report STR 2016-03

Author: Roman V. Yampolskiy

Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine.
