Sapience.org is a think tank: a public policy research institute based in London, UK
The media are increasingly reporting on a budding public debate about the potential impact of disruptive computation, and in particular the effects of artificial intelligence. Eminent scientists and industry leaders have raised concerns about existential risks. Public awareness has been heightened by declarations of concern such as the recent Open Letter on Autonomous Weapons, by films (e.g. Ex Machina) and by TV dramas (e.g. Black Mirror). Stephen Hawking and colleagues appealed to the public after the release of the film Transcendence, urging against complacency about AI risks.
The public debate is fuelled by confusion among scholars about what exactly is being alleged. Research we have conducted since 2009, published in Singularity Hypotheses (Springer, 2013), shows that the debate is often ill-informed. In particular, the term 'technological singularity' describes two distinct and entirely different scenarios. Moreover, critiques of singularity hypotheses frequently aim their objections at the plausibility of apocalyptic scenarios, distracted by the singularity 'meme' (such as Skynet in the Terminator franchise), when in fact the literature is focused entirely on risks from indifferent, not malevolent, AI.