Sapience.org is a think tank: a public policy research institute based in London, UK.
We advise venture capital investors, blockchain startups, health organizations, policy makers, and charity groups on the disruptive impact of artificial intelligence and increasingly disruptive computing technology. We support strategic technological decisions requiring answers to questions such as: Which machine learning models will replace deep neural networks? How will AI affect computer trading, now that almost all stocks are traded by algorithms? Which investment algorithms will hedge funds use next decade? How will driverless cars affect transportation? Will AI displace 40% of jobs within 20 years?
The media is increasingly reporting on a budding public debate about the potential impact of disruptive computation, and in particular the effects of artificial intelligence. Eminent scientists and industry leaders have raised concerns about existential risks. Public awareness is raised by declarations of concern such as the recent Open Letter on Autonomous Weapons, films (eg Ex Machina) and TV dramas (eg Black Mirror). Stephen Hawking and colleagues appealed to the public after the release of the film Transcendence, urging against complacency about AI risks.
The public debate is fuelled by confusion among scholars about what exactly is alleged. Research we have conducted since 2009, published in Singularity Hypotheses (Springer, 2013), shows that the debate is often ill-informed. In particular, the term ‘technological singularity’ describes two distinct and entirely different scenarios. Moreover, critiques of singularity hypotheses frequently aim their objections at the plausibility of apocalyptic scenarios, distracted by the singularity ‘meme’ (such as Skynet in the Terminator franchise), when in fact the literature is focused on risks from indifferent, not malevolent, AI.