The Singularity is Back

An open letter orchestrated by the Future of Life Institute calls for a six-month moratorium on research in artificial intelligence, citing risks attached to ‘the loss of control’ over self-improving AI. The experts, opinionated oligarchs among the cosignatories notwithstanding, insist that the risks are both existential and imminent. The Singularity is back in the news.

“Singularity Hypotheses” was published ten years ago. Since then I’ve been asked multiple times whether the volume merits a follow-up (beyond the second volume). FLI’s open letter is as good an opportunity as any to pause and ask: what do we know, and what have we learned?

The subject of a deadly ‘Singularity’ last occupied the headlines nine years ago, in the flurry of coverage that followed Johnny Depp’s film Transcendence. Stephen Hawking weighed in on the risk posed by the kind of superintelligence this film and many others depict, saying that “Success in creating [superintelligent] AI would be the biggest event in human history. Unfortunately, it might also be the last.” The scenario in question is the emergence of an artificial superintelligence, a possibility that has been studied extensively elsewhere, with similar conclusions.

What are the dangers?

Setting aside “Terminator” and similarly fantastic nonsense, the real risks come not from a malevolent superintelligence but rather from one that is indifferent. An artificial superintelligence, namely an AI that is able to beat any human in any task, as Hawking and his distinguished co-authors of the article explained, “could outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand”. These are only a few of the many loss-of-control scenarios that experts in the field have pointed out in our volume.

Why is it likely?

Many possible paths end in a singularity of artificial superintelligence, with recursive self-improvement often considered the most plausible. AI is already “writing AI”: programs can write programs effectively and much faster than any human can. It is not hard to imagine an AI that rewrites itself and, being better at it than any human, improves itself at superhuman speeds. Such a ‘hard takeoff’, which Yudkowsky’s chapter in our volume describes articulately, would leave humans far behind.
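Why would slow, unremarkable progress turn into a runaway? A minimal sketch of the dynamic in Python may help. It is purely illustrative: the model, its parameters such as base_gain, and the ‘human level’ threshold are my own assumptions, not the volume’s. The sketch assumes that each system’s ability to improve its successor grows with its own capability, so the gains compound.

```python
# Toy model of a "hard takeoff" via recursive self-improvement.
# Assumption (illustrative only): each generation's ability to improve its
# successor scales with its own capability, so the gains compound.

HUMAN_LEVEL = 1.0  # capability of the best human AI researcher (arbitrary units)


def next_generation(capability: float, base_gain: float = 0.1) -> float:
    """Capability of the successor system written by the current one.

    The improvement step is proportional to the square of the current
    capability: a smarter system is a better AI engineer.
    """
    return capability + base_gain * capability ** 2


def takeoff(initial: float = 0.5, generations: int = 30) -> list[float]:
    """Simulate successive self-rewrites and return the capability curve."""
    curve = [initial]
    for _ in range(generations):
        curve.append(next_generation(curve[-1]))
    return curve


if __name__ == "__main__":
    for i, c in enumerate(takeoff()):
        marker = "  <- surpasses the best human" if c > HUMAN_LEVEL else ""
        print(f"generation {i:2d}: capability {c:12.4g}{marker}")
```

With these made-up numbers the simulated system crawls along for about ten generations, crosses the human-level mark around the eleventh, and reaches absurdly large values well before the thirtieth. The specific figures mean nothing; the shape of the curve, slow and then suddenly explosive, is exactly what the hard-takeoff argument worries about.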

What can be done?

Not much, apparently. Suppose a moratorium is agreed upon and somehow effectively “pauses” research in AI for six months; what then? Besides, moratoria need good faith. How likely is Beijing to agree and pause its effort to develop the powerful, ubiquitous AI necessary to control over 1.4 billion people? What stops the military leaders of Iran, Russia, Israel, and the US Department of Defense from creating superior weapons, cutting down on casualties, and developing swarms of autonomous killer drones? The incentives for corporations to create an AI that outsmarts everyone else in the stock markets, and for armies to win on the battlefield, are too high, and the bar is too low, for a moratorium to hold for any length of time.

What cannot be done?

Why is keeping superintelligence from getting out of control such a hard problem? Experts refer to the containment problem, aka the boxing problem: an inferior intelligence cannot guarantee that a superintelligence will remain contained, unable to influence the “outside”. The film Ex Machina and the TV series Next both illustrate why a superintelligence, if sufficiently cunning, will ultimately overcome any limitations set by humans. In short, even with good faith all round, a superintelligence may escape our control. Another problem is the ‘alignment problem’: a superintelligence may have goals different from ours, specifically with respect to moral judgements about the right to human life, freedom, and happiness. Humans cannot guarantee that a superintelligence with any agency over changing itself would ‘align’ itself with the ethics that guide us.

The term ‘singularity’ hints at our inability to see beyond the event horizon. It therefore remains to be seen whether an artificial intelligence will ever be capable of improving itself exponentially and breaking the superintelligence barrier, and if so, what will follow.

Sapience.org is a think tank: a public policy research institute focused on the future of progressively intelligent computing and its disruptive effects on the economy and society. Our reports inform policymakers and the public on disruptive computing technology and on effective methods of managing its risks.
