Agree:

Stephen Hawking British physicist

The primitive forms of artificial intelligence developed so far have already proved very useful, but I fear the consequences of creating something that can match or surpass humans.

Nick Bostrom

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

Alan Turing British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI

Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.

Stuart Russell Professor of Computer Science at Berkeley

The question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

Marvin Minsky Mathematician, computer scientist, and pioneer in the field of artificial intelligence

The ultimate risk comes when our greedy, lazy, masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful.

Hans Moravec Former professor at the Robotics Institute of CMU, and founder of the SeeGrid Corporation

He states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria.”

Marcus Hutter Professor in the Research School of Computer Science at Australian National University

Way before the singularity, even when setting up a virtual society in our imagination, there are likely some immediate differences, for example that the value of an individual life suddenly drops, with drastic consequences.

David Chalmers Australian National University Professor

An intelligence explosion has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet.

Biased? Please add more opinions.