Agree:


Nick Bostrom Philosopher at the University of Oxford, author of Superintelligence

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

Alan Turing British mathematician and logician, a major contributor to mathematics, cryptanalysis, and the foundations of AI

Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.

Sam Harris American author, philosopher, and neuroscientist

It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, "the control problem"—the solution to which would gu[...]

Daniel C. Dennett Philosopher and Austin B. Fletcher Professor of Philosophy, Tufts University

The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.

David Chalmers Professor of Philosophy, Australian National University

An intelligence explosion has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet.

Toby Ord Moral philosopher at Oxford University, focusing on big-picture questions

His current research is on avoiding the threat of human extinction and thus safeguarding a positive future for humanity. He is a leading expert on the potential threats and opportunities posed by advanced artificial intelligence over the coming decades.
