Agree:


Demis Hassabis Founder & CEO, DeepMind

He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense with their human creators.

Clive Sinclair Entrepreneur and inventor

Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive

Stuart Russell Professor of Computer Science at Berkeley

The question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

Jaan Tallinn Co-founder of Skype and Kazaa

A superintelligent AI could be a serious problem

K. Eric Drexler Founding father of nanotechnology

AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.

David McAllester Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago

The Singularity would enable machines to become infinitely intelligent, presenting an ‘incredibly dangerous scenario’, he says.

Hans Moravec Former professor at the Robotics Institute of CMU, and founder of the SeeGrid Corporation

He states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”

Richard Sutton Professor and iCORE chair of computer science at University of Alberta

He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, and goes on to say that such AIs “will not be under our control”

Elon Musk Founder of SpaceX, co-founder of Tesla, SolarCity & PayPal

AI potentially more dangerous than nukes

Jaan Tallinn Co-founder of Skype and Kazaa

He says our biggest existential threat is artificial intelligence

Disagree:


Paul G. Allen Co-founder of Microsoft

Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Steve Wozniak Co-founder of Apple Inc., inventor of the personal computer

It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.

Rodney A. Brooks Fellow of the Australian Academy of Science, author, and robotics entrepreneur

If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering

Grady Booch Software engineer. Developed UML

Might a superintelligent AI emerge? In some distant future, perhaps