Agree:

Bill Gates Philanthropist. Founder and former CEO of Microsoft.

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent...A few decades after that though the intelligence is strong enough to be a concern.

Nick Bostrom Philosopher, Director of the Future of Humanity Institute at the University of Oxford

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound

Yoshua Bengio Computer scientist at University of Montreal

One thing I came with is also … this subject of safe AI came up in many discussions, and I would say that these discussions left a strong [positive] impression on me.

Alan Turing British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI

Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.

Roman Yampolskiy Computer scientist at the University of Louisville

Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.

Eric Horvitz Director of Microsoft Research's main Redmond lab

Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.

Clive Sinclair Entrepreneur and inventor

Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive.

Vernor Vinge Retired San Diego State University Professor and author

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended (1993)

Ray Kurzweil Author, computer scientist, inventor and futurist

The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far mo...

Francesca Rossi Computer Scientist, Professor at the University of Padova

AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. [...] Consider automated trading systems. A bad decision in these systems may be (and has been) a financial disaster for many people. That will also be the case for self-driving cars. Some of their ...


Disagree:

Andrew Ng Baidu; Stanford CS faculty; founded Coursera and Google Brain

Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it - an unnecessary distraction.

Paul G. Allen Co-founder of Microsoft

Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Steve Wozniak Co-Founder of Apple Inc, inventor of the personal computer

It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.

Mark Zuckerberg CEO at Facebook

I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.

Rodney A. Brooks Fellow of the Australian Academy of Science, author, and robotics entrepreneur

If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering.

Roger Schank John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University

Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.

Yann LeCun Computer scientist working in machine learning and computer vision

There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this... those are not things that we’re worried about because we just don’t have the technology to build machines like that.

Tim O'Reilly Founder and CEO, O'Reilly Media. Investor. Studied at Harvard University.

Fear is not the right frame of mind to think about AI's impact on our society.

Ben Goertzel AI researcher

Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way.

Oren Etzioni CEO of the Allen Institute for Artificial Intelligence

Predictions that superintelligence is on the foreseeable horizon are not supported by the available data. Moreover, it’s possible that AI systems could collaborate with people to create a symbiotic superintelligence. That would be very different from the pernicious and autonomous kind envisioned by Professor Bostrom.
