Bill Gates
Philanthropist. Co-founder and former CEO of Microsoft.
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent... A few decades after that though the intelligence is strong enough to be a concern.
Some serious thinkers fear that AI could one day pose an existential threat: a ‘superintelligence’ might pursue goals that prove not to be aligned with the continued existence of humankind.
Stephen Hawking
Theoretical physicist and cosmologist, University of Cambridge
The primitive forms of artificial intelligence developed so far have already proved very useful, but I fear the consequences of creating something that can match or surpass humans.
James Barrat
Documentary filmmaker and author of Our Final Invention
AI will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.
Nick Bostrom
Philosopher at the University of Oxford and author of Superintelligence
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
Yoshua Bengio
Computer scientist at the University of Montreal
One thing I came away with … this subject of safe AI came up in many discussions, and I would say that these discussions left a strong [positive] impression on me.
Roman Yampolskiy
Computer scientist at the University of Louisville
Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.
Andrew Ng
Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it - an unnecessary distraction.
Paul Allen
Co-founder of Microsoft
Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
Steve Wozniak
Co-founder of Apple Inc. and designer of the Apple I and Apple II personal computers
It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.
One Hundred Year Study on Artificial Intelligence (AI100)
2016 report of the Stanford University Study Panel
Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.
Mark Zuckerberg
Co-founder and CEO of Facebook
I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.
Steven Pinker
Johnstone Family Professor in the Department of Psychology at Harvard University
There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.
Rodney A. Brooks
Fellow of the Australian Academy of Science, author, and robotics entrepreneur
If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering.
Roger Schank
John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University
Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.
Yann LeCun
Computer scientist working in machine learning and computer vision
There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this... those are not things that we’re worried about because we just don’t have the technology to build machines like that.
How close to thinking are the machines we have built, or are going to be built soon? The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. Differences are in performance, structural, functional, and more. Any maundering about how to deal with thinking machines is totally premature to say the least.