Andrew Ng
Chief Scientist at Baidu; Stanford CS faculty; co-founder of Coursera and Google Brain
Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it - an unnecessary distraction.
Paul Allen
Co-founder of Microsoft; from "The Singularity Isn't Near" (with Mark Greaves, MIT Technology Review, 2011)
Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
Steve Wozniak
Co-founder of Apple Inc., inventor of the personal computer
It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.
One Hundred Year Study on Artificial Intelligence (AI100)
Stanford University Study Panel, from the 2016 report "Artificial Intelligence and Life in 2030"
Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.
Mark Zuckerberg
Co-founder and CEO of Facebook
I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.
Steven Pinker
Johnstone Family Professor in the Department of Psychology at Harvard University
There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power [...]
Rodney A. Brooks
Fellow of the Australian Academy of Science, author, and robotics entrepreneur
If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering.
Roger Schank
John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University
Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.
Yann LeCun
Computer scientist working in machine learning and computer vision
There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this... those are not things that we’re worried about because we just don’t have the technology to build machines like that.
How close to thinking are the machines we have built, or are going to build soon? The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. The differences are in performance, structure, function, and more. Any maundering about how to deal with thinking machines is totally premature, to say the least.