He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense with their human creators.
AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.
Hans Moravec
Former professor at the Robotics Institute of CMU, and founder of the SeeGrid Corporation
He states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”
Richard Sutton
Professor and iCORE chair of computer science at University of Alberta
He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, and adds that such AIs “will not be under our control”.
Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
Steve Wozniak
Co-Founder of Apple Inc, inventor of the personal computer
It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.
Rodney A. Brooks
Fellow of the Australian Academy of Science, author, and robotics entrepreneur
If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering