Ilya Sutskever
Co-founder and Research Director of OpenAI
It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.
AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.
Marvin Minsky
Mathematician, computer scientist, and pioneer in the field of artificial intelligence
The ultimate risk comes when our greedy, lazy masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful.
Alan Turing
British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI
Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.
Jed McCaleb
Co-founder of the Stellar Development Foundation
By supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good.
... As concern grows, Dustin Moskovitz and Cari Tuna’s funding outfit is also paying attention, with several recent grants focused on the risks of AI, including one for $5.5 million.
Eric Horvitz
Director of Microsoft Research's main Redmond lab
Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.
Roman Yampolskiy
Computer scientist at the University of Louisville
Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.
Mark Zuckerberg
Founder and CEO of Facebook
I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.
Steven Pinker
Johnstone Family Professor in the Department of Psychology at Harvard University
There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles: all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power...
Paul Allen
Co-founder of Microsoft
Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
One Hundred Year Study on Artificial Intelligence (AI100)
Stanford University Study Panel
Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.
Douglas Hofstadter
Professor of cognitive science. Pulitzer prize winner
Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries.