Opinions from computer scientists
Mark Zuckerberg, CEO at Facebook: I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain: Need to time technology well: 2007 was a good time to launch the iPhone, but 1993 (Apple Newton) was not, since battery/screen/chip tech wasn’t there. Extreme example: Leonardo da Vinci’s (1480s) invention of helicopters was way too early; engine technology didn’t get there until the 1900s. Maybe 2007 was early for autonomous driving (DARPA Urban Challenge) since AI and sensors were not yet there. From ~2015 the ecosystem was more r…
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain: The tech world is used to a tectonic shift every 5 years from new inventions. Now tech has infected other industries, so everyone has to shift.
“Disagree and commit”: This phrase will save a lot of time. If you have conviction on a particular direction even though there's no consensus, it's helpful to say, "Look, I know we disagree on this but will you gamble with me on it? Disagree and commit?" By the time you're at this point, no one can know the answer for sure, and you'll probably get a quick yes.
I don’t like the ‘Plan B’ idea that we want to go to space so we have a backup planet. We have sent probes to every planet in this solar system, and believe me, this is the best planet. There is no doubt. This is the one that you want to protect.
Marvin Minsky, Mathematician, computer scientist, and pioneer in the field of artificial intelligence: The ultimate risk comes when our greedy, lazy masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful.
Alan Turing, British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI: Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.
Peter Thiel, Technology entrepreneur and investor: The Thiel Foundation is the single largest donor to MIRI, an organization founded by Eliezer Yudkowsky.
Eric Horvitz, Director of Microsoft Research's main Redmond lab: Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.
Roman Yampolskiy, Computer scientist at the University of Louisville: Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.
Richard Sutton, Professor and iCORE chair of computer science at University of Alberta: He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, then goes on to say the AIs “will not be under our control”.
Marcus Hutter, Professor in the Research School of Computer Science at Australian National University: Way before the singularity, even when setting up a virtual society in our image, there are likely some immediate differences; for example, the value of an individual life suddenly drops, with drastic consequences.
David McAllester, Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago: The Singularity would enable machines to become infinitely intelligent, and would pose an ‘incredibly dangerous scenario’, he says.
Stephen Omohundro, Scientist, Self-Aware Systems; Co-founder, Center for Complex Systems Research: Omohundro’s research concludes that the drives of superintelligent machines will be on a collision course with our own, unless we design them very carefully.
Yoshua Bengio, Computer scientist at University of Montreal: One thing I came away with … this subject of safe AI came up in many discussions, and I would say that these discussions left a strong [positive] impression on me.
Bill Gates, Philanthropist. Founder and former CEO of Microsoft: More than 120 retired generals and admirals recently wrote a letter to Congress arguing that U.S. aid programs are critical to preventing conflict and reducing the need to put our men and women in uniform in harm’s way.
Eliezer Yudkowsky, AI researcher who popularized the idea of friendly artificial intelligence: Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.
Clive Sinclair, Entrepreneur and inventor: Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive.
Vernor Vinge, Retired San Diego State University Professor and author: Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. (1993)
Tim O'Reilly, Founder and CEO, O'Reilly Media. Investor. Studied at Harvard University: Fear is not the right frame of mind to think about AI's impact on our society.
Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way.
Paul G. Allen, Co-founder of Microsoft: Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
Ray Kurzweil, Author, computer scientist, inventor and futurist: The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far mo…
Francesca Rossi, Computer Scientist, Professor at the University of Padova: AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. [...] Consider automated trading systems. A bad decision in these systems may be (and has been) a financial disaster for many people. That will also be the case for self-driving cars. Some of their …
Gordon Moore, Co-founder and chairman emeritus of Intel. Proponent of Moore's Law: The singularity is unlikely ever to occur because of the complexity with which the human brain operates.
Douglas Hofstadter, Professor of cognitive science. Pulitzer prize winner: Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries.
Rodney A. Brooks, Fellow of the Australian Academy of Science, author, and robotics entrepreneur: If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering.
Roger Schank, John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University: Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence: Predictions that superintelligence is on the foreseeable horizon are not supported by the available data. Moreover, it’s possible that AI systems could collaborate with people to create a symbiotic superintelligence. That would be very different from the pernicious and autonomous kind envisioned by Professor Bostrom.
Steve Wozniak, Co-Founder of Apple Inc, inventor of the personal computer: It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.
Yann LeCun, Computer scientist working in machine learning and computer vision: There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this... those are not things that we’re worried about because we just don’t have the technology to build machines like that.
Stuart Russell, Professor of Computer Science at Berkeley: The question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain: The US government should focus on accelerating US AI, rather than trying to slow down anyone else.