Advanced artificial intelligence will pose a serious risk to society within the next 50 years
Opinions from 90 influencers, of which:
Bill Gates (Philanthropist; founder and former CEO of Microsoft): I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent... A few decades after that though the intelligence is strong enough to be a concern.
Sam Altman (President of Y Combinator; investor in Reddit, Stripe, Change.org, Pinterest and many others): Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.
Stephen Hawking (British physicist): The primitive forms of artificial intelligence developed so far have already proved very useful, but I fear the consequences of creating something that can match or surpass humans.
James Barrat (Filmmaker, speaker and author): AI will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.
Roman Yampolskiy (Computer scientist at the University of Louisville): Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.
Yoshua Bengio (Computer scientist at the University of Montreal): One thing I came with is also … this subject of safe AI came up in many discussions, and I would say that these discussions left a strong [positive] impression on me.
Alan Turing (British mathematician and logician; a major contributor to mathematics, cryptanalysis, and AI): Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.
Dustin Moskovitz (Co-founder of Facebook and Asana): … As concern grows, Dustin Moskovitz and Cari Tuna's funding outfit is also paying attention, with several recent grants focused on the risks of AI, including one for $5.5 million.
World Economic Forum (report): Some serious thinkers fear that AI could one day pose an existential threat: a 'superintelligence' might pursue goals that prove not to be aligned with the continued existence of humankind.
Sam Harris (American author, philosopher, and neuroscientist): It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, "the control problem" […]
Demis Hassabis (Founder & CEO, DeepMind): He accepts there are "legitimate risks that we should be thinking about now", but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense with their human creators.
Max Tegmark (Professor at MIT & co-founder of the Future of Life Institute): Superintelligent machines are quite feasible.
Shane Legg (Machine learning researcher and co-founder of DeepMind): It's my number 1 risk for this century.
Eric Horvitz (Director of Microsoft Research's main Redmond lab): Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.
William Poundstone (Journalist): There is going to be interest in creating machines with will, whose interests are not our own. And that's without considering what machines that terrorists, rogue regimes, and intelligence agencies of the less roguish nations may devise. I think the notion of Frankensteinian AI, which turns on its creators, is something worth taking seriously.
Clive Sinclair (Entrepreneur and inventor): Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive.
Jaan Tallinn (Co-founder of Skype and Kazaa): A superintelligent AI could be a serious problem.
Frank Wilczek (Physicist at MIT; recipient of the 2004 Nobel Prize in Physics): Without careful restraint and tact, researchers could wake up to discover they've enabled the creation of armies of powerful, clever, vicious paranoiacs.
Stuart Russell (Professor of Computer Science at Berkeley): The question is: could you prove that your systems can't ever, no matter how smart they are, overwrite their original goals as set by the humans?
Ray Kurzweil (Author, computer scientist, inventor and futurist): The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. […]
Vernor Vinge (Retired San Diego State University professor and author): Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. (1993)
Francesca Rossi (Computer scientist; Professor at the University of Padova): AI is already more "intelligent" than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. […] Consider automated trading systems. A bad decision in these systems may be (and has been) a financial disaster for many people. That will also be the case for self-driving cars. […]
Jack Ma (Alibaba founder): Social conflicts in the next three decades will have an impact on all sorts of industries and walks of life. If trade stops, war starts.
Chris Olah (Google researcher): We believe it's essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.
Bart Selman (Computer scientist at Cornell University): It's a societal risk. Society will have to adapt. How we will adapt is not fully clear yet. But I think it's something we'll have to think about.
Andrew Davison (Professor at Imperial College London): Exponentially increasing technology might lead to super-human AI and other developments that will change the world utterly in the surprisingly near future (i.e. perhaps the next 20-30 years).
David McAllester (Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago): The Singularity would enable machines to become infinitely intelligent, and would pose an "incredibly dangerous scenario", he says.
Stephen Omohundro (Scientist, Self-Aware Systems; co-founder, Center for Complex Systems Research): Omohundro's research concludes that the drives of superintelligent machines will be on a collision course with our own, unless we design them very carefully.
Andrew Ng (Baidu; Stanford CS faculty; founded Coursera and Google Brain): Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it - an unnecessary distraction.
Paul G. Allen (Co-founder of Microsoft): Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
Steve Wozniak (Co-founder of Apple Inc.; inventor of the personal computer): It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.
Stanford University (report): Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.
Mark Zuckerberg (CEO of Facebook): I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.
Steven Pinker (Johnstone Family Professor, Department of Psychology, Harvard University): There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. […]
Rodney A. Brooks (Fellow of the Australian Academy of Science, author, and robotics entrepreneur): If we are spectacularly lucky we'll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering.
Roger Schank (John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University): Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights is just plain silly.
Carlo Rovelli (Theoretical physicist and author): How close to thinking are the machines we have built, or are going to be built soon? The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. Differences are in performance, structural, functional, and more. Any maundering about how to deal with thinking machines is totally premature to say the least.
Yann LeCun (Computer scientist working in machine learning and computer vision): There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this... those are not things that we're worried about, because we just don't have the technology to build machines like that.
Astro Teller (Head of Google X): I've been working for over twenty years to help people understand AI and to calm dystopian hysteria that has wormed its way into discussions about the future of AI and robotics.
Robert Provine (Research Professor/Professor Emeritus, University of Maryland): There is no indication that we will have a problem keeping our machines on a leash, even if they misbehave. We are far from building teams of swaggering, unpredictable, Machiavellian robots with an attitude problem and an urge to reproduce.
Tim O'Reilly (Founder and CEO, O'Reilly Media; investor; studied at Harvard University): Fear is not the right frame of mind to think about AI's impact on our society.
Oren Etzioni (Entrepreneur and Professor of Computer Science at the University of Washington): Predictions that superintelligence is on the foreseeable horizon are not supported by the available data. Moreover, it's possible that AI systems could collaborate with people to create a symbiotic superintelligence. That would be very different from the pernicious and autonomous kind envisioned by Professor Bostrom.
T. J. Rodgers (Founder of Cypress Semiconductor): I don't believe in technological singularities.
Denny Vrandečić (Wikidata founder, Google ontologist): There are plenty of consequences of the development of AI that warrant intensive discussion (economic consequences, ethical decisions made by AIs, etc.), but it is unlikely that they will bring the end of humanity.
Guruduth S. Banavar (Vice President, IBM Research): Sensationalism and speculation around general-purpose, human-level machine intelligence is little more than good entertainment.
Donald D. Hoffman (Cognitive scientist, UC Irvine): All species go extinct. Homo sapiens will be no exception. We don't know how it will happen—virus, an alien invasion, nuclear war, a super volcano, a large meteor, a red-giant sun. Yes, it could be AIs, but I would bet long odds against it. I would bet, instead, that AIs will be a source of awe, insight, inspiration, and yes, profit, for years to come.
Zengchang Qin (Director, Intelligent Computing and Machine Learning Lab, Beihang University): People are worried about the free will of machines. So far, no scientific evidence can support such a statement. Even human beings' free will seems to be an enigma, let alone that of machines. Deep-diving AI researchers have a crystal clear picture of the industry status quo and risks that may not be manageable. The reality is far from what people might think of.
Babak Hodjat (Co-founder and chief scientist of Sentient): AI is no more or less dangerous than any other one of humanity's inventions, and so far, the verdict on human technology has been pretty positive.
Neil Jacobstein (Artificial Intelligence & Robotics, Singularity University): I think we will live in a world that is, frankly, a lot better, cleaner, safer, healthier than the one we live in today.
Chamath Palihapitiya (CEO @Soc…): I do think we will get ever precise capabilities in strictly defined systems (autonomous driving) where most of the hairiest and ambiguous rules will be ratified or voted on, but I don't see an "intelligent" brain anywhere around the corner.
Grady Booch (Software engineer; developed UML): Might a superintelligent AI emerge? In some distant future, perhaps.
Miguel Nicolelis (Neuroscientist at Duke University): Computers will never replicate the human brain, and the technological Singularity is "a bunch of hot air".
Peter Stone (Computer scientist at the University of Texas at Austin): I don't think there's a single change that's going to be black and white, where once we're on one side and now there's a change and we're on the other side. It's a cumulative effect of everything: AI is embedded in many of the technologies that have been changing our world over the last several decades and will continue to do so.
Gordon Moore (Co-founder and chairman emeritus of Intel; proponent of Moore's Law): The singularity is unlikely ever to occur because of the complexity with which the human brain operates.
Douglas Hofstadter (Professor of cognitive science; Pulitzer Prize winner): Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries.
Michael Littman (Computer scientist at Brown University): We can turn machines into workers — they can be labor, and that actually deeply undercuts human value. My biggest concern at the moment is that we as a society find a way of valuing people not just for the work they do.