Opinions from engineers
Elon Musk, Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal: Most people think we have too many people on the planet. But actually this is an outdated view ... The biggest issue in 20 years will be population collapse—not explosion, collapse.
Elon Musk, Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal: What to do about mass unemployment? This is going to be a massive social challenge. There will be fewer and fewer jobs that a robot cannot do better [than a human]. These are not things that I wish will happen. These are simply things that I think probably will happen.
Eric Schmidt, Executive Chairman & former CEO, Google: We have to make them [workers] more productive through automation, through tools. So I'm convinced that there is in fact going to be a jobs shortage. There are going to be jobs that are unfulfilled, and the way we'll fill them is to take people plus computers, and the computers will make people smarter. If you make the people smarter, their wages go up. They don't go down, and the number of jo...
Disagrees, Electronic voting: My position hasn't changed over the years. Which is that online voting is a very unsafe idea and a very bad idea and something I think no technological breakthrough I can foresee can ever change. People's computers are not getting more secure. They're getting more infected with viruses. They're getting more under the control of malware.
Moshe Vardi, AI and automation expert with 30 years' experience: We are approaching the time when machines will be able to outperform humans at almost any task. Society needs to confront this question before it is upon us: if machines are capable of doing almost any work humans can do, what will humans do? A typical answer is that we will be free to pursue leisure activities. [But] I do not find the prospect of leisure-only life appealing. I believe that work i...
Steve Wozniak, Co-Founder of Apple Inc, inventor of the personal computer (agrees, Net neutrality): Fast lanes or “paid prioritization” create anti-competitive incentives for ISPs to favor their own services over those of their competitors. Though Pai thinks paid prioritization would somehow benefit consumers, allowing ISPs to make such arrangements would stifle innovation online and make it harder for the next great streaming service or social network to reach the market. This is not an idle wo...
Paul Buchheit, Lead developer of Gmail, founder of FriendFeed, and investor in Y Combinator: I don't have to work. I choose to work. And I believe that everyone deserves the same freedom I have. If done right, it's also economically superior, meaning that we will all have more wealth. We often talk about how brilliant or visionary Steve Jobs was, but there are probably millions of people just as brilliant as he was. The difference is that they likely didn't grow up with great parents, am...
Elon Musk, Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal: I really think there are two fundamental paths [for humans]: One path is we stay on Earth forever, and some eventual extinction event wipes us out ...
K. Eric Drexler, Founding father of nanotechnology: AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.
Richard Sutton, Professor and iCORE chair of computer science at University of Alberta: He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, and goes on to say that the AIs “will not be under our control”.
Hans Moravec, Former professor at the Robotics Institute of CMU, and founder of the SeeGrid Corporation: He states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”.
David McAllester, Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago: The Singularity would enable machines to become infinitely intelligent and would pose an ‘incredibly dangerous scenario’, he says.
Demis Hassabis, Founder & CEO, DeepMind: He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense with their human creators.
Tom Harris, Executive director of the Ottawa-based International Climate Science Coalition: The current state of “climate change” science is quite clear: there is essentially zero evidence that carbon dioxide from human activities is causing catastrophic climate change.
Stephen Schneider, Professor of Environmental Biology and Global Change at Stanford University: You can't adapt to melting the Greenland ice sheet. You can't adapt to species that have gone extinct.
Clive Sinclair, Entrepreneur and inventor: Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive.
Paul G. Allen, Co-founder of Microsoft: Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
Rodney A. Brooks, Fellow of the Australian Academy of Science, author, and robotics entrepreneur: If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering.
Steve Wozniak, Co-Founder of Apple Inc, inventor of the personal computer: It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.
Stuart Russell, Professor of Computer Science at Berkeley: The question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?