We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.
Imagine a computer that wants to calculate π to as many digits as possible. That computer will see humans as being made of atoms which it could use to build more computers; and worse, since we would object to that and might try to stop it, we’d be a potential threat that it would be in the AI’s interest to eliminate.
Exponentially increasing technology might lead to super-human AI and other developments that will change the world utterly in the surprisingly near future (i.e., perhaps the next 20–30 years).
Michael Vassar
Co-founder and Chief Science Officer of MetaMed Research
If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.
Paul Christiano
PhD Theoretical Computer Science, UC Berkeley | Researcher at OpenAI
Ideally, “each time you build a more powerful machine, it effectively models human values and does what humans would like,” says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence.
Stuart Armstrong
PhD Mathematics, Oxford | Research Fellow, Future of Humanity Institute
One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk.
Jaime Sevilla
Student of Math and CompSci, future AI Risk researcher, dabbling entrepreneur.
In strategy games, the most powerful abilities are those which let you take more actions per turn or give you a wider array of possible actions, so you can perform the one best suited to the situation at hand. AI is literally a machine producing ideas, which let you act faster (and thus perform more actions) or execute different plans (and thus have more choices). This is a serious game imbalance, and is one we...
I’ve been working for over twenty years to help people understand AI and to calm dystopian hysteria that has wormed its way into discussions about the future of AI and robotics.
Robert Provine
Research Professor/Professor Emeritus, University of Maryland
There is no indication that we will have a problem keeping our machines on a leash, even if they misbehave. We are far from building teams of swaggering, unpredictable, Machiavellian robots with an attitude problem and an urge to reproduce.
There are plenty of consequences of the development of AI that warrant intensive discussion (economic consequences, ethical decisions made by AIs, etc.), but it is unlikely that they will bring about the end of humanity.
All species go extinct. Homo sapiens will be no exception. We don't know how it will happen—a virus, an alien invasion, nuclear war, a super volcano, a large meteor, a red-giant sun. Yes, it could be AIs, but I would bet long odds against it. I would bet, instead, that AIs will be a source of awe, insight, inspiration, and yes, profit, for years to come.
Zengchang Qin
Director, Intelligent Computing and Machine Learning Lab, Beihang University
People are worried about the free will of machines. So far, no scientific evidence supports such a claim. Even human beings’ free will seems to be an enigma, let alone that of machines. AI researchers who dive deep into the field have a crystal-clear picture of the industry’s status quo and of the risks that may not be manageable. The reality is far from what people might think.
Lili Cheng
Corporate Vice President of Microsoft AI & Research
…AI can truly help solve some of the world’s most vexing problems, from improving day-to-day communication to energy, climate, health care, transportation and more. The real magic of AI, in the end, won’t be magic at all. It will be technology that adapts to people. This will be profoundly transformational for humans.
We can turn machines into workers: they can be labor, and that actually deeply undercuts human value. My biggest concern at the moment is whether we as a society can find a way of valuing people not just for the work they do.