Philanthropist. Founder and former CEO of Microsoft.
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent... A few decades after that though the intelligence is strong enough to be a concern.
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
British mathematician and logician; a pioneer of computer science and a major contributor to cryptanalysis and AI
Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.
American author, philosopher, and neuroscientist
It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would [...]
He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense with their human creators.
There is going to be interest in creating machines with will, whose interests are not our own. And that's without considering what machines that terrorists, rogue regimes, and intelligence agencies of the less roguish nations may devise. I think the notion of Frankensteinian AI, which turns on its creators, is something worth taking seriously.
AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. [...] Consider automated trading systems. A bad decision in these systems may be (and has been) a financial disaster for many people. That will also be the case for self-driving cars. [...]
Author, computer scientist, inventor, and futurist
The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far more [...]
Professor and iCORE chair of computer science at the University of Alberta
He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, then goes on to say that these AIs “will not be under our control”.
Professor of Cognitive Robotics at Imperial College London, and Research Scientist at DeepMind
The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations. Shanahan makes it clear that we need to imagine both possibilities if we want to bring about the better outcome.