Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI
Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.
American author, philosopher, and neuroscientist
It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would guarantee [...]
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.
Moral philosopher at Oxford University. His work focuses on big-picture questions. His current research is on avoiding the threat of human extinction and thus safeguarding a positive future for humanity... He is a leading expert on the potential threats and opportunities posed by advanced artificial intelligence over the coming decades.