
Sam Harris American author, philosopher, and neuroscientist

It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would gu…

Ray Kurzweil Author, computer scientist, inventor and futurist

The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far mo…

Vernor Vinge Retired San Diego State University Professor and author

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. (1993)

K. Eric Drexler Founding father of nanotechnology

AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.

Eliezer Yudkowsky AI researcher who popularized the idea of friendly artificial intelligence

Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.

Daniel C. Dennett Philosopher and Austin B. Fletcher Professor of Philosophy

The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.



Steven Pinker Johnstone Family Professor in the Department of Psychology at Harvard University

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power…

Carlo Rovelli Theoretical Physicist and Author

How close to thinking are the machines we have built, or are going to be built soon? The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. The differences are in performance, structure, function, and more. Any maundering about how to deal with thinking machines is totally premature, to say the least.

Tim O'Reilly Founder and CEO, O'Reilly Media. Investor. Studied at Harvard University.

Fear is not the right frame of mind to think about AI's impact on our society.

Douglas Hofstadter Professor of cognitive science and Pulitzer Prize winner

Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries.