Agree:

Ilya Sutskever Co-founder and Research Director of OpenAI

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

K. Eric Drexler Founding father of nanotechnology

AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.

Marvin Minsky Mathematician, computer scientist, and pioneer in the field of artificial intelligence

The ultimate risk comes when our greedy, lazy masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful.

Alan Turing British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI

Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.

Jed McCaleb Co-founder of the Stellar Development Foundation

By supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good.

Dustin Moskovitz Co-founder of Facebook and Asana

... As concern grows, Dustin Moskovitz and Cari Tuna’s funding outfit is also paying attention, with several recent grants focused on the risks of AI, including one for $5.5 million.

Jaan Tallinn Co-founder of Skype and Kazaa

He says our biggest existential threat is artificial intelligence.

Peter Thiel Technology entrepreneur and investor

The Thiel Foundation is the single largest donor to MIRI, an organization founded by the illustrious Eliezer Yudkowsky.

Eric Horvitz Director of Microsoft Research's main Redmond lab

Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.

Roman Yampolskiy Computer scientist at the University of Louisville

Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.

Disagree:

Mark Zuckerberg CEO at Facebook

I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.

Steven Pinker Johnstone Family Professor in the Department of Psychology at Harvard University

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power...

Grady Booch Software engineer and co-developer of UML

Might a superintelligent AI emerge? In some distant future, perhaps.

Tim O'Reilly Founder and CEO, O'Reilly Media. Investor. Studied at Harvard University.

Fear is not the right frame of mind to think about AI's impact on our society.

Ben Goertzel AI researcher and founder of the OpenCog project

Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way.

Paul G. Allen Co-founder of Microsoft

Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Stanford University One Hundred Year Study on Artificial Intelligence (AI100) Study Panel Report

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.

T. J. Rodgers Founder of Cypress Semiconductor

I don't believe in technological singularities.

Gordon Moore Co-founder and chairman emeritus of Intel. Originator of Moore's Law

The singularity is unlikely ever to occur because of the complexity with which the human brain operates.

Douglas Hofstadter Professor of cognitive science. Pulitzer Prize winner

Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries.
