Agree:

Ilya Sutskever Co-founder and Research Director of OpenAI

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

K. Eric Drexler Founding father of nanotechnology

AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.

Marvin Minsky Mathematician, computer scientist, and pioneer in the field of artificial intelligence

The ultimate risk comes when our greedy, lazy, masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful.

Alan Turing British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI

Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.

Jed McCaleb co-founder of Stellar Development Foundation

By supporting organizations like MIRI, we’re putting the safeguards in place to make sure that this immensely powerful technology is used for the greater good.

Dustin Moskovitz co-founder of Facebook and Asana

... As concern grows, Dustin Moskovitz and Cari Tuna’s funding outfit is also paying attention, with several recent grants focused on the risks of AI, including one for $5.5 million.

Jaan Tallinn Co-founder of Skype and Kazaa

He says our biggest existential threat is artificial intelligence

Peter Thiel Technology entrepreneur and investor

The Thiel Foundation is the single largest donor to MIRI, an organization founded by the illustrious Eliezer Yudkowsky

Eric Horvitz Director of Microsoft Research's main Redmond lab

Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.

Roman Yampolskiy Computer scientist at the University of Louisville

Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence

Richard Sutton Professor and iCORE chair of computer science at University of Alberta

He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, then goes on to say the AIs “will not be under our control”, and so on

Jürgen Schmidhuber Professor of Artificial Intelligence at the University of Lugano

Stuart Russell’s concerns [about AI risk] seem reasonable.

Marcus Hutter Professor in the Research School of Computer Science at Australian National University

Way before the singularity, even when setting up a virtual society in our imagination, there are likely some immediate differences, for example that the value of an individual life suddenly drops, with drastic consequences.

Murray Shanahan Professor of Cognitive Robotics at Imperial College London, and Research Scientist at DeepMind

The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations. Shanahan makes it clear that we need to imagine both possibilities if we want to bring about the better outcome.

Hans Moravec Former professor at the Robotics Institute of CMU, and founder of the SeeGrid Corporation

He states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”

David McAllester Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago

The Singularity would enable machines to become infinitely intelligent, and would pose an ‘incredibly dangerous scenario’, he says.

Stephen Omohundro Scientist, Self-Aware Systems; Co-founder, Center for Complex Systems Research

Omohundro’s research concludes that the drives of superintelligent machines will be on a collision course with our own, unless we design them very carefully.

Yoshua Bengio Computer scientist at University of Montreal

One thing I came with is also … this subject of safe AI came up in many discussions, and I would say that these discussions left a strong [positive] impression on me.

Demis Hassabis Founder & CEO, DeepMind

He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense of their human creators.

Sam Harris American author, philosopher, and neuroscientist

It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would […]

Jack Ma Alibaba founder

Social conflicts in the next three decades will have an impact on all sorts of industries and walks of life. If trade stops, war starts.

Masayoshi Son Founder and CEO of SoftBank

A superintelligence will become a reality in the next 30 years. If we misuse it, it's a risk.

Toby Ord Moral philosopher at Oxford University. His work focuses on the big picture questions.

His current research is on avoiding the threat of human extinction and thus safeguarding a positive future for humanity... He is a leading expert on the potential threats and opportunities posed by advanced artificial intelligence over the coming decades

Eliezer Yudkowsky AI researcher who popularized the idea of friendly artificial intelligence

Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.

International Labour Organization International Labour Organization Report

New and available technologies will increasingly allow multinational brands and retailers to bring production closer to markets. Ultimately, ASEAN’s TCF [textile, clothing and footwear] sector may no longer offer jobs to millions who are looking for formal employment opportunities.

James Barrat Filmmaker, speaker and author

AI will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine

Clive Sinclair Entrepreneur and inventor

Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive

Bill Hibbard Scientist

The threat level from AI justifies addressing AI dangers now and with significant resources

David Chalmers Australian National University Professor

An intelligence explosion has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet

Vernor Vinge Retired San Diego State University Professor and author

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended (1993)

Disagree:

Mark Zuckerberg CEO at Facebook

I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible

Steven Pinker Johnstone Family Professor in the Department of Psychology at Harvard University

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power...

Grady Booch Software engineer. Developed UML

Might a superintelligent AI emerge? In some distant future, perhaps

Tim O'Reilly Founder and CEO, O'Reilly Media. Investor. Studied at Harvard University.

Fear is not the right frame of mind to think about AI's impact on our society

Ben Goertzel AI researcher and founder of the OpenCog project

Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way

Paul G. Allen Co-founder of Microsoft

Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Stanford University Stanford University Report

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.

T. J. Rodgers Founder of Cypress Semiconductor

I don't believe in technological singularities

Gordon Moore Co-founder and chairman emeritus of Intel. Proponent of Moore's Law

The singularity is unlikely ever to occur because of the complexity with which the human brain operates

Douglas Hofstadter Professor of cognitive science. Pulitzer Prize winner

Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries

Rodney A. Brooks Fellow of the Australian Academy of Science, author, and robotics entrepreneur

If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. [...] Worrying about AI that will be intentionally evil to us is pure fear mongering

Roger Schank John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University

Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.

Chamath Palihapitiya Founder and CEO of Social Capital

I do think we will get ever precise capabilities in strictly defined systems (autonomous driving) where most of the hairiest and ambiguous rules will be ratified or voted on, but I don't see an "intelligent" brain anywhere around the corner

Oren Etzioni CEO of the Allen Institute for Artificial Intelligence

Predictions that superintelligence is on the foreseeable horizon are not supported by the available data. Moreover, it’s possible that AI systems could collaborate with people to create a symbiotic superintelligence. That would be very different from the pernicious and autonomous kind envisioned by Professor Bostrom

Carlo Rovelli Theoretical Physicist and Author

How close to thinking are the machines we have built, or are going to be built soon? The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. Differences are in performance, structural, functional, and more. Any maundering about how to deal with thinking machines is totally premature to say the least.

Steve Wozniak Co-Founder of Apple Inc, inventor of the personal computer

It's actually going to turn out really good for humans. And it will be hundreds of years down the stream before they'd even have the ability. They'll be so smart by then that they'll know they have to keep nature, and humans are part of nature. So I got over my fear that we'd be replaced by computers. They're going to help us. We're at least the gods originally.

Yann LeCun Computer scientist working in machine learning and computer vision

There are several real or imagined dangers about AI. Today, the danger of a Terminator scenario or something like this... those are not things that we’re worried about because we just don’t have the technology to build machines like that.

Andrew Ng Baidu; Stanford CS faculty; founded Coursera and Google Brain

Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it - an unnecessary distraction.