
Bill Gates Philanthropist. Founder and former CEO of Microsoft.

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent... A few decades after that, though, the intelligence is strong enough to be a concern.

Sam Altman President of Y Combinator. Investor in Reddit, Stripe, Pinterest, and many others

Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

Stephen Hawking British physicist

The primitive forms of artificial intelligence developed so far have already proved very useful, but I fear the consequences of creating something that can match or surpass humans.

James Barrat Filmmaker, speaker and author

AI will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine

Nick Bostrom Philosopher at the University of Oxford

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb [...] We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound

Alan Turing British mathematician and logician, a major contributor to mathematics, cryptanalysis, and AI

Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger … is certainly something which can give us anxiety.

Roman Yampolskiy Computer scientist at the University of Louisville

Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence

Yoshua Bengio Computer scientist at University of Montreal

One thing I came with is also … this subject of safe AI came up in many discussions, and I would say that these discussions left a strong [positive] impression on me.

Dustin Moskovitz co-founder of Facebook and Asana

... As concern grows, Dustin Moskovitz and Cari Tuna’s funding outfit is also paying attention, with several recent grants focused on the risks of AI, including one for $5.5 million.

World Economic Forum Report

Some serious thinkers fear that AI could one day pose an existential threat: a ‘superintelligence’ might pursue goals that prove not to be aligned with the continued existence of humankind

Sam Harris American author, philosopher, and neuroscientist

It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would …

Demis Hassabis Founder & CEO, DeepMind

He accepts there are “legitimate risks that we should be thinking about now”, but is adamant these are not the dystopian scenarios of science fiction in which super-smart machines ruthlessly dispense with their human creators.

Max Tegmark Professor at MIT & co-founder at Future of Life Institute

Superintelligent machines are quite feasible

Shane Legg Machine learning researcher and founder of DeepMind

It's my number 1 risk for this century

Eric Horvitz Director of Microsoft Research's main Redmond lab

Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems.

William Poundstone Journalist

There is going to be interest in creating machines with will, whose interests are not our own. And that's without considering what machines that terrorists, rogue regimes, and intelligence agencies of the less roguish nations, may devise. I think the notion of Frankensteinian AI, which turns on its creators, is something worth taking seriously

Clive Sinclair Entrepreneur and inventor

Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive

Jaan Tallinn Co-founder of Skype and Kazaa

A superintelligent AI could be a serious problem

Stuart Russell Professor of Computer Science at Berkeley

The question is: Could you prove that your systems can’t ever, no matter how smart they are, overwrite their original goals as set by the humans?

Bart Selman Computer scientist at Cornell University

It's a societal risk. Society will have to adapt. How we will adapt is not fully clear yet. But I think it's something we'll have to think about.

Frank Wilczek Physicist, MIT and Recipient, 2004 Nobel Prize in Physics

Without careful restraint and tact, researchers could wake up to discover they've enabled the creation of armies of powerful, clever, vicious paranoiacs

Jack Ma Alibaba founder

Social conflicts in the next three decades will have an impact on all sorts of industries and walks of life. If trade stops, war starts.

Francesca Rossi Computer Scientist, Professor at the University of Padova

AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions. [...] Consider automated trading systems. A bad decision in these systems may be (and has been) a financial disaster for many people. That will also be the case for self-driving cars. Some of their …

Ray Kurzweil Author, computer scientist, inventor and futurist

The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far …

Vernor Vinge Retired San Diego State University Professor and author

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended (1993)

Richard Sutton Professor and iCORE chair of computer science at University of Alberta

He states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, and goes on to say that such AIs “will not be under our control”

Stephen Omohundro Scientist, Self-Aware Systems; Co-founder, Center for Complex Systems Research

Omohundro’s research concludes that the drives of superintelligent machines will be on a collision course with our own, unless we design them very carefully.

David McAllester Professor and Chief Academic Officer at the Toyota Technological Institute at Chicago

The Singularity would enable machines to become infinitely intelligent, and would pose an ‘incredibly dangerous scenario’, he says.

Hans Moravec Former professor at the Robotics Institute of CMU, and founder of the SeeGrid Corporation

He states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”

Murray Shanahan Professor of Cognitive Robotics at Imperial College London, and Research Scientist at DeepMind

The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations. Shanahan makes it clear that we need to imagine both possibilities if we want to bring about the better outcome.