Chris Olah, Google researcher: We believe it's essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.
Andrew Davison, Professor at Imperial College London: Exponentially increasing technology might lead to super-human AI and other developments that will change the world utterly in the surprisingly near future (i.e. perhaps the next 20-30 years).
Kris: Yes we can. Anything can happen in 50 years.
Michael Vassar, Co-founder and Chief Science Officer of MetaMed Research: If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.
Paul Christiano, PhD in Theoretical Computer Science, UC Berkeley; Researcher at OpenAI: Ideally, "each time you build a more powerful machine, it effectively models human values and does what humans would like," says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence.
Stuart Armstrong, PhD in Mathematics, Oxford; Research Fellow, Future of Humanity Institute: One of the things that makes AI risk scary is that it's one of the few risks that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it's actually surprisingly hard to get to an extinction risk.
Anthony Mullen, Personal Technology Research Director at Gartner: Collaborative emergent AI - not one strong designed instance - is possible within the timeframe.
Jaime Sevilla, student of Math and CompSci, future AI Risk researcher, dabbling entrepreneur: In strategy games, the most powerful abilities are those that let you take more actions per turn, or give you a wider array of possible actions so you can perform the one best suited to the situation at hand. AI is literally a machine producing ideas, which lets you act faster (and thus perform more actions) or execute different plans (and thus have more choices). This is a serious game imbalance, and is one we…
Anthony Berglas, software engineer and author: Evolution suggests that a sufficiently powerful AI would probably destroy humanity.
1) Given the history of AI development and the current rate of AI progress, it is almost obvious that superhuman-level AI will be invented and run in millions of copies within the next 50 years unless we impose severe restrictions on its development. 2) It is highly unobvious that scientists can invent a way to control general superhuman-level AI. Moreover, it is even more dubious that scienti…