Agree:


Stephen Hawking British physicist

The primitive forms of artificial intelligence developed so far have already proved very useful, but I fear the consequences of creating something that can match or surpass humans.

Max Tegmark Professor at MIT & co-founder at Future of Life Institute

Superintelligent machines are quite feasible.

Frank Wilczek Physicist, MIT and Recipient, 2004 Nobel Prize in Physics

Without careful restraint and tact, researchers could wake up to discover they've enabled the creation of armies of powerful, clever, vicious paranoiacs.

Marcus Hutter Professor in the Research School of Computer Science at Australian National University

Way before the singularity, even when setting up a virtual society in our imagination, there are likely some immediate differences, for example that the value of an individual life suddenly drops, with drastic consequences.

K. Eric Drexler Founding father of nanotechnology

AI technologies may reach the threshold of rapid, open-ended, recursive improvement before we are prepared to manage the challenges posed by the emergence of superintelligent AI agents.

Disagree:


Carlo Rovelli Theoretical Physicist and Author

How close to thinking are the machines we have built, or are going to build soon? The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean. The differences are in performance, structure, function, and more. Any maundering about how to deal with thinking machines is totally premature, to say the least.