Agree:


Eliezer Yudkowsky, AI researcher who popularized the idea of friendly artificial intelligence

Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.

Disagree:


Andrew Ng, Baidu; Stanford CS faculty; co-founder of Coursera and Google Brain

"Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it: an unnecessary distraction."

Ben Goertzel, AI researcher

Bostrom and Yudkowsky's arguments for existential risk have some logical foundation, but they are often presented in an exaggerated way.