Opinions from artificial intelligence researchers
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain (disagrees with basic income):
I do not believe in unconditional basic income, because it encourages people to stay trapped in low-skilled jobs without a meaningful path to climb up to better work. So rather than pay people to “do nothing,” I would rather see a new “New Deal” where we pay you to study, because today we know how to educate people at scale, and society is pretty good at finding meaningf...
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain:
Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we've even set foot on it - an unnecessary distraction.
Toby Walsh, Professor of artificial intelligence:
Even if we have as many as 47% of jobs automated, this won’t translate into 47% unemployment. One reason is that we might just work a shorter week. That was the case in the Industrial Revolution: before it, many worked 60 hours per week; after it, work fell to around 40 hours per week. The same could happen with the unfolding AI Revolution.
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain:
You need to time technology well: 2007 was a good time to launch the iPhone, but 1993 (Apple Newton) was not, since the battery, screen, and chip technology weren’t there. An extreme example: Leonardo da Vinci’s 1480s invention of the helicopter was far too early; engine technology didn’t get there until the 1900s. Maybe 2007 was early for autonomous driving (DARPA Urban Challenge), since AI and sensors were not yet there. From ~2015 the ecosystem was more r...
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain:
The tech world is used to a tectonic shift every 5 years from new inventions. Now tech has infected other industries, so everyone has to shift.
Eliezer Yudkowsky, AI researcher who popularized the idea of friendly artificial intelligence:
Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.
Bostrom’s and Yudkowsky’s arguments for existential risk have some logical foundation, but they are often presented in an exaggerated way.
Andrew Ng, Baidu; Stanford CS faculty; founded Coursera and Google Brain:
The US government should focus on accelerating US AI, rather than trying to slow down anyone else.