Creating Friendly Superintelligence
Eliezer S. Yudkowsky, Anders Sandberg, Michael Korns, John Smart
Why friendly superintelligence?
Because we definitely do not want an unfriendly superintelligence.
We want to maximise the benefits of artificial intelligence while minimising the risks.
What is general intelligence?
Eliezer: The ability to model, predict and manipulate regularities in reality.
Voss/Me: The ability to accomplish things in a general domain, given knowledge and the environment.
What is superintelligence?
Not just quantity but also quality
–An uploaded dog is still a dog
Greater ability to find and exploit patterns
–For this discussion, the potential consequences are more relevant than particular measures
What is friendliness?
"The term 'Friendly AI' refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals."
– Eliezer S. Yudkowsky, Creating Friendly AI
Friendly AI is in the end a practical problem
We want a better solution to it than Bill Joy's (relinquishment)
Transdisciplinary problem: philosophy, cognitive science, economics and engineering fuse