Legal and Moral Complexities of Artificial General Intelligence
Peter Voss, Adaptive A.I. Inc.
2/22/2019
Topics
- What is AGI, and how does it differ from conventional AI?
- Key uncertainties
- AGI: savior or mortal danger?
- Moral implications
- Legal implications
AGI: The Forgotten Science
Real AI – human-level learning and understanding

Artificial General Intelligence (AGI) vs. Conventional AI:
- Focus on acquiring knowledge and skills vs. focus on having knowledge and skills
- Acquisition via learning vs. acquisition via programming
- General ability, using abstraction and context, vs. domain-specific, rule-based, and concrete
- Ongoing cumulative, adaptive, grounded, self-directed learning vs. relatively fixed abilities with externally initiated improvements
Implications of AGI
- Human-level learning and understanding
- Self-aware
- Self-improvement (Ready-to-Learn)
- Seed AI
- Human augmentation / integration
Key Questions
- How soon?
- How powerful?
- Are there ‘hard’ limits to intelligence?
- Will there be a ‘hard take-off’?
- Can we put the genie back into the bottle?
- Will it have a mind or agenda of its own?
- Can’t we first integrate AGIs into humans?
AGI: Our Savior?
Do we need AGI to save us from ourselves?
- Dangers of biotech
- Dangers of nanotech
- Dangers of social breakdown

AGI can potentially help by:
- providing tools to prevent disaster
- protecting us directly (universal policeman)
- helping to alleviate poverty and suffering
- making us more moral
AGI: A Mortal Danger?
What is the real risk: an AGI with a mind of its own, or one without?
- There is little evidence that AGI will be detrimental to humans – unless it is specifically designed to be!
- Original applications or training may have a large impact (a2i2 vs. military).
- The power of AGI in (the wrong) human hands is a bigger concern.
- Mitigating factor: AGI’s positive moral influence.
Moral Implications – AGI–Human Interaction
How should we treat AGIs?
- Will they desire life and liberty? …and to pursue happiness?
How will we treat AGIs?
- Will they be moral amplifiers? Make us more of what we are? Bring out our fears? Our best?
How will AGIs act towards us?
- Rationally – they better understand the consequences of their actions.
- They lack primitive evolutionary survival instincts.
Moral Implications – Human Development and Integration
How will AGIs change human morality?
- They will change the world: major impact on law, politics, and social justice (The Truth Machine).
- More rationality; less material poverty and desperation; coping with change.
- Will they help move us up Maslow’s hierarchy?
Implications of radical Intelligence Augmentation:
- We will be much more like AGIs than like today’s humans – AGI thought and morality will dominate.
Moral Implications – Rational Ethics
- Rational personal ethics: principles for optimal living
- AGIs will have rational virtues
- AGI as wise oracle / mentor
- AGIs will help us become more virtuous
Legal Implications
What are the primary legal issues?
- To protect humans, or AGIs? (Or governments!)
- Will AGIs want life, power, protection?
Can the legal system respond fast enough to prevent or limit the potential risks of AGI?
The future of the legal system: AGI judges? Truth-based?
Summary
- AGI is fundamentally different from conventional AI.
- Human-level AGI may well arrive in 3 to 6 years.
- AGI will improve very rapidly beyond this stage.
- AGIs are unlikely to have an agenda of their own.
- AGIs are our best hope to protect and improve the human condition, and to improve our morality.
- Powerful AGIs will arrive long before significant intelligence augmentation (IA).
- Legal issues are more likely to center on limiting the production or use of AGIs than on protecting AGIs.
- Legal mechanisms will be ineffective.
What to do…?
Contact me: peter@optimal.org
References
- AGI – Artificial General Intelligence: http://adaptiveai.com/faq/index.htm and http://adaptiveai.com/research/index.htm
- The Truth Machine, by James Halperin
- Existential Risks: Analyzing Human Extinction Scenarios: http://nickbostrom.com/existential/risks.html
- True Morality: Rational Principles for Optimal Living: http://optimal.org/peter/rational_ethics.htm
- Why Machines Will Become Hyper-Intelligent Before Humans Do: http://optimal.org/peter/hyperintelligence.htm