Slide 1: Ethics for Self-Improving Machines
J Storrs Hall, Mark Waser
Slide 2: Asimov's Three Laws
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Slide 3: Asimov's robots didn't improve themselves. But our AIs (we hope) will.
Slide 4: How do you design laws for something that will think in concepts you haven't heard of, and which you couldn't grasp if you had?
Slide 5: There is no chance that everyone will build their robots with any given set of laws anyway. Laws reflect goals (and thus values), and those values do NOT converge across humanity.
Slide 6: Axelrod's The Evolution of Cooperation, and decades of follow-on evolutionary game theory, provide the theoretical underpinnings:
- Be nice / don't defect
- Retaliate
- Forgive
"Selfish individuals, for their own selfish good, should be nice and forgiving."
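A minimal sketch of the strategy those three rules describe, Tit-for-Tat in the iterated Prisoner's Dilemma (the payoffs are Axelrod's standard tournament values; all function names here are illustrative, not from the talk):

```python
# Axelrod's standard payoffs: (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,   # C = cooperate
    ("D", "C"): 5, ("D", "D"): 1,   # D = defect
}

def tit_for_tat(my_history, their_history):
    """Be nice (open with C), retaliate against a defection, forgive at once."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Run an iterated game and return the two total scores."""
    ha, hb = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(ha, hb)
        b = strat_b(hb, ha)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        ha.append(a)
        hb.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): TFT loses only the opening round
```

Being nice and forgiving costs Tit-for-Tat almost nothing against a pure defector, while two cooperators far outscore mutual defection (600 vs. 200 points each over 200 rounds).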
Slide 7: In nature, cooperation appears whenever the cognitive machinery will support it.
Cotton-top tamarins (Hauser et al.), vampire bats (Wilkinson), blue jays (Stephens, McLinn, & Stevens).
Slide 8: Economic Sentience
Defined as "awareness of the potential benefits of cooperation and trade with other intelligences." Time discounting is its measure.
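Why time discounting measures this can be sketched numerically (the payoffs T=5, R=3, P=1 are the standard Prisoner's Dilemma values, assumed here rather than taken from the slides): an agent with a high enough discount factor values the stream of future cooperative payoffs above a one-shot betrayal.

```python
def discounted_stream(per_round_payoff, delta, start=0, horizon=1000):
    """Present value of a constant payoff stream from round `start` on."""
    return sum(per_round_payoff * delta ** t for t in range(start, horizon))

def cooperation_beats_defection(delta):
    # Cooperate forever: R = 3 every round.
    coop = discounted_stream(3, delta)
    # Defect once: T = 5 now, then P = 1 forever as the partner retaliates.
    defect = 5 + discounted_stream(1, delta, start=1)
    return coop > defect

print(cooperation_beats_defection(0.6))  # True: a patient agent cooperates
print(cooperation_beats_defection(0.4))  # False: an impatient agent defects
```

Algebraically, the crossover is at delta > (T - R) / (T - P) = 0.5: the less an agent discounts the future, the more "economically sentient" its behavior.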
Slide 9: Tragedy of the Commons
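The dilemma this slide names can be sketched numerically (all numbers here are assumed for illustration, not from the talk): each herder captures the full benefit of one extra animal while the overgrazing cost is shared by everyone, so individually rational choices ruin the shared pasture.

```python
def herder_payoff(my_animals, total_animals, capacity=100):
    """Payoff to one herder; value per animal falls as the commons is overgrazed."""
    value_per_animal = max(0.0, 1.0 - total_animals / capacity)
    return my_animals * value_per_animal

# Ten herders grazing 8 animals each (80 total): each earns about 1.6.
print(herder_payoff(8, 80))
# One herder adds two animals (82 total): HIS payoff rises to about 1.8 ...
print(herder_payoff(10, 82))
# ... but if all ten reason the same way (100 total), everyone earns 0.
print(herder_payoff(10, 100))
```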
Slide 10: Acting ethically is an attractor in the state space of intelligent goal-driven systems, provided they interact with other intelligent goal-driven systems on a long-term, ongoing basis. Ethics *IS* the necessary basis for cooperation.
Slide 11: Evolutionarily Stable Strategies
We must find ethical design elements that are Evolutionarily Stable Strategies, so that we can start AIs out in the attractor it has taken us millions of years to begin to descend into.
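The stability test itself is Maynard Smith's textbook condition, not something from the slides: a strategy S is evolutionarily stable if no rare mutant T can invade, i.e. E(S,S) > E(T,S), or E(S,S) = E(T,S) and E(S,T) > E(T,T). A sketch, with assumed example payoffs:

```python
def is_ess(payoff, s, strategies):
    """payoff[(a, b)] = payoff to a strategy-a player meeting a strategy-b player."""
    for t in strategies:
        if t == s:
            continue
        if payoff[(s, s)] > payoff[(t, s)]:
            continue  # mutant does strictly worse against the incumbent
        if payoff[(s, s)] == payoff[(t, s)] and payoff[(s, t)] > payoff[(t, t)]:
            continue  # tie against the incumbent, but S exploits the mutant
        return False
    return True

# One-shot Prisoner's Dilemma: always-defect is an ESS, which is exactly
# why the long-term repeated interaction of the previous slide matters.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(is_ess(pd, "D", ["C", "D"]))  # True

# Hawk-Dove with resource value V=2 and fight cost C=4: neither pure
# strategy is stable; the population settles into a mix.
V, C = 2, 4
hd = {("H", "H"): (V - C) / 2, ("H", "D"): V, ("D", "H"): 0, ("D", "D"): V / 2}
print(is_ess(hd, "H", ["H", "D"]))  # False
print(is_ess(hd, "D", ["H", "D"]))  # False
```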
Slide 12: Let's call such a design element an ESV: an Evolutionarily Stable (or Economically Sentient) Virtue.
Slide 13: Economically Unviable
- Destruction
- Slavery
- Short-term profit at the expense of the long term
Avoiding each of these is an ESV.
Slide 14: Fair enforcement of contracts is an ESV that demonstrably promotes cooperation.
Slide 15: Guaranteeing Trustworthiness
Open-source motivations, and other forms of guaranteeing trustworthiness, are ESVs, like auditing in current-day corporations (since money is their true "emotion").
Slide 16: In particular, RECIPROCAL ALTRUISM is an ESV, exactly like its superset, ENLIGHTENED SELF-INTEREST (a.k.a. ETHICS).
Slide 17:
A general desire for all ethical agents to live (and prosper) as long as possible is also an ESV, because it promotes a community with long-term stability and accountability.
Slide 18: Curiosity, a will to extend and improve one's world model, is an ESV.
"There is no good but knowledge, and no evil but ignorance." (Socrates)
Slide 19: An AI that has ESVs, and knows what they mean, has a guideline for designing Version 2.0, even when the particulars of the new environment don't match the concepts of the old, literal goal structure.