Presentation transcript:

Mark R. Waser Digital Wisdom Institute

"There is nothing I like less than bad arguments for a view I hold dear." (Daniel Dennett)

- April 2013 – UN GA/HRC: "Lethal autonomous robotics and the protection of life"
- November 2012
- February

… To Ban Autonomous Lethal Robots

"As Computer Scientists, Engineers, Artificial Intelligence experts, Roboticists and professionals from related disciplines, we call for a ban on the development and deployment of weapon systems in which the decision to apply violent force is made autonomously. Decisions about the application of violent force must not be delegated to machines."


Death by entity or death by algorithm?

We are concerned about the potential of robots to undermine human responsibility in decisions to use force, and to obscure accountability for the consequences.

- Algorithms should be 100% predictable: they can produce incorrect results, but you *should* know exactly what they will do.
- Thus, algorithms cannot assume (or be granted) responsibility or "accountability", since they will always perform as specified.
- An entity, by contrast, is *constantly* "deciding" whether or not to continue following the algorithm (or whether new circumstances dictate otherwise).
- A competent entity can *choose* to be reliable (likely to fulfill its goals) rather than predictable (as to exactly how it will fulfill them); a sketch follows below.
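
A minimal sketch of this distinction in Python (purely illustrative; every name here is invented, not from the talk): the algorithm commits to an exact procedure, while the entity commits to a goal and keeps re-deciding how to pursue it.

```python
import random

def averaging_algorithm(xs):
    """A (hypothetical) pure algorithm: 100% predictable. It may give a
    wrong answer for bad input, but you know exactly what it will do."""
    return sum(xs) / len(xs)   # always this exact computation

class ReliableEntity:
    """A (hypothetical) entity: committed to a goal, not to a procedure.
    It re-decides *how* to pursue the goal at every step, so it is likely
    to fulfill it (reliable) without being predictable in the details."""
    def __init__(self, goal):
        self.goal = goal                       # aware of its own goal

    def pursue(self, plan):
        for step in plan:
            if random.random() < 0.2:          # circumstances changed
                step = f"improvised alternative to {step}"
            yield step                         # same end, varying means

if __name__ == "__main__":
    print(averaging_algorithm([1, 2, 3]))      # always 2.0
    agent = ReliableEntity(goal="secure the area")
    print(list(agent.pursue(["scout", "advance", "report"])))
```

The design point: the algorithm can be verified by inspection, but the entity can only accumulate a *track record* of fulfilling its goals.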

- Self-reflective
- Self-modifying (algorithms, not data; see the sketch after this list)
- Has goals (and is aware of them)
- Self-willed?
- Conscious?
- *WILL* evolve instrumental subgoals, a.k.a. ethics
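
A toy sketch of the first three criteria (again hypothetical; the "self-willed?" and "conscious?" questions are deliberately left unmodeled): an object that can report on its own goal and replace its own algorithm, not merely update its data.

```python
class Entity:
    """Toy model of the checklist above (illustrative only)."""

    def __init__(self, goal):
        self.goal = goal                        # has a goal...
        self.strategy = self.bold_strategy      # ...and an algorithm

    def reflect(self):
        """Self-reflective: reports on its own goal and current strategy."""
        return f"goal={self.goal!r}, strategy={self.strategy.__name__}"

    def bold_strategy(self, observation):
        return "act"

    def cautious_strategy(self, observation):
        return "wait" if observation == "risky" else "act"

    def adapt(self, observation):
        """Self-modifying: replaces its own algorithm, not just its data."""
        if observation == "risky":
            self.strategy = self.cautious_strategy
        return self.strategy(observation)

if __name__ == "__main__":
    e = Entity(goal="deliver supplies")
    print(e.reflect())        # goal='deliver supplies', strategy=bold_strategy
    print(e.adapt("risky"))   # wait
    print(e.reflect())        # the entity has rewritten its own strategy
```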

Autopoiesis (from Greek αὐτο- (auto-), meaning "self", and ποίησις (poiesis), meaning "creation, production") refers to a closed system capable of creating itself: self-production / self-(re)creation. As a concept, it is *much* more wieldy/governable than "free will".
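
A loose computational analogy, not something from the talk: a classic Python quine, a closed program whose only output is its own source, i.e. a minimal example of a system that produces itself.

```python
# Classic two-line Python quine (these comments are not part of it):
# the program's output is exactly its own two working lines.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Autopoiesis proper also involves self-*maintenance*; the quine captures only the self-production half.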

- Stupid Algorithm (land mine, cruise missile)
- Really Smart Algorithm
  - Comprehensible (Harpy)
  - Black Box / Big Data (Watson)
- Stupid Entity (including savants)
- Really Smart Entity
  - Benevolent
  - Indifferent to Evil


While the detailed language defining autonomous weapon systems in an international treaty will necessarily be determined through a process of negotiations, the centrepiece of such a treaty should be the establishment of the principle that human lives cannot be taken without an informed and considered human decision regarding those lives in each and every use of force, and any automated system that fails to meet that principle by removing the human from the decision process is therefore prohibited.

A ban on autonomous weapons systems must instead focus on the delegation of the authority to initiate lethal force to an automated process not under direct human supervision & discretionary control.

- Technology makes it too "easy"...
  - to go to war
  - to suppress democracy (when stolen)
- Cannot comply with international humanitarian law (IHL) and international human rights law (IHRL), OR is vulnerable to enemy action & "spoofing"
  - Big data may have already solved this
- No adequate system of legal responsibility
- Humanocentrism/selfishness
  - Right relationship to technology
  - Because it's a good clear line

- Fear (why create something that might exterminate you?)
  - "Terminator" scenario
  - Super-powerful but indifferent
- Can/will not comply with IHL and IHRL
- Has no emotions; cannot be punished
  - No adequate system of legal responsibility
- Robots should not have the power of life & death
- Humanocentrism/selfishness
  - Right relationship to technology
  - Because it's a good clear line

- Stupid Algorithms: what if the algorithms were *proven* smarter than 2013 humans?
- Terrifying Entities: what if the entities were *guaranteed* to be benevolent and altruistic (and fully capable of altruistic punishment)?

Do we really care about ACTORS or RESULTS?

- 'Mala in se' (evil in themselves)
  - Unpredictable / cannot be fully controlled
- Predictability, autonomy, reliability and complexity
  - are design trade-offs
  - can be measured and *managed* (see the sketch below)
  - the Global Hawk UAV had insufficient autonomy
- Until robots become "persons", *all* responsibility rests with the specifications, designers, engineers, testers, approvers & users, exactly as per current product liability laws
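
One way to read "measured and *managed*", as a hypothetical sketch (the levels and thresholds are invented for illustration, not drawn from the talk or any treaty): autonomy becomes an explicit design parameter, with more human discretion required as risk rises.

```python
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = 1   # a human makes every decision
    HUMAN_ON_THE_LOOP = 2   # machine proposes; a human supervises and can veto
    FULL = 3                # no direct human supervision

def authorize(risk: float, level: Autonomy, human_approves: bool) -> bool:
    """Hypothetical gate managing the predictability/autonomy trade-off:
    the riskier the action, the more human discretion is required."""
    if level is Autonomy.FULL:
        return risk < 0.1                     # only trivially safe actions
    if level is Autonomy.HUMAN_ON_THE_LOOP:
        return risk < 0.5 or human_approves   # human veto above a threshold
    return human_approves                     # human decides everything

if __name__ == "__main__":
    print(authorize(0.05, Autonomy.FULL, human_approves=False))               # True
    print(authorize(0.70, Autonomy.HUMAN_ON_THE_LOOP, human_approves=False))  # False
```

The two non-FULL levels are one reading of the "direct human supervision & discretionary control" demanded on the earlier treaty slide.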

- New York SWAT teams receive "smart rifles"
  - Friendly fire, successful outcomes
  - "Shoot everything & let the gun sort it out"
  - The rifle is the arbiter of who lives/dies
  - Safety feature turned executioner
- LA SWAT teams introduce "armed telepresence"
  - Minimally modified DARPA disaster-relief robots
  - Pre-targeting, aim correction = inhuman speed/accuracy
  - In training exercises: friendly fire, good outcomes
- ADD the "smart rifles"?

- SWAT
  - Lethal but very short-term
  - Lethal but slightly longer-term (& more humanoid)
- Policing
  - Non-lethal but consensus algorithms
  - Non-lethal but big-data algorithms
  - Non-lethal but self-willed
  - Non-lethal, self-willed, wire-headed
  - In evil hands
- What about medical machines?

- Never delegate responsibility until the recipient is known to be capable of fulfilling it
- Don't worry about killer robots exterminating humanity: we will always have equal abilities, and they will have less of a "killer instinct"
- Entities can protect themselves against errors & misuse/hijacking in a way that tools cannot
- Diversity (differentiation) is *critically* needed
- Humanocentrism is selfish and unethical