CSE 190 Neural Networks: Ethical Issues in Artificial Intelligence

Presentation transcript:

CSE 190 Neural Networks: Ethical Issues in Artificial Intelligence
Gary Cottrell, Week 10, Lecture 3, 6/4/2018
Walker L. Cisler Memorial Science Lecture

Introduction
Some of the issues:
- Will robots have rights?
- What if robots become much smarter than us? (the singularity)
- What if robots kill people, or worse, make humans extinct?
As researchers, we need to think about these issues now.

Introduction
Some of the issues: Will robots have rights? (Movie time!)

The Singularity

The Singularity
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
— “The Coming Technological Singularity” (1993) by Vernor Vinge (SDSU professor and sci-fi author)

The Three Laws of Robotics (Isaac Asimov)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
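The laws' strict precedence (each law yields to the ones above it) can be sketched as a lexicographic comparison over candidate actions. This is purely an illustration, not from the lecture: every field name here is invented, and deciding whether an action actually "harms a human" is exactly the hard, unsolved part that the sketch assumes away.

```python
from dataclasses import dataclass

# Illustrative sketch only: all fields are invented stand-ins for
# judgments a real robot could not easily make.
@dataclass
class Action:
    name: str
    harms_human: bool = False       # includes harm through inaction
    disobeys_order: bool = False
    endangers_self: bool = False

def law_violations(a: Action):
    # Tuple order encodes the laws' strict precedence; lexicographic
    # comparison means a First-Law violation outweighs everything below.
    return (a.harms_human, a.disobeys_order, a.endangers_self)

def choose(actions):
    # Pick the action with the least severe violations.
    return min(actions, key=law_violations)

# Obeying the order would harm a human, so the robot must refuse,
# even though refusing violates the Second Law.
obey = Action("obey", harms_human=True)
refuse = Action("refuse", disobeys_order=True)
assert choose([obey, refuse]).name == "refuse"
```

The tuple comparison is the whole point of the sketch: the laws form a priority ordering, not three independent constraints.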


Killer Robots
Over 1,000 leading experts in artificial intelligence have signed an open letter calling for a ban on military AI development and autonomous weapons, such as those depicted in the Terminator sci-fi franchise.

Killer Robots
AI signatories include:
Stuart Russell
Peter Norvig
Yann LeCun
Geoff Hinton
Yoshua Bengio
Juergen Schmidhuber
Lawrence Saul
Charles Elkan
Garrison Cottrell

Killer Robots
Other signatories include:
Stephen Hawking
Elon Musk
Steve Wozniak
Dan Dennett
Noam Chomsky
But not:
Barack Obama
Vladimir Putin
ISIS

Killer Robots
What are some arguments for autonomous weapons?
Evan Ackerman's argument: they will keep (our) people from dying.

Killer Robots
What are some arguments for autonomous weapons? Evan Ackerman's argument:
The problem with this [pronouncement by AI researchers] is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots. Generally speaking, technology itself is not inherently good or bad: it's what we choose to do with it that's good or bad, and you can't just cover your eyes and start screaming “STOP!!!” if you see something sinister on the horizon when there's so much simultaneous potential for positive progress. What we really need, then, is a way of making autonomous armed robots ethical, because we're not going to be able to prevent them from existing.

Killer Robots
The same argument, with nuclear weapons substituted for autonomous robots:
The problem with this [treaty by major countries] is that no treaty, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build nuclear bombs. Generally speaking, technology itself is not inherently good or bad: it's what we choose to do with it that's good or bad, and you can't just cover your eyes and start screaming “STOP!!!” if you see something sinister on the horizon when there's so much simultaneous potential for positive progress. What we really need, then, is a way of making nuclear bombs ethical, because we're not going to be able to prevent them from existing.

Killer Robots
Ackerman, continued:
What we really need, then, is a way of making autonomous armed robots ethical, because we're not going to be able to prevent them from existing. Could autonomous armed robots perform better than armed humans in combat, resulting in fewer casualties (combatant or non-combatant) on both sides? I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement) to determine whether or not using force is justified. For example: does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? Have you been hit? These are all things that a robot can determine using any number of sensors that currently exist.
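Ackerman's Rules-of-Engagement checklist can be read as a decision procedure. A minimal hypothetical sketch (all field names are invented for illustration; extracting these booleans reliably from real sensor data is precisely the hard part the argument glosses over):

```python
from dataclasses import dataclass

# Hypothetical sketch of the checklist above. Each field is an invented
# stand-in for a judgment that would have to come from noisy sensors.
@dataclass
class SensorReading:
    target_has_weapon: bool = False
    weapon_pointed_at_us: bool = False
    weapon_fired: bool = False
    we_were_hit: bool = False

def force_justified(r: SensorReading) -> bool:
    # Clear evidence of hostile action justifies force on its own.
    if r.we_were_hit or r.weapon_fired:
        return True
    # Mere possession of a weapon does not; it must also be aimed at us.
    return r.target_has_weapon and r.weapon_pointed_at_us
```

The ordering encodes a conservative stance: the checks escalate from observed hostile action down to threat posture, and the presence of a weapon alone never triggers force.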

Killer Robots
Why technology can lead to a reduction in casualties on the battlefield (Ronald C. Arkin's arguments):
- The ability to act conservatively: they do not need to protect themselves in cases of low certainty of target identification. Autonomous armed robotic vehicles do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner, if needed and appropriate, without reservation by a commanding officer.
- There is no need for a 'shoot first, ask questions later' approach; a 'first, do no harm' strategy can be used instead.
- They can truly assume risk on behalf of the noncombatant, something that soldiers are schooled in but which some have difficulty achieving in practice.

Killer Robots
Arkin's arguments, continued:
- Unmanned robotic systems can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events.

Killer Robots
Arkin's arguments, continued:
- Intelligent electronic systems can integrate more information from more sources, far faster, before responding with lethal force than a human possibly could in real time.
- When working in a team of combined human soldiers and autonomous systems as an organic asset, they could independently and objectively monitor ethical behavior on the battlefield by all parties, providing evidence and reporting infractions they observe. This presence alone might lead to a reduction in human ethical infractions.

Ethical Considerations
Some of the issues:
- Will robots have rights?
- What if robots become much smarter than us? (the singularity)
- What if robots kill people, or worse, make humans extinct?
As researchers, we need to think about these issues now. I'm not saying to decide one way or another, just that you need to think about it!