AI Ethics Robot Rights? Cyber Warfare

Artificial Intelligence DEFINITIONS "the study of ideas that enable computers to be intelligent" "the study and design of intelligent agents" Intelligence?

Artificial Intelligence Definition ...ability to reason? ...ability to acquire and apply knowledge? ...ability to perceive and manipulate things?

Goals of AI Make computers more useful (computer scientists and engineers). Understand the principles that make intelligence possible (psychologists, linguists, and philosophers).

Points of view Strong AI: all mental activity is computation (feelings and consciousness can be produced by mere computation). Weak ("soft") AI: mental activity can only be simulated. The ethical stakes differ depending on whether we are dealing with genuinely intelligent beings or merely apparently intelligent ones.

What makes AI a moral issue? Rights (private life, anonymity). Duties. Human welfare (physical safety). Justice (equality). Ethical problems arising from AI and intelligent systems can be divided into three main areas: information, control, and reasoning.

What makes AI a moral issue? 1. Information and communication Intelligent systems store information in databases. Massive management of information, and communication between systems, could threaten the private life, liberty, or dignity of users.

What makes AI a moral issue? 2. Control applications – robotics - The common problems of classical engineering: guaranteeing (physical) personal safety and taking responsibility for the environment. - Basic safety in robotics: universal laws stating rules of behavior between robots and humans (robots may not injure humans, robots must protect humans...).

What makes AI a moral issue? 3. Automatic reasoning Idea: computers making decisions by themselves. Problem: trust in intelligent systems. Examples: - Medical diagnosis from symptoms - Artificial vision - Machine learning - Natural language processing New ethical problems!
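
To make "computers making decisions by themselves" concrete, here is a minimal sketch, in Python, of a rule-based diagnosis in the expert-system style; the symptoms and rules are invented for illustration and are not from the slides:

    # Toy rule-based diagnosis (hypothetical rules, illustration only):
    # an "automatic reasoning" system maps observed symptoms to a
    # conclusion with no human in the loop.
    RULES = [
        # (required symptoms, diagnosis)
        ({"fever", "cough"}, "possible flu"),
        ({"fever", "rash"}, "possible measles"),
        ({"cough"}, "possible cold"),
    ]

    def diagnose(symptoms):
        """Return the first diagnosis whose required symptoms are all present."""
        for required, diagnosis in RULES:
            if required <= symptoms:  # set containment: all required symptoms observed
                return diagnosis
        return "no rule matched; refer to a human doctor"

    print(diagnose({"fever", "cough", "headache"}))  # -> possible flu

The ethical question the slide raises is visible even in this toy: who is responsible when the first matching rule is wrong?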

Automatic Reasoning Ethical problems 1. Computers have no consciousness - They cannot take responsibility for their actions. - Are the creators responsible? The company in charge? - In this case, final decisions are always taken by humans. 2. A "consciousness" for AI is developed - Could a computer simulate an animal or human brain well enough to deserve the same animal or human rights? - Responsibilities?

AI Consciousness Definition: "an alert cognitive state in which you are aware of yourself and your situation." Conscious AI systems would not only be granted rights; they would want rights.

AI Consciousness Trust: an automatic pilot vs. an automatic judge, doctor, or policeman. Equality problems: Could conscious computers work for us? Would they not become slaves? Do we have the right to switch off a conscious computer?

AI Limits AI depends on: Laws and economics. Technology - Current technology is not enough, but it is improving exponentially (Moore's Law). - Physical and theoretical bounds are too distant to be a practical restriction. Ethics - Could be the first obstacle to the evolution of AI.
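
For concreteness, the exponential improvement the slide attributes to Moore's Law is usually stated as a doubling of transistor counts roughly every two years; a back-of-the-envelope Python sketch (the two-year doubling period is a rule of thumb, not a law of nature):

    # Moore's Law projection: capacity(t) = capacity(0) * 2 ** (t / doubling_period)
    def projected_capacity(initial, years, doubling_period=2.0):
        return initial * 2 ** (years / doubling_period)

    # Ten doublings in twenty years: roughly a 1024x increase.
    print(projected_capacity(1.0, 20))  # -> 1024.0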

AI in the future Education in AI ethics. Think about the future goals of AI: the decisions taken now will lead to new ethical problems. AI needs parallel evolution in biology, psychology, and other fields, as well as in technology.

Conclusion Current AI ethics are quite undefined. Every day, new controversial discussions are held about the future of AI. AI aims to create something we do not really understand: intelligence. AI research may itself discover what intelligence is. We cannot think about AI without ethics.

Social Implications of A.I. Systems People could easily become fully dependent on machines. Possibility of an increased unemployment rate.

Implications of A.I. Surpassing Humanity What rights would machines have? What moral obligations would humanity have towards machines (and vice versa, machines towards humanity)? We could end up as the lower-order creatures: how would humanity be treated? "Hollywoodized" example: The Matrix.

Closing Remarks General questions one might have: Do we want to build computers that will be like us? If so, what do we need them for? What will the human-computers do for humanity? There are no answers to these questions yet, but research and achievement continue to progress each year. We must wait and see what the future holds.

ROBOTIC POTENTIALMASSIVE! Contemporary Examples Social Impact: Foxconn International makes components for iPhones, iPads, etc. It will buy enough robots to replace 1.2 million workers in China. Military and Surveillance: Internet surveillance.. eg. gmail monitored in US by CIA. e.g. Israel’s “Iron Dome” Defensive system e.g. National Airspace Monitoring system

Current Unmanned Surveillance Vehicle: the Drone Over 30,000 drones are forecast for US airspace alone: border patrol, forest-fire location, etc.

ROBOTIC POTENTIALMASSIVE! Amazing Medical Advances Stomach, or gastric cancer is the second leading cause of cancer deaths worldwide and is particularly common in East Asia. Researchers made a crab robot that enters through the mouth to ‘eat’ cancer in the stomach. Robots for surgery, internal examination, organ modification (artery clearing), behavioral modification (implanted in brain), physical assistance, etc.

SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE Proposition: Even if a super-robot were to control all medical systems in the future, with unlimited possibilities to manipulate the human, so long as the word 'human' applies, there must be the presumption of an ethical awareness, an available intentionality to express the self meaningfully, and some sense of legitimate 'choice'.

SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE Pro: So long as the statement "x is better for humans" has relevance, ethical evaluation will define the human. Even if we adopt Zadeh's (1988) argument for fuzzy logic, we simply have no means of relating to entities that do not exhibit the minimal elements noted above.
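
As background: Zadeh's fuzzy logic replaces binary category membership with degrees between 0 and 1. A minimal illustrative Python sketch, with a made-up "intelligence score" and thresholds that are purely hypothetical, not taken from Zadeh's work:

    # Fuzzy membership sketch (made-up thresholds, illustration only):
    # instead of a binary "intelligent / not intelligent", an entity
    # gets a degree of membership between 0.0 and 1.0.
    def membership_intelligent(score, low=20.0, high=80.0):
        """Degree to which an entity with the given score is 'intelligent'."""
        if score <= low:
            return 0.0
        if score >= high:
            return 1.0
        return (score - low) / (high - low)  # linear ramp between thresholds

    for s in (10, 50, 90):
        print(s, membership_intelligent(s))  # -> 0.0, 0.5, 1.0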

SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE Con: The singularity may change the very definition of the human. Already the line is blurring between the machine and the human. Much current technology is already beyond the comprehension of most humans. No single human institution has control over machines or their development. There is a wide belief that machines do it better.

ROBOTS BRING A HOST OF ETHICAL ISSUES Should only robots that are 'sensitive' to human values be developed? Will humans accept their replacement? Human-modification technology is no longer in the future: pacemakers? Hearing aids? Motion assistance? Can we build a robot with interpersonal skills? And haven't we always had technological developments? The wheel, the boat, writing, the telephone, etc.

The success of AI might mean the end of the human race. Almost any technology has the potential to cause harm in the wrong hands, but with AI and robotics we have the new problem that the wrong hands might belong to the technology itself. Much science fiction has addressed this issue: robots running amok. The key here is how we design robots. Can we encode robots or robotic machines with some sort of laws of ethics, or ways to behave? Clearly this is a concern for the distant future, but as we've mentioned a few times today, baby steps can be merged into a larger system. Two additional issues if robots are conscious: How are we expected to treat them (is it immoral to treat them as machines)? And how are they expected to behave?

Laws of Robotics (Asimov) Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm. Law One: A robot may not injure a human being, or through inaction allow a human being to come to harm, unless this would violate a higher order law. Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher order law. Law Three: A robot must protect its own existence as long as such protection does not conflict with a higher order law.
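
The "higher order law" clause is essentially a priority ordering. A minimal Python sketch of that precedence as a veto list; the predicates (harms_humanity, harms_human, obeys_order, destroys_self) are hypothetical placeholders, since no real system can actually evaluate them:

    # Asimov's law hierarchy as a priority-ordered veto check
    # (hypothetical predicates; illustration of the precedence only).
    LAWS = [
        ("Law 0", lambda a: not a.get("harms_humanity", False)),
        ("Law 1", lambda a: not a.get("harms_human", False)),
        ("Law 2", lambda a: a.get("obeys_order", True)),
        ("Law 3", lambda a: not a.get("destroys_self", False)),
    ]

    def permitted(action):
        """Permit an action only if no law, checked in priority order, vetoes it."""
        for name, allows in LAWS:
            if not allows(action):
                return False, name  # vetoed by the highest-priority violated law
        return True, None

    print(permitted({"obeys_order": True}))                       # (True, None)
    print(permitted({"obeys_order": True, "harms_human": True}))  # (False, 'Law 1')

The hard part, of course, is not the precedence logic but deciding whether an action "harms a human" at all.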

Robot Safety "As robots move into homes and offices, ensuring that they do not injure people will be vital. But how?" "Kenji Urada (born c. 1944, died 1981) was notable in that he was one of the first individuals killed by a robot. Urada was a 37-year-old maintenance engineer at a Kawasaki plant. While working on a broken robot, he failed to turn it off completely, resulting in the robot pushing him into a grinding machine with its hydraulic arm. He died as a result." Over 5 million Roombas have been sold. By 2020, South Korea wants 100% of households to have domestic robots. Japanese firms have been working on robots as domestic help for the elderly.

Robot Rights Are robot rights like animal rights? Example: robbing a bank – what if a robot robs a bank?

SUMMING UP: Widespread embrace of technology by humans. No guidelines for developing entities more intelligent than we are. Massive human dislocation/destruction could result (the atom bomb?). Ultimately, human ethics will have to grapple with the outcomes. Can there be a "higher ethics"?

Warfare http://www.theatlantic.com/technology/archive/2013/01/could-human-enhancement-turn-soldiers-into-weapons-that-violate-international-law-yes/266732/

Warfare New weapons must conform to international humanitarian law. Article 36 of Additional Protocol I (1977) to the Geneva Conventions specifies: "In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party." ("High Contracting Party" is the term in international law for a party to an international agreement.) http://www.theatlantic.com/technology/archive/2013/01/could-human-enhancement-turn-soldiers-into-weapons-that-violate-international-law-yes/266732/

Warfare Conventional (human) soldiers are not generally regarded as weapons. But do we agree that a sophisticated robotic soldier is a weapon? What about a cyborg? If we just perform LASIK on a human, the result is probably not a weapon. But how much would have to be enhanced or replaced before we would have a weapon? http://www.theatlantic.com/technology/archive/2013/01/could-human-enhancement-turn-soldiers-into-weapons-that-violate-international-law-yes/266732/

Cyberwarfare Jus ad bellum (just cause for going to war): Article 2(4) of the UN Charter prohibits every nation from using "the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations." Conceptual muddle: what constitutes use of force? Launching a Trojan horse that disrupts military communication? Hacking a billboard to display porn to disrupt traffic? Hacking a command-and-control (C&C) center so that it attacks its own population? Most of this from: http://www.nap.edu/catalog.php?record_id=12651 (Jus ad bellum: just cause for going to war. Jus in bello: just conduct in waging war.)

Cyberwarfare Jus in bello: military necessity; minimizing collateral damage; perfidy; distinction; neutrality. Conceptual muddle: what constitutes distinction? If we launch a Trojan horse against an enemy, must it contain something like "This code brought to you compliments of the U.S. government"? (Perfidy: you can't pretend to be an ambulance. Distinction: you have to mark military objects clearly.)
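
Taken literally, the slide's question asks whether offensive code would have to carry an explicit attribution marker, the way the distinction principle requires military objects to be marked. A purely hypothetical Python sketch of what such a marker could look like; no real protocol or legal regime defines anything like this:

    # Hypothetical "distinction marker" for a cyber payload: an explicit,
    # machine-readable attribution header, analogous to marking a military
    # object. Illustration of the legal thought experiment only.
    import json

    def mark_payload(payload: bytes, state: str) -> bytes:
        header = json.dumps({"attribution": state, "marked": True}).encode()
        return header + b"\n" + payload

    print(mark_payload(b"<payload bytes>", "Example State"))

The obvious tension: a marked payload is trivially filtered, so distinction and military effectiveness pull in opposite directions here.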

Cyberwarfare Jus in bello (continued): the same principles, applied to neutrality. Conceptual muddle: what constitutes neutrality? If A allows B to drive tanks through its territory on the way to attack C, A is no longer neutral. If A allows network traffic to pass through its routers on the way from B to C and an attack is launched, has A given up neutrality?

ZERO DAYS Delve deep into the burgeoning world of digital warfare: a black-ops cyber-attack launched by the U.S. and Israel on an Iranian nuclear facility (Stuxnet) unleashed malware with unforeseen consequences.