Presentation on theme: "AI Ethics Robot Rights? Cyber Warfare" — Presentation transcript:

1 AI Ethics Robot Rights? Cyber Warfare

2 Artificial Intelligence
Definitions: "the study of ideas that enable computers to be intelligent"; "the study and design of intelligent agents". But what is intelligence?

3 Artificial Intelligence
Is intelligence the ability to reason? The ability to acquire and apply knowledge? The ability to perceive and manipulate things?

4 Goals of AI
Make computers more useful (the goal of computer scientists and engineers). Understand the principles that make intelligence possible (the goal of psychologists, linguists, and philosophers).

5 Points of view
Strong AI: all mental activity is computation (feelings and consciousness could be produced by computation alone). Weak ("soft") AI: mental activity can only be simulated by computers. The two views differ ethically: it matters whether we are dealing with genuinely intelligent beings or merely apparently intelligent ones.

6 What makes AI a moral issue?
Rights (privacy, anonymity), duties, human welfare (physical safety), justice (equality). The ethical problems raised by AI and intelligent systems can be divided into three main areas: information, control, and reasoning.

7 What makes AI a moral issue?
1. Information and communication. Intelligent systems store information in databases. Large-scale management of information, and communication between systems, could threaten the privacy, liberty, or dignity of users.

8 What makes AI a moral issue?
2. Control applications (robotics). These share the familiar problems of classical engineering: guaranteeing physical safety and taking responsibility for the environment. Basic safety in robotics has been framed as universal laws governing behavior between robots and humans (robots may not injure humans, robots must protect humans, and so on).

9 What makes AI a moral issue?
3. Automatic reasoning. Idea: computers making decisions by themselves. Problem: trust in intelligent systems. Examples: medical diagnosis from symptoms, artificial vision, machine learning, natural language processing. Each raises new ethical problems; a toy diagnosis sketch follows.
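To make the trust problem concrete, here is a deliberately toy sketch of symptom-based diagnosis of the kind the slide alludes to. The conditions, symptom sets, and scoring rule are all invented for illustration and bear no relation to any real diagnostic system.

```python
# Toy rule-based diagnosis: rank conditions by the fraction of their
# characteristic symptoms that the patient reports. Purely illustrative.

RULES = {
    "common cold": {"cough", "sneezing", "runny nose"},
    "influenza":   {"fever", "cough", "fatigue", "body aches"},
}

def diagnose(symptoms: set[str]) -> list[tuple[str, float]]:
    """Return (condition, score) pairs, best match first."""
    scores = [
        (condition, len(required & symptoms) / len(required))
        for condition, required in RULES.items()
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(diagnose({"fever", "cough", "fatigue"}))
# -> [('influenza', 0.75), ('common cold', 0.333...)]
```

Even this toy makes the ethical point visible: the system emits a ranked guess, but nothing in it carries responsibility for acting on that guess.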

10 Automatic Reasoning: Ethical Problems
1. Computers have no consciousness. They cannot take responsibility for their actions. Are the creators responsible? The company operating the system? On this view, final decisions must always rest with humans. 2. A "consciousness" for AI is developed. Could a computer that simulates an animal or human brain claim the corresponding animal or human rights, and bear the corresponding responsibilities?

11 AI Consciousness
Definition: "an alert cognitive state in which you are aware of yourself and your situation". Conscious AI systems would not only be granted rights; they would also want to have rights.

12 AI Consciousness
Trust: an automatic pilot vs. an automatic judge, doctor, or police officer. Equality problems: Could conscious computers work for us, or would they then be slaves? Do we have the right to turn off a conscious computer?

13 AI Limits
AI depends on: laws and economics; technology; ethics. Current technology is not sufficient, but it is improving exponentially (Moore's Law), and physical and theoretical bounds are too distant to be a practical restriction. Ethics could be the first real obstacle to the evolution of AI. A back-of-the-envelope sketch of the exponential claim follows.
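The sketch below illustrates the "improving exponentially" claim under the textbook Moore's-law assumption of a roughly two-year doubling period. The doubling period is the only parameter, and real hardware trends are messier than this.

```python
# Relative capability under an assumed Moore's-law doubling period.

def capacity(years_from_now: float, doubling_period: float = 2.0) -> float:
    """Capability relative to today (today = 1.0)."""
    return 2 ** (years_from_now / doubling_period)

for years in (2, 10, 20):
    print(f"{years:2d} years: x{capacity(years):,.0f}")
# ->  2 years: x2 | 10 years: x32 | 20 years: x1,024
```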

14 AI in the Future
Education in AI ethics. Thinking ahead about the goals of AI: the decisions taken now will lead to new ethical problems. AI needs parallel progress in biology and psychology, as well as in technology.

15 Conclusion
Current AI ethics remain largely undefined, and new controversies about the future of AI arise every day. AI aims to create something we do not really understand: intelligence. What intelligence is may itself be discovered through AI research. We cannot think about AI without ethics.

16 Social Implications of A.I. Systems
People could easily become fully dependent on machines. The unemployment rate could increase.

17 Implications of A.I. Surpassing Humanity
What rights would machines have? What moral obligations would humanity have towards machines, and vice versa? We could end up as the lower-order creatures: how would humanity then be treated? A "Hollywoodized" example: The Matrix.

18 Closing Remarks
General questions one might ask: Do we want to build computers that are like us? If so, what do we need them for? What would such human-like computers do for humanity? There are no answers to these questions yet, but research and achievement continue to progress each year. We must wait and see what the future holds.

19 ROBOTIC POTENTIALMASSIVE! Contemporary Examples
Social impact: Foxconn, which makes components for iPhones, iPads, etc., plans to buy enough robots to replace 1.2 million workers in China. Military and surveillance: Internet surveillance (e.g., Gmail monitored in the US by the CIA); Israel's "Iron Dome" defensive system; national airspace monitoring systems.

20 Current Unmanned Surveillance Vehicle: Drone
Over 30,000 drones are forecast for US airspace alone: border patrol, forest-fire spotting, etc.

21 ROBOTIC POTENTIALMASSIVE! Amazing Medical Advances
Stomach (gastric) cancer is the second leading cause of cancer deaths worldwide and is particularly common in East Asia. Researchers have built a crab-like robot that enters through the mouth to 'eat' cancerous tissue in the stomach. Robots are also being developed for surgery, internal examination, organ modification (artery clearing), behavioral modification (brain implants), physical assistance, etc.

22 SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE
Proposition: even if a super-robot were to control all medical systems in the future, with unlimited possibilities to manipulate humans, then so long as the word 'human' applies there must be a presumption of ethical awareness, an available intentionality to express oneself meaningfully, and some sense of legitimate 'choice'.

23 SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE
Pro: so long as the statement "x is better for humans" has relevance, ethical evaluation will define the human. Even if we adopt Zadeh's (1988) argument for fuzzy logic, we simply have no means of relating to entities that do not exhibit the minimal elements noted above. A minimal fuzzy-membership sketch follows.
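For readers unfamiliar with the cited fuzzy logic, the sketch below shows Zadeh's core idea: membership in a set such as "intelligent" is a degree in [0, 1] rather than a yes/no verdict. The 0-100 "score" and its thresholds are invented purely for illustration.

```python
# Fuzzy membership: a graded, rather than binary, notion of set membership.

def membership_intelligent(score: float) -> float:
    """Degree of membership in the fuzzy set 'intelligent'.

    `score` is a hypothetical 0-100 measure of cognitive ability:
    zero membership below 20, full membership above 80, linear in between.
    """
    if score <= 20:
        return 0.0
    if score >= 80:
        return 1.0
    return (score - 20) / 60

for s in (10, 50, 90):
    print(s, membership_intelligent(s))
# -> 10 0.0 | 50 0.5 | 90 1.0
```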

24 SINGULARITY AND THE ETHICAL ISSUES: THE DEBATE
Con: the Singularity may change the very definition of the human. The line between machine and human is already blurring. Much current technology already exceeds the understanding of most humans. No single human institution controls machines or their development. There is a widespread belief that machines do it better.

25 ROBOTS BRING A HOST OF ETHICAL ISSUES
Should only robots that are 'sensitive' to human values be developed? Will humans accept their own replacement? Human-modification technology is no longer in the future: pacemakers? Hearing aids? Motion assistance? Can we build a robot with interpersonal skills? And haven't we always had technological upheavals: the wheel, the boat, writing, the telephone, etc.?

26 The success of AI might mean the end of the human race.
Almost any technology has the potential to cause harm in the wrong hands, but with AI and robotics we face the new problem that the wrong hands might belong to the technology itself. Much science fiction has addressed this issue: robots running amok. The key is how we design robots. Can we encode robots or robotic machines with some set of laws of ethics, or ways to behave? And if robots are conscious, two further issues arise: how are we expected to treat them (is it immoral to treat them as machines?), and how are they expected to behave? This is clearly a concern for the distant future, but as noted several times today, baby steps can be merged into a larger system.

28 Laws of Robotics (Asimov)
Law Zero: a robot may not injure humanity, or, through inaction, allow humanity to come to harm. Law One: a robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law. Law Two: a robot must obey orders given to it by human beings, except where such orders would conflict with a higher-order law. Law Three: a robot must protect its own existence as long as such protection does not conflict with a higher-order law. A toy encoding of this hierarchy follows.
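The laws can be read as a strict priority ordering, which this hedged sketch encodes. The predicates (harms_humanity, harms_human, and so on) are hypothetical placeholders; deciding them reliably for a real robot is exactly the unsolved part, and the sketch ignores the "through inaction" clauses to stay minimal.

```python
# Asimov's laws as a strict priority ordering over candidate actions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str = ""
    harms_humanity: bool = False   # hypothetical placeholder predicates:
    harms_human: bool = False      # evaluating these is the hard part
    disobeys_order: bool = False
    endangers_self: bool = False

# Priority order: Law Zero outranks One outranks Two outranks Three.
LAWS = [
    ("Law Zero",  lambda a: not a.harms_humanity),
    ("Law One",   lambda a: not a.harms_human),
    ("Law Two",   lambda a: not a.disobeys_order),
    ("Law Three", lambda a: not a.endangers_self),
]

def violation_rank(action: Action) -> int:
    """Index of the highest-priority law violated; len(LAWS) if none.
    A higher rank means only lower-priority laws are broken."""
    for i, (_, satisfied) in enumerate(LAWS):
        if not satisfied(action):
            return i
    return len(LAWS)

def choose(candidates: list[Action]) -> Action:
    """Pick the action whose worst violation has the lowest priority."""
    return max(candidates, key=violation_rank)

# An ordered robot must risk itself rather than disobey,
# because Law Two outranks Law Three:
obey = Action("obey, entering danger", endangers_self=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).name)  # -> obey, entering danger
```

The choice function shows why the ordering matters: an action that violates only Law Three beats one that violates Law Two, so the robot risks itself rather than disobey.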

29 Robot Safety
"As robots move into homes and offices, ensuring that they do not injure people will be vital. But how?" "Kenji Urada (born c. 1944, died 1981) was one of the first individuals killed by a robot. Urada was a 37-year-old maintenance engineer at a Kawasaki plant. While working on a broken robot, he failed to turn it off completely; the robot pushed him into a grinding machine with its hydraulic arm, and he died as a result." Over 5 million Roombas have been sold. South Korea has set a goal of a domestic robot in 100% of households by 2020. Japanese firms have been working on robots as domestic help for the elderly.

30 Robot Rights
Are robot rights like animal rights? Example: robbing a bank is a crime for a human, but what happens if a robot robs a bank?

31 SUMMING UP: Widespread embrace of technology by humans
There are no guidelines for developing entities more intelligent than we are. Massive human dislocation or destruction could result (compare the atom bomb). Ultimately, human ethics will have to grapple with the outcomes. Can there be a "higher ethics"?

32 Warfare

33 Warfare New weapons must conform to International Humanitarian Law:
Article 36 of Additional Protocol I (1977) to the Geneva Conventions specifies: "In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party." ("High Contracting Party" is the international-law term for a party to an international agreement.)

34–36 Warfare
Conventional (human) soldiers are not generally regarded as weapons. But do we agree that a sophisticated robotic soldier is a weapon? What about a cyborg? If we merely give a human LASIK eye surgery, the result is probably not a weapon; but how much would have to be enhanced or replaced before we would say it is?

37 Cyberwarfare: Jus ad bellum
Article 2(4) of the UN Charter prohibits every nation from "the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations." Conceptual muddle: what constitutes use of force? Launching a Trojan horse that disrupts military communication? Hacking a billboard to display porn in order to disrupt traffic? Hacking a command-and-control (C&C) center so that it attacks its own population? Most of this from: Jus ad bellum: a just cause for going to war. Jus in bello: a just way to wage war.

38 Cyberwarfare: Jus in bello
Principles: military necessity, minimizing collateral damage, perfidy, distinction, neutrality. Conceptual muddle: what constitutes distinction? If we launch a Trojan horse against an enemy, must it contain something like "This code brought to you compliments of the U.S. government"? (Perfidy: you cannot pretend to be an ambulance. Distinction: you must mark military objects clearly.)

39 Cyberwarfare: Jus in bello (continued)
Conceptual muddle: what constitutes neutrality? If A allows B to drive tanks through its territory on the way to attack C, A is no longer neutral. If A allows network traffic to pass through its routers on the way from B to C and an attack is launched, has A given up its neutrality?

40 ZERO DAYS
Delve deep into the burgeoning world of digital warfare: a black-ops cyber-attack launched by the U.S. and Israel on an Iranian nuclear facility unleashed malware with unforeseen consequences.

