Ethics Aspects of Embedded and Cyber-Physical Systems
Abhilash Thekkilakattil¹, Gordana Dodig-Crnkovic¹,²
¹ Mälardalen University, Sweden
² Chalmers University of Technology and University of Gothenburg, Sweden
Motivation
[Images: Google's self-driving car prototype (2015); Mercedes-Benz Vario-based VITA autonomous prototype (Eureka PROMETHEUS, 1980s); Navlab-1 from CMU's Navlab project (1980s); robotic Volkswagen Passat at Stanford University (2009)]
Increasing use of "intelligence" in cyber-physical systems
Ethical aspects are an important consideration, from the design and development to the deployment and use of cyber-physical systems
Traditionally: who is to blame for accidents?
Now: should software also be held responsible for accidents?
Addressing Ethical Challenges
1. Identify the moral problem
2. Examine the alternative actions at hand
3. Choose a course of action
4. Record feedback
Attributing Responsibility to Software?
Typical arguments against holding software responsible:
– "Humans build the systems, and hence they are ultimately responsible since they can control how the system works"
– "Humans decide under what settings the systems will operate"
– "Robots for which no human actor can be held responsible are poorly designed sociotechnical systems."
Most of these arguments are true to some extent!
…however, not incorporating the possibility of software responsibility into the discourse leaves a policy vacuum, especially with the increasing deployment of Artificial Intelligence (AI) in such systems
Policy Vacuum
Is mainly due to:
Technology outpacing the legal infrastructure
– Legal support for autonomous driving came into effect only after its development
– The "bad sides" of many technologies, like social networking and smartphones, are only becoming evident with use over a long time
Ease of creating "intelligent" technology
– Technology can be created relatively easily, e.g., the Deep Blue computer that defeated humans in chess
– It is very likely that similar advances will be made in the safety-critical domain, e.g., a robonurse
Dangerous situations can arise if future "intelligent" systems are allowed to act outside an ethical framework
This does not mean that human stakeholders are no longer responsible!
Demarcation Problem
To attribute responsibility to software, there is a need for:
Tracing anomalous behaviors to the responsible stakeholder
– Demarcating design errors from "unethical software behavior"
Identifying decisions taken by the software-agents
– Judgment by the software vs. design decisions
This highlights the need for a classification framework to enable traceability of anomalous behaviors
Contributions
1. Framework for classifying different systems w.r.t. automation and autonomy
– Accommodating the notion of "software responsibility"
– Retaining the notion of "human responsibility"
– Demarcating situations where software can be held responsible
2. Recommendation for ethics as an extra-functional property
– Ethics is an important property, just like, e.g., reliability and timeliness, in the context of "truly intelligent" systems
Terminology
Stakeholder: agent who may interact with the cyber-physical system at various stages of its design, development, deployment, and operation
Developer: individual or organization associated with the development of a cyber-physical system
User: any stakeholder (person or software) that uses or operates a cyber-physical system
Software-agent: any software associated with the development and use of a cyber-physical system, capable of making decisions that affect the system itself and its behavior
Classification Framework
Automatic systems
– Systems that typically replace hardware components
– No autonomy: simply imitate the behavior of the replaced component
– Example: drive-by-wire system
Semi-automatic systems
– Automatic systems that are controlled and coordinated by humans
– Example: modern car with drive-by-wire and brake-by-wire
Semi-autonomous systems
– Operate autonomously once assigned a task
– Human-in-loop assigns tasks that are autonomously carried out
– Example: modern drones operate autonomously once given a task
Autonomous systems
– Operate "completely" autonomously
– No human-in-loop to assign tasks or control behavior
– Example: future robots!
[Diagram: cyber-physical systems grouped into human-in-loop vs. no-human-in-loop classes]
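To make the taxonomy concrete, here is a minimal Python sketch of one possible encoding, assuming the framework's two distinguishing axes are whether the system makes operational decisions itself and whether a human-in-loop assigns tasks; the names `SystemClass` and `classify` are illustrative, not from the paper.

```python
from enum import Enum, auto

class SystemClass(Enum):
    """The four system classes from the classification framework."""
    AUTOMATIC = auto()        # replaces hardware; no autonomy
    SEMI_AUTOMATIC = auto()   # automatic parts coordinated by a human
    SEMI_AUTONOMOUS = auto()  # human assigns tasks, system carries them out
    AUTONOMOUS = auto()       # no human-in-loop at all

def classify(acts_autonomously: bool, human_in_loop: bool) -> SystemClass:
    """Map the two distinguishing properties onto a system class.

    acts_autonomously: does the system make operational decisions itself?
    human_in_loop: does a human assign tasks or coordinate behavior?
    """
    if acts_autonomously:
        return SystemClass.SEMI_AUTONOMOUS if human_in_loop else SystemClass.AUTONOMOUS
    return SystemClass.SEMI_AUTOMATIC if human_in_loop else SystemClass.AUTOMATIC

# Examples from the slide:
assert classify(False, False) == SystemClass.AUTOMATIC        # drive-by-wire component
assert classify(False, True)  == SystemClass.SEMI_AUTOMATIC   # modern car with a driver
assert classify(True, True)   == SystemClass.SEMI_AUTONOMOUS  # tasked drone
assert classify(True, False)  == SystemClass.AUTONOMOUS       # future robots
```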
Assigning Responsibility
Automatic system failures
– Automatic systems: no autonomy, simply imitate the behavior of the replaced component
– No intelligence in the system: all decisions are taken according to predictable algorithms, e.g., a steer-by-wire system
– Responsibility for failures is attributable to the manufacturers
Semi-automatic system failures
– Semi-automatic systems: automatic systems that are controlled and coordinated by humans, e.g., an airplane
– Multiple stakeholders are responsible:
Manufacturer: responsible for the correct functioning of the automatic systems
User: responsible for operating the system according to the specifications
Assigning Responsibility
Semi-autonomous system failures
– Semi-autonomous systems: human-in-loop assigns tasks that are autonomously carried out
– There are 3 stakeholders:
User: responsible for how the system is used
Manufacturer: responsible for automatic subsystems
Software-agent: responsible for operational decisions
Autonomous system failures
– Autonomous systems: operate "completely" autonomously without any human-in-loop to assign tasks or control behavior
– There are 2 stakeholders:
Manufacturer: responsible for automatic subsystems
Software-agent: responsible for operational decisions
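The responsibility assignment on these two slides is essentially a lookup table from system class to stakeholders. A sketch of that mapping, reusing the illustrative `SystemClass` enum from above:

```python
# Lookup from system class to the stakeholders the slides hold
# responsible when that kind of system fails.
RESPONSIBLE_STAKEHOLDERS: dict[SystemClass, tuple[str, ...]] = {
    SystemClass.AUTOMATIC:       ("manufacturer",),
    SystemClass.SEMI_AUTOMATIC:  ("manufacturer", "user"),
    SystemClass.SEMI_AUTONOMOUS: ("manufacturer", "user", "software-agent"),
    SystemClass.AUTONOMOUS:      ("manufacturer", "software-agent"),
}

def responsible_for(failure_class: SystemClass) -> tuple[str, ...]:
    """Return who shares responsibility for a failure of this system class."""
    return RESPONSIBLE_STAKEHOLDERS[failure_class]
```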
Challenges
In traditional systems, ethical considerations were accommodated by "human agents" like engineers and users
Future AI-based systems need to include the possibility of an "ethics-aware software-agent"
This brings in several challenges, e.g.:
1. Communication challenge: communication between different stakeholders, especially between "software-agents" and "human-agents"
2. Legal support for "software responsibility": legislation to accommodate software responsibility
Ethics as an Extra-functional Property
Ethics must be part of the extra-functional properties of future AI-based systems
– Software must be programmed to understand and apply ethics when AI-based technologies are deployed
– The ability of software to apply ethics is as important as other extra-functional properties like reliability and security
– This allows for the seamless integration of ethics aspects into the software engineering life cycle
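One way to picture ethics enforced like any other extra-functional property, i.e., checked in the control loop the way a deadline or reliability constraint would be. The `EthicsAware` interface and the `Robonurse` policy below are purely hypothetical illustrations, not something proposed in the paper.

```python
from abc import ABC, abstractmethod

class EthicsAware(ABC):
    """Hypothetical interface: ethics exposed as a checkable property."""
    @abstractmethod
    def is_ethically_permissible(self, action: str) -> bool:
        ...

class Robonurse(EthicsAware):
    """Toy policy; a real agent would reason within an ethical framework."""
    FORBIDDEN = {"ignore_patient_alarm", "administer_unverified_dose"}

    def is_ethically_permissible(self, action: str) -> bool:
        return action not in self.FORBIDDEN

def execute(agent: EthicsAware, action: str) -> None:
    # The ethics check sits in the control loop, analogous to a
    # timeliness or reliability check.
    if not agent.is_ethically_permissible(action):
        raise PermissionError(f"action {action!r} rejected on ethical grounds")
    print(f"executing {action}")

execute(Robonurse(), "fetch_medication")  # runs; a forbidden action would raise
```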
Conclusions
Reluctance to accommodate the notion of "software responsibility" can leave a policy vacuum, leading to dangerous situations
We presented a framework that accommodates the notion of "software responsibility"
– It enables the demarcation between "human responsibility" and "software responsibility"
We highlight the need to consider ethics aspects as an extra-functional property, just like, e.g., reliability
Questions? Thank you!