
EECS 690 April 9

A top-down approach: This approach is meant to generate a rule set from one or more specific ethical theories. Wallach and Allen start off pessimistic about the viability of this approach, but point out that adherence to rules is an aspect of morality that must still be captured.

The Big Picture: The major theories (in Western thought) are Utilitarianism and Deontology. The authors' question about each of these theories is what its computability requirements would be. This approach may shed a unique light on the practice of morality itself.

Consequentialism: Utilitarianism (a subset of consequentialism) might initially appeal to us because of Bentham's focus on calculability. James Gips, in 1995, supplied this list of computational requirements for a consequentialist robot:
1. A way of describing the situation in the world
2. A way of generating possible actions
3. A means of predicting the situation that would result if an action were taken, given the current situation
4. A method of evaluating a situation in terms of its goodness or desirability
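Gips' four requirements can be sketched as four functions feeding a single decision rule. This is a minimal illustration, not anything from Gips' paper: the `State` fields, the action names, and the `utility` weighting are all invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    # 1. A way of describing the situation in the world
    #    (toy attributes chosen for illustration only).
    hunger: int
    safety: int

def actions(state: State) -> list[str]:
    # 2. A way of generating possible actions.
    return ["eat", "hide", "wait"]

def predict(state: State, action: str) -> State:
    # 3. A means of predicting the situation that would result
    #    if an action were taken, given the current situation.
    if action == "eat":
        return State(hunger=state.hunger - 1, safety=state.safety)
    if action == "hide":
        return State(hunger=state.hunger, safety=state.safety + 1)
    return state  # "wait" changes nothing

def utility(state: State) -> int:
    # 4. A method of evaluating a situation's goodness or desirability.
    return state.safety - state.hunger

def choose(state: State) -> str:
    # The consequentialist rule: pick the action whose predicted
    # outcome scores highest under the evaluation function.
    return max(actions(state), key=lambda a: utility(predict(state, a)))
```

Even this toy makes the difficulties on the next slide concrete: the numbers in `utility` are exactly the "numbers assigned to happiness" that the theory never tells us how to choose.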

Some difficulties: How can one assign numbers to something as subjective as happiness? Do we aim for total or average happiness? What are the morally relevant features of any given situation (people, animals, ecosystems)? How far and wide should the calculation of effects extend? How much time is a moral agent allowed to devote to the decision-making process? Note that these are not only problems that arise when thinking about the computability of moral theories; they are problems that concern people's application of these theories, and they remain widely unsettled, not for lack of discussion.

A note: The authors do a good job of avoiding the question “how do humans do this?” when discussing ethical algorithms and behaviors. It may well be that general human behavior is not a good model to emulate for ethical systems. This raises the question of what standard to hold ethical systems to. Do we tolerate the same range of moral failure among these systems? These are questions that might fit here, but for the sake of organization are addressed later in the book.

Asimov’s Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
(Later, a Zeroth Law was added: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.)

Laws of Robotics: Asimov was quite serious about these, and was (I think foolishly) optimistic about the usefulness of the robotic laws as stated. (Asimov’s short essay on the robotic laws is forthcoming in the Further Resources section.) The laws do not fit consequentialist theories very well because of their reliance on special duties. The Zeroth Law is hopelessly vague as an action-guiding principle, and the First Law alone can generate conflicts. There may also be a real, pressing difficulty with negative responsibility.

Specific versus Abstract Specific rules are very easy to apply, but have limited usefulness in novel situations. Still, perhaps part of what ethical systems require is a few specific rules for specific circumstances, though these alone would not be sufficient. Abstract rules are more generally useful, as they allow adaptation, but are correspondingly difficult to apply.

The Categorical Imperative: Act only on a maxim that you could will to become universal law. A computer would need to appreciate:
–a goal
–a maxim (a behavior-guiding means to the goal)
–the implications for achieving the goal of making the maxim universal
Lying, for example, could not be a universal law, because its goal would be thwarted by its being universalized. (Critics of Kant find this test problematic.)
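The universalization test described above can be sketched as a toy simulation. Everything here is an invented illustration, not a serious model of Kant: the maxim names, the 0.5 trust threshold, and the idea of parameterizing a maxim by how many agents adopt it are all assumptions made for the example. The point is only the structure of the test: a maxim fails if its own goal becomes unachievable once everyone adopts it.

```python
def goal_achievable(maxim: str, fraction_adopting: float) -> bool:
    # Toy model (assumed, not from Kant): lying achieves its goal only
    # while most people are honest, because only then do listeners
    # still default to believing what they are told.
    if maxim == "lie_when_convenient":
        return fraction_adopting < 0.5  # assumed trust threshold
    if maxim == "keep_promises":
        return True  # self-consistent at any scale of adoption
    raise ValueError(f"unknown maxim: {maxim}")

def passes_categorical_imperative(maxim: str) -> bool:
    # The slide's test: could the maxim still achieve its goal
    # if it became universal law, i.e. if everyone adopted it?
    return goal_achievable(maxim, fraction_adopting=1.0)
```

In this sketch `passes_categorical_imperative("lie_when_convenient")` comes out false, mirroring the lying example on the slide, while promise-keeping universalizes without self-contradiction.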

Language vagueness and morality Our language is full of words that are vague but that have clear applications and misapplications. (e.g. ‘baldness’ is a vague concept, but Captain Picard IS bald, and the members of the band ZZ Top are not) Perhaps by focusing on the clear applications of moral rules, we might achieve something useful for the less clear cases.