J. Blackmon.  The Neuron Replacement Thought Experiment  Basl on Moral Status  Basl on Teleo-Interests and the Comparable Welfare Thesis  Conclusion.

Presentation transcript:

J. Blackmon

 The Neuron Replacement Thought Experiment  Basl on Moral Status  Basl on Teleo-Interests and the Comparable Welfare Thesis  Conclusion

Could a synthetic thing be conscious?  A common intuition is: Of course not!  But the Neuron Replacement Thought Experiment might convince you otherwise.

The Neuron Replacement Thought Experiment  Suppose bioengineers develop a synthetic neuron, a unit that is functionally identical to our own biological neurons.

The Neuron Replacement Thought Experiment  If one of your neurons is about to die, they can replace it with a perfect functional (although synthetic) duplicate.

The Neuron Replacement Thought Experiment  Imagine that this is done for one of your neurons.  You now have a synthetic neuron doing exactly what your biological neuron had been doing.

The Neuron Replacement Thought Experiment  Would there be any difference?  No one else would notice a difference in your behavior. After all, your behavior is driven by the activity of your neurons, and because this synthetic replacement is functionally equivalent, your behavior will be exactly the same.

The Neuron Replacement Thought Experiment  Suppose now that another of your neurons is replaced with a functionally equivalent synthetic neuron.  And another, and another, …

The Neuron Replacement Thought Experiment  What happens?  Do you gradually lose consciousness even though you continue to act exactly the same?  Do you reach some threshold at which consciousness just vanishes?  Or do you remain conscious, even when every single neuron has been replaced and your “brain” is entirely synthetic?

The Neuron Replacement Thought Experiment  Either you remain conscious throughout, or you lose your conscious experience at some point.  If you remain conscious throughout, then an artificial synthetic thing can be conscious.  If you lose your conscious experience at some point, then consciousness is a property that cannot be scientifically studied.

The Neuron Replacement Thought Experiment  If, on the other hand, you lose your conscious experience at some point, then consciousness is a property that cannot be scientifically studied at the level of behavior.  After all, your body’s behavior remains exactly the same. (And “you” continue to insist that “you” feel fine and are having conscious experience.)

The Neuron Replacement Thought Experiment  Either artificial consciousness is possible or consciousness cannot be detected or investigated by examining one’s behavior!

The Neuron Replacement Thought Experiment  Either artificial consciousness is possible or consciousness cannot be detected or investigated by examining one’s behavior!  But if you think consciousness can be detected and investigated this way, then the answer to whether an artificial thing can be conscious is: Of course!

John Basl’s Account

 Something has moral status if it’s worthy of our consideration in moral deliberation.  An artifact might have a certain kind of moral status just because it belongs to someone else, or because it’s instrumental to doing something morally required.  However, a being might have a different kind of moral status: moral considerability or inherent worth.

 Something is a moral patient if it has a welfare composed of interests to be taken into account for the sake of the individual whose welfare it is.

 Humans are moral patients.  Presumably, certain non-human animals are, too.  We’re going to investigate whether certain kinds of artifacts can be moral patients.

 Something is a moral patient if it has a welfare composed of interests to be taken into account for the sake of the individual whose welfare it is.  Interests are those things which contribute to an individual’s welfare.

 Psychological Interests: those had in virtue of having certain psychological capacities and states.  Teleo-Interests: those had in virtue of being goal-directed or teleologically organized.

The Easy Case  Basl: If x has certain human capacities, then x is a moral patient.  Basl considers and rejects the following alternatives:  Appeal to Religion  Appeal to Species

The Easy Case  Appeal to Religion: If x was chosen by God to be a moral patient, then x is a moral patient.  Problems  First, how do we know whether some x was chosen by God?  Second, how can one justify this to the governed given that some of the governed are secular?

The Easy Case  Appeal to Species: If x belongs to one of the special species, then x is a moral patient.  Problems  We can imagine aliens that are moral patients in the same way we are.  A subset of humans might evolve into a new species.

The Easy Case  So Basl rejects both the Religion and Species alternatives.  Again, if x has certain human capacities, then x is a moral patient.

The Easy Case  So Basl rejects both the Religion and Species alternatives.  Again, if x has certain human capacities, then x is a moral patient.  And, recall, this is what the Turing Test is all about, even if we need to expand on some of the capacities.  And that’s what the Neuron Replacement Thought Experiment is all about, too.

The Harder Cases: Animals and Others

 Basl: The capacity for attitudes toward sensory experiences is sufficient for moral patiency.

The Harder Cases: Animals and Others  Basl: The capacity for attitudes toward sensory experiences is sufficient for moral patiency.  Note that Basl thinks simply having sensory experience is not enough.  Hitting a dog with a hammer is wrong not simply because the dog can experience pain, but because the dog has an aversive attitude to that sensory experience.

Epistemic Challenges  Problem of Other Minds: How do we truly know that other people have conscious minds like us?  Our best answer so far is that we know other humans are mentally like us because they are like us in terms of evolution, physiology, and behavior.  If x shares a common ancestor… If x is physiologically like us… If x behaves in similar ways…

Epistemic Challenges  Our best answer so far is that we know other humans are mentally like us because they are like us in terms of evolution, physiology, and behavior.  But with machines, we do not have these obvious connections.

How can non-sentient things have interests?  Basl builds from work in environmental ethics.  Why is acid bad for the maple tree, while water is good for it?  The answer cannot rely on the maple tree’s welfare unless it can be established that the tree has welfare.  However, the tree cannot suffer or be happy.

How can non-sentient things have interests?  Some people appeal to teleology.

How can non-sentient things have interests?  Some people appeal to teleology.  A maple tree has the end/goal of survival and reproduction.  But if it cannot grow leaves, then it cannot meet its ends.  So its interests/ends are frustrated.

Teleo-Interests  Non-sentient beings can have interests. Thus they can have welfare and so be moral patients.

Teleo-Interests  Non-sentient beings can have interests. Thus they can have welfare and so be moral patients.  If non-sentient beings can have interests, why can’t (non-sentient) machines have them?  Basl considers one objection, the Objection from Derivativeness.

Objection from Derivativeness:  The teleological organization of artifacts is derivative of our interests. (Not so with organisms.)  Therefore, the only sense in which mere machines can have interests is derivative.  But derivative interests are insufficient for welfare and moral considerability/patiency.

Basl’s Reply to the Objection from Derivativeness  There are two kinds of derivativeness.  Use-Derivativeness: Machines which exist only for our use because of our needs and desires may have use-derivative interests. Their very existence derives from our intentions and interests.  Explanatory Derivativeness: The ends or teleo-organization of a mere machine can only be explained by reference to the intentions or ends of conscious beings. The explanation derives from our intentions.

Basl’s Reply to the Objection from Derivativeness  Now, consider organisms such as crops or pets. Yes, these things exist because of our interest. However, they clearly have interests of their own.

Basl’s Reply to the Objection from Derivativeness  Now, consider organisms such as crops or pets. Yes, these things exist because of our interest. However, they clearly have interests of their own.  So use-derivativeness does not disqualify a thing from having interests of its own.

Basl’s Reply to the Objection from Derivativeness  Similarly, consider the case in which I play a significant role in the life and career choice of a child. That child’s preferences might not be explainable without reference to mine; however, the child still has interests of his or her own.  Thus explanatory derivativeness does not disqualify a thing from having interests of its own.

Basl’s Reply (my expansion of it)  Allow me to expand.  Physically identical things can have different origins.  Thus if naturally occurring maple trees have interests but artificially occurring maple trees do not, then by the standards of those making the Objection from Derivativeness, the first would “inherit” welfare while the second would have none.

Basl’s Reply (my expansion of it)  Moreover, if Jill 1 is born naturally, but Jill 2 is a clone of Jill 1 created out of our own interests, then Jill 2 would (according to the Objection from Derivativeness) have no interests!  But Jill 2 would be genetically identical to Jill 1.

Basl’s Reply (my expansion of it)  Moreover, if Jill 1 is born naturally, but Jill 2 is a clone of Jill 1 created out of our own interests, then Jill 2 would (according to the Objection from Derivativeness) have no interests!  But Jill 2 would be genetically identical to Jill 1.  Thus having interests (on this allegedly faulty view!) is not something that arises just from one’s genetics or physical make-up.

Basl’s Reply (my expansion of it)  Moreover, if Jill 1 is born naturally, but Jill 2 is a clone of Jill 1 created out of our own interests, then Jill 2 would (according to the Objection from Derivativeness) have no interests!  But Jill 2 would be genetically identical to Jill 1.  Thus having interests (on this allegedly faulty view!) is not something that arises just from one’s genetics or physical make-up.  The consequences are absurd.

Basl says the Objection from Derivativeness is the best objection, but it fails. His alternative is the Comparable Welfare Thesis.

Comparable Welfare Thesis (CWT): If non-sentient organisms have teleo-interests, then mere machines have teleo-interests.

Comparable Welfare Thesis (CWT): If non-sentient organisms have teleo-interests, then mere machines have teleo-interests.  Note that the CWT, of course, does not say that mere machines have teleo-interests.  It just says that if non-sentient organisms have them, then so do mere machines.  The idea is that whatever justifies acknowledging teleo-interests in organisms also justifies acknowledging them in mere machines.

Comparable Welfare Thesis (CWT): If non-sentient organisms have teleo-interests, then mere machines have teleo-interests.
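Read as a conditional, the CWT also licenses the contrapositive move: whoever denies teleo-interests to mere machines must deny them to non-sentient organisms as well. The following is my own minimal formalization of that step, with hypothetical proposition names, checkable in Lean 4.

```lean
-- Hypothetical labels:
--   OrganismTI : non-sentient organisms have teleo-interests
--   MachineTI  : mere machines have teleo-interests
example (OrganismTI MachineTI : Prop)
    (cwt : OrganismTI → MachineTI) :
    ¬MachineTI → ¬OrganismTI :=
  -- denying the consequent of the CWT denies its antecedent
  fun denial h => denial (cwt h)
```

This is why the environmental ethicist, who wants to keep teleo-interests for organisms, must attack the CWT itself rather than simply deny machine teleo-interests.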

But consider:  Basl has argued that derivativeness can’t rob a thing of its teleo-interests. So he has rejected the Objection from Derivativeness.  He has not refuted the position that teleo-interests must be acquired “naturally”.

Thus the environmental ethicist might take the following stance:  Agreed, an organism can’t be excluded from teleo-interests simply because someone created and molded it to suit their own ends. (The cases of the crops, pets, and child can all be accepted.)  But this doesn’t mean that mere machines have teleo-interests.

Thus the environmental ethicist might take the following stance (cont’d):  How can crops, pets, and children have teleo-interests while mere machines do not?  The environmental ethicist might try to argue that the crop, the pet, and the child are all members of natural kinds (plants, animals), while mere machines are not.

Thus the environmental ethicist might take the following stance (cont’d):  How can crops, pets, and children have teleo-interests while mere machines do not?  The environmental ethicist might try to argue that the crop, the pet, and the child are all members of natural kinds (plants, animals), while mere machines are not. This view faces a line-drawing problem. (Consider genetically modified organisms.)

Basl argues that teleo-interests are nonetheless unimportant. “You may recycle your computers with impunity.”

 So long as machines are not conscious, we may do with them largely as we please.

 But once we have something approaching artificial consciousness, we must consider whether these things are conscious and, crucially, whether they have attitudes.

 So long as machines are not conscious, we may do with them largely as we please.  But once we have something approaching artificial consciousness, we must consider whether these things are conscious and, crucially, whether they have attitudes.  We must then be careful with the epistemic uncertainties so as to avoid ignoring a possible person.

End