FUNCTIONALISM

“I shall, in short, argue that pain is not a brain state, in the sense of a physical-chemical state of the brain... but another kind of state entirely. I propose the hypothesis that pain, or the state of being in pain, is a functional state of a whole organism” — Hilary Putnam, ‘Psychological Predicates’

The primary objection to the identity theory is multiple realisability. In response to this, physicalists dropped type identity in favour of token identity: each token pain is identical to a token physical state, e.g. a C-fibre firing in humans, a silicon-chip state in computers, a spaghetti state in aliens

There’s a difference between {an apple, a burnt keyboard, John’s D-fibres firing} and {John’s C-fibres firing, Izlyr’s spaghetti state, Kryten 2X4B-523P’s silicon-chip state}. What makes C-fibre firings and silicon states pains, but burnt keyboards and D-fibre firings not?

Some things are defined in terms of what they’re made of, e.g. water is H2O. However, many things are defined in terms of their functions. E.g. the heart is something that pumps blood around the body of an organism; whether it’s made of muscle, or of metal and plastic, is irrelevant

The property of ‘having the function y’ is a property that can occur in many different physical things. For example, ‘being an eye’ is a functional property: there are lots of types of eyes that work in different ways and have different physical properties

FUNCTIONALISM says that something is a mental state because it has a particular function. Mental states are functional states, and they are real inner states

CONSIDER PAIN: It is typically caused by bodily injury; it causes distress, a desire to make it go away, a belief about the location of the injury, etc.; and it typically causes wincing, bad language, nursing of the injured area, etc. Any state that plays this role is a pain

THREE KINDS OF RELATIONS CONSTITUTE THE ESSENTIAL FEATURES OF A MENTAL STATE: 1) typical ways the environment causes the mental state 2) typical ways the mental state interacts with other mental states 3) typical ways the mental state, in conjunction with other mental states, causes behaviour
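As an illustrative sketch (not from the slides, and with every name below hypothetical), this three-part definition can be pictured as a data structure: a mental state is nothing over and above its position in a network of causal relations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionalRole:
    """A mental state, functionalist-style: a node in a causal network."""
    environmental_causes: tuple    # (1) typical ways the environment causes it
    mental_effects: tuple          # (2) typical interactions with other mental states
    behavioural_effects: tuple     # (3) behaviour it typically (helps) cause

# Pain, as characterised on the previous slide
PAIN = FunctionalRole(
    environmental_causes=("bodily injury",),
    mental_effects=("distress", "desire to be rid of it",
                    "belief about the injury's location"),
    behavioural_effects=("wincing", "bad language", "nursing the injured area"),
)

def is_pain(state: FunctionalRole) -> bool:
    """Any state that plays this causal role is a pain, whatever it is
    physically made of (multiple realisability)."""
    return state == PAIN
```

Note that nothing in the definition mentions what realises the role: muscle, C-fibres or silicon would all count, which is exactly the functionalist's point.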

You don't have to be a physicalist to be a functionalist (Chalmers isn’t), since functionalism defines mental states in terms of their function, not in terms of what they’re made of. But most functionalists are physicalists

FUNCTIONALISM CAN BE SEEN AS A DEVELOPMENT FROM BEHAVIOURISM. Two big problems with behaviourism: 1) it denies that mental states are inner states; 2) the circularity problem: a behavioural analysis of any mental term will implicitly invoke other mental terms

OBJECTION: CIRCULARITY Mental states are defined in terms of their relations to sensory input, behavioural output, and other mental states. Since functionalism defines each mental state partly by its relationship to other mental states, the definition looks circular

Our analysis of beliefs will say something about sensory input, behavioural output, and the relation of the belief to other mental states such as desires; our analysis of desires will say something about sensory input, behavioural output, and the relation of the desire to other mental states such as beliefs. We can’t understand beliefs without invoking desires, and we can’t understand desires without invoking beliefs

TURING MACHINES (ALAN TURING) consist of: an infinitely long tape divided into cells; a scanner-printer (head) that reads one cell at a time, can erase what is in the cell, and can write something new; a finite set of symbols that are written in the cells; and a finite set of machine states that tell the head what to do when it reads the symbol in a cell

The head reads the cell and follows the machine-state instructions: it erases the symbol, types in a new symbol as per the instructions, then moves one place to the left and continues

Consider a machine that takes a number and adds ‘1’ to it. The machine's alphabet is ‘0’ and ‘1’, and we can represent numbers as strings of 1s: 1 = 1, 11 = 2, 111 = 3, etc.

Turing machines can compute any function for which there is an explicit, finite, step-by-step procedure

Following the instructions in its machine table, the machine adds a 1 to the 1s on the tape (the slide's table itself is not in the transcript; a reconstruction is sketched below)
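A minimal simulator in Python, together with a hypothetical machine table for the successor machine the slides describe. The table is a reconstruction for illustration, not the lecturer's or Putnam's:

```python
def run(tape, head, state, table):
    """Run a Turing machine until it enters the halting state."""
    tape = dict(enumerate(tape))            # sparse tape: missing cells read as '0'
    while state != "halt":
        symbol = tape.get(head, "0")        # the head reads the current cell
        write, move, state = table[(state, symbol)]
        tape[head] = write                  # erase and write a new symbol
        head += {"L": -1, "R": +1, "N": 0}[move]
    return "".join(tape.get(i, "0") for i in range(min(tape), max(tape) + 1))

# Successor machine: sweep left over the 1s, then write one more 1 and halt.
ADD_ONE = {
    ("go", "1"): ("1", "L", "go"),    # still on the number: keep moving left
    ("go", "0"): ("1", "N", "halt"),  # found the blank before it: write a 1, stop
}

print(run("111", head=2, state="go", table=ADD_ONE))  # '1111' — 3 becomes 4
```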

BLOCK’S COLA MACHINE
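The slide's diagram is not in the transcript. As Block standardly presents it (in ‘What is Functionalism?’), the example is a drinks machine that sells a cola for 10¢ and accepts nickels (5¢) and dimes (10¢); its two internal states, S1 and S2, are defined purely by the machine table, i.e. by what each state does with each input. A Python rendering, as an illustrative sketch:

```python
# Block's cola machine as a machine table: S1 ("owed 10 cents") and
# S2 ("owed 5 cents") are individuated entirely by their functional roles.
COLA_TABLE = {
    ("S1", "nickel"): ([],                 "S2"),
    ("S1", "dime"):   (["cola"],           "S1"),
    ("S2", "nickel"): (["cola"],           "S1"),
    ("S2", "dime"):   (["cola", "nickel"], "S1"),  # cola plus 5 cents change
}

def insert_coin(state, coin):
    outputs, next_state = COLA_TABLE[(state, coin)]
    return outputs, next_state

state = "S1"
for coin in ["nickel", "nickel"]:          # pay 10 cents in nickels
    out, state = insert_coin(state, coin)
    print(coin, "->", out, "| now in", state)
```

Nothing about the table says what S1 and S2 are made of, which is what makes the machine a toy model of functionalism about mental states.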

Functionalism has a circularity problem: we can’t define mental state M1 without invoking M2, but we can't define M2 without invoking M1. Yet the same is true of Turing machines: the machine state of a TM is defined entirely by its relations to inputs, outputs, and other machine states. This circularity in defining the states of a TM doesn't cause any problems (it is not a vicious circle)

MACHINE FUNCTIONALISM (HILARY PUTNAM) The mind is a Turing machine, and mental states are states of its machine table; i.e. the brain is a computer, and the mind is a computer program. The brain is the hardware; the mind is the software

TMs are multiply realisable: the software can be implemented in other kinds of hardware, and anything that runs the same program would have the same machine states

RAMSIFICATION (DAVID LEWIS) To Ramsify a sentence, we replace all the terms and phrases that refer to mental states with variables, e.g. a, b, c. Each variable is then bound by the existential quantifier, ∃ (the backwards ‘E’ of logic, which means ‘there is an a such that…’), with one quantifier per variable put at the front of the Ramsified sentence. This specifies the inputs, outputs, and relations between the internal states without using any mental terms

Since, in functionalism, a mental state is defined entirely by its relations to inputs, outputs, and other internal states, the Ramsey sentence replicates the original sentence as a specification of the mental states

PAIN Bodily damage and alertness cause pain; and pain causes wincing and distress; and distress causes a desire to be rid of the pain; and the desire to be rid of the pain, together with the belief that nursing the damaged area will alleviate the pain, causes nursing of the damaged area. Ramsifying the above: x is in pain = ∃a ∃b ∃c ∃d ∃e (bodily damage and a cause b; and b causes wincing and c; and c causes d; and d together with e causes nursing of the damaged area; and x is in b)

To say that Frank is in pain is to say that internal states a, b, c, d, e are related to each other, to inputs, and to outputs in the way specified, and that Frank is in state b
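Written out with explicit quantifiers, the slide's Ramsey sentence for pain looks like this (a LaTeX rendering of the slide's own example, where a stands in for alertness, b for pain, c for distress, d for the desire, and e for the belief):

```latex
x \text{ is in pain} \;=\; \exists a\,\exists b\,\exists c\,\exists d\,\exists e\;\big[
  (\text{bodily damage} \wedge a) \text{ cause } b
  \;\wedge\; b \text{ causes wincing and } c
  \;\wedge\; c \text{ causes } d
  \;\wedge\; (d \wedge e) \text{ causes nursing of the damaged area}
  \;\wedge\; x \text{ is in } b \,\big]
```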

Such sentences aren't much use on their own, but we can Ramsify our general talk of the mental as a whole, applying it to the various platitudes found in folk psychology

All mental states can be analysed like this. This would allow us to define mental states simultaneously but without circularity. The network is abstract and its physical manifestation is irrelevant, so it is multiply realisable

Frank perceives the orange in the kitchen; this causes him to believe that the orange is in the kitchen; this, together with his desire to get the orange, causes him to pick up the orange; and this causes him to feel satisfaction

PROBLEMS WITH RAMSIFICATION The functionalist defines all mental terms at once, in terms of the whole causal network of inputs, outputs, and internal states. When asked e.g. “what is pain?” or “what is desire?”, s/he simply points to one of the nodes in that network. So if any clause of the Ramsey sentence is false, the whole sentence is false

For example, to say that a dog is in pain is to say that (bodily damage and a cause b; and b causes wincing and c; and c causes d; and d together with e causes nursing of the damaged area) and the dog is in b. But e is a belief, and it's implausible to suppose that dogs have beliefs (beliefs are propositional and linguistic)

Since all our mental terms are defined together, and since dogs cannot have beliefs, it follows that dogs cannot experience pain. The functionalist falls victim to the same chauvinism as the type identity theory

Only adult humans can be in pain; babies, like dogs, do not literally have beliefs. The Ramsey sentence the functionalist uses will contain all the platitudes of folk psychology; if any of it is false, the sentence as a whole is false

The holistic functionalist defines mental terms simultaneously, in a single Ramsey sentence. The molecular functionalist defines mental terms in independent clusters, so there are many Ramsey sentences; something could then, e.g., fail to have beliefs but still have emotions

ACTIVITIES Design a simple Turing machine. Construct a complex statement containing folk-psychology views about one of the following: being in love, going to the loo [careful], taking an exam. Ramsify it!

QUESTIONS Outline functionalism. Explain what a Turing machine is and how it can be used to support functionalism (against claims of circularity, and for multiple realisability)

CRITICISMS OF FUNCTIONALISM

QUALIA Qualia are ‘phenomenal properties’: they are what give an experience its distinctive quality, e.g. ‘what it is like’ to experience redness or to smell a rose. We are aware of these properties through consciousness and introspection

1. Qualia, by definition, are intrinsic, non-representational properties of conscious mental states. 2. Intrinsic, non-representational properties cannot, by definition, be completely analysed in terms of their causal roles, because causal roles are relational properties, not intrinsic properties. 3. Therefore, if qualia exist, some mental properties cannot be analysed in terms of their causal roles.

4. Functionalism claims that all mental properties are functional properties which can be completely analysed in terms of their causal roles. 5. Therefore, if qualia exist, functionalism is false. 6. Qualia exist. 7. Therefore, functionalism is false.

ABSENT QUALIA The possibility of a functional duplicate with no qualia. Suppose we have a complete functional description of your mental states: for each and every one of your mental states, we have an input-output analysis (Block calls this a ‘machine table’)

In ‘Troubles with Functionalism’, Block accuses functionalism of ‘liberalism’: the tendency to ascribe minds to things that do not have them. He outlines two systems which could be functionally equivalent to a human being but without mental states

Imagine a body like yours with the head hollowed out and replaced with a set of tiny people who realise the same machine table as you. According to the functionalist account, this ‘homunculi-head’ (i.e. head full of people) would have a mind, with experiences of pain and intentional states such as beliefs and desires

Block argues that such a system would not be minded: there is nothing it is like to be the homunculi-head. However, no physical mechanism seems intuitively plausible as a seat of qualia, not even a brain

CHINESE BRAIN (NED BLOCK) Imagine the entire nation of China simulates the workings of one brain, so that a single Chinese person takes the place of one neurone. Each is given a two-way radio connecting them to each other, and connecting some of them to an artificial body

Once we get the inputs, outputs, and relations between internal states right, the whole nation of China will realise the same functional organisation as a human brain

According to functionalism, this should create a mind; but it is very difficult to believe that there would be a ‘Chinese consciousness’. If the Chinese system replicated the state of my brain when I feel pain, would something be in pain, and if so, what?

The Chinese system, although it duplicates your functioning, can’t duplicate your mind, because some mental states involve qualia, and qualia are not functional states

This is one version of the absent qualia problem: it seems possible that there could be systems that share our functional organization but that have no qualia and no mental states whatsoever (functional zombies)

The claim that qualia exist can also be established by the possibility of a functional duplicate with different qualia: if two people can have states with identical functions but different phenomenal properties, we have disproved functionalism

INVERTED QUALIA This version of the objection is known as the ‘inverted qualia’ or ‘inverted spectrum’ thought experiment. Suppose someone has an inverted experience of colour: his vision seems to work the same way as yours, but where you see a green pigment, he sees red

i.e. ‘what it’s like for you to see green’ and ‘what it’s like for him to see red’ are functionally identical: both have the same inputs (grass) and the same outputs (e.g. saying ‘grass is green’)

Although the functionalist would say that you have the same mental states, you don’t, because his inner experiences are not identical to yours in terms of their intrinsic properties (qualia)

The primary response is to claim that if somebody is really functionally equivalent to you, they necessarily have the same qualia as you; so the notion of inverted colour qualia is incoherent

Mental states are the products of the particular physical states that constitute them; something without the same neurobiology would not be functionally equivalent

However, this sounds more like type identity theory, and it means that qualia are not multiply realisable. Inverted colour qualia also seem to be a serious empirical possibility in pseudonormal vision

The possibility of spectrum inversion is ruled out by functionalism; yet since it is conceivable, functionalism must be false

It is conceivable because qualia have intrinsic qualities, regardless of how they relate to other mental states or to sensory inputs and behavioural outputs

If the hypothesis is both irrefutable and unconfirmable, we may be inclined to conclude that the idea of inverted qualia is nonsensical

We cannot make coherent sense of the supposed difference between you and me if we cannot point to anything in the world that would establish the difference

We can modify the thought experiment so that it is specific to functionalism. If I was born with normal vision, but had an operation to switch the neural pathways from the optic nerve to the visual cortex, my qualia would be inverted

I would learn the colour vocabulary and become fully functionally equivalent to you; this shows that there is more to qualia than can be captured by a functionalist account

In response to this, functionalists can still claim that if we react in similar and complex ways to the same stimuli and if qualia play the same complex role in relation to other mental states and behaviour, this is all we need to be sure that we are in the same mental state

Some functionalists concede that in this version of the thought experiment the intrinsic physical differences would produce a different qualitative feel, and so concede that qualia cannot be given a complete functional definition

However, functionalists need not give up on the theory as an account of most of our mental states such as beliefs and desires

THE TURING TEST A test of whether an artificial intelligence could be said to have a mind (with beliefs and other intentional states). If a computer could communicate with a human being in such a way that the human being could not tell the difference between conversing with the computer and conversing with another human being, it could be said to have a mind
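As a minimal sketch of the protocol (every name below is hypothetical, and the interrogator and respondents are just placeholder callables): the interrogator exchanges text with two hidden parties and must judge which is the machine.

```python
import random

def turing_test(ask, verdict, human_reply, machine_reply, rounds=5):
    """One run of the imitation game: text-only exchanges with two hidden
    respondents, then a judgement. The machine passes if it escapes detection."""
    hidden = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                      # hide which label is the machine
        hidden = {"A": machine_reply, "B": human_reply}
    transcript = []
    for _ in range(rounds):
        for label, reply in hidden.items():
            question = ask(label, transcript)      # interrogator questions "A" or "B"
            transcript.append((label, question, reply(question)))
    guess = verdict(transcript)                    # interrogator names the machine
    return hidden[guess] is not machine_reply      # True: the machine went undetected
```

On the functionalist picture this is just the right kind of test, since only inputs and outputs are visible; Searle's Chinese room, below, is designed to show that passing it is not sufficient for understanding.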

Our ordinary understanding of their basic operations supports the view that computers have minds: we say that a computer has ‘memory’, that it processes information, uses language, calculates, obeys commands, follows rules, etc.

Searle opposes machine functionalism. He claims that mental states are essentially natural phenomena, in the same way as other biological functions, and that they need a certain neurophysiology, i.e. a living brain: organic brains are required for consciousness, so no artificial intelligence could be conscious

He tries to show that a computer which was functionally equivalent to a human being with respect to linguistic behaviour, and so could pass the Turing test, still wouldn’t be conscious

CHINESE ROOM (JOHN SEARLE) ‘It is raining’, ‘It’s raining’, ‘Es regnet’: these sentences are syntactically different but semantically the same. Syntax is about the form of symbols and the rules of grammar; semantics is about the meaning we construct from those rules

SEARLE’S ARGUMENT: (P1) Programs are entirely syntactical (P2) Minds have semantics (P3) Syntax is not sufficient for semantics (C) Minds are not just programs

You're locked in a room, with two slots to the outside world marked “in” and “out”. In the room there are boxes of Chinese symbols and a rule book containing instructions. Through the in-slot, people pass you Chinese symbols (in fact, these are coherent Chinese sentences)

You look up the symbols in the rule book, and it tells you which symbols to pass back through the out-slot; these are perfectly coherent replies

THE CHINESE ROOM

The Chinese Room is a computer that simulates understanding of Chinese: the boxes of symbols are the database, the rule book is the program, and you are the hardware implementing the program. (Note that the inputs and outputs are the same as if there were somebody in the room who did understand Chinese.)
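A sketch of the point in code (the two rule-book entries are invented for illustration; the English glosses are for the reader only, and nothing inside the room has access to them): a lookup over symbol shapes can produce perfectly coherent replies while meaning appears nowhere in the system.

```python
# The room as pure syntax: symbol shapes in, symbol shapes out.
# A hypothetical two-entry fragment of the rule book; a real one would be vast.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天下雨吗？": "没有，今天晴天。",   # "Is it raining today?" -> "No, it's sunny."
}

def room(symbols_in: str) -> str:
    """The person inside matches shapes against the rule book and copies out
    the listed reply; at no point does anything here grasp what the symbols mean."""
    return RULE_BOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # coherent Chinese out, zero understanding anywhere
```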

Understanding a language requires more than manipulating symbols It needs you to understand the meaning of what you are saying or writing

Merely implementing the right program does not in itself generate any semantics You have only simulated understanding of Chinese, not replicated it

Searle’s Chinese room argument focuses on intentionality, the feature of our mental states which enables them to be about things. He distinguishes different sorts of intentionality: as-if intentionality is possessed by things like rivers as they flow towards the sea; this is how we represent the river, not something possessed by the river itself

Intrinsic intentionality is only possessed by minds. Although they can pass the Turing test, computers do not have intrinsic intentionality; like the person in the room, a computer only has as-if intentionality

Note: Searle does not argue that it’s impossible for e.g. computers made of silicon chips to have minds All he’s arguing is that they can’t have minds merely in virtue of whatever programs they’re running

SYSTEMS RESPONSE The person in the room doesn’t understand Chinese, but the person is part of a system (symbols, rule book, etc.), and the system as a whole could pass the Turing test. We attribute understanding not to the individual (the neuron) but to the entire room (the brain)

SEARLE’S RESPONSE The person in the room doesn’t understand Chinese because s/he doesn’t possess intentionality and has no way to attach meaning to the symbols. But if one person has no way to attach meaning to the symbols, the room as a whole has no way to do this either

Searle suggests an extension to the thought experiment: imagine the person in the room memorizes the database and the rule book. S/he goes out and converses with people face-to-face in Chinese, but still doesn’t understand Chinese, because all s/he's doing is manipulating symbols

INTUITION IS UNRELIABLE The Chinese brain thought experiment appeals to our intuition that the system cannot be conscious. But our intuitions could be wrong: once we are used to talking machines, we may find that they change

Steven Pinker asks us to imagine a race of intelligent aliens with silicon brains, who cannot believe that humans are conscious because our brains are made of meat.

The only basis we have for ascribing minds to others is that they behave appropriately; if we are happy to ascribe mentality to other humans, we should be happy to do so to a machine capable of behaving in the same way. Any other attitude would be chauvinistic

Paul and Patricia Churchland parody Searle's argument: (P1) Electricity and magnetism are forces; (P2) Luminance is a property of light; (P3) Forces are not sufficient for luminance; (C) Light is not just electromagnetism.

THE LUMINOUS ROOM Moving a magnet up and down quickly generates electromagnetic waves. If electromagnetic waves are in themselves sufficient for luminance, this will produce luminance; but that seems absurd, because you can't produce light merely by producing the right forces

SYNTAX IS OBSERVER-RELATIVE (JOHN SEARLE) Things like gravitation, mass, etc. are intrinsic features of the world that would exist whether or not there were any observers. Other things are observer-relative, e.g. its being a nice day for a picnic. The things going on in a computer only have a syntax because we assign a syntax to them

Consider a wall: it contains billions of molecules, all moving in various ways. For some of those molecules, there will be a pattern of movements that is functionally identical to the structure of, e.g., a word-processing program

Nobody would argue that walls are word processors, because it's impossible in practice for us to use them in that way. In practice we can assign syntactical properties to calculators and laptops, so we treat them as computers running programs; but in principle we could assign those same syntactical properties to any sufficiently large object

SEARLE’S VIEWS 1) Syntax is not sufficient for semantics (Chinese Room); 2) Syntax is observer-relative: nothing has syntactic properties intrinsically. So the idea that the brain is a computer and the mind a program tells us nothing about how the brain and the mind really work

FUNCTIONS ARE OBSERVER-RELATIVE Functionalists say that if something performs the function of a heart, whether it’s made out of muscle or of metal and plastic is irrelevant. But an artificial heart is not the same as a biological heart: an artificial heart is a heart in virtue of our intentions for it, namely to pump blood around the body. We could equally describe a different function of the heart, e.g. to make a rhythmic sound

When we describe the function of something, there is always a normative judgement involved. If so, the existence of functions presupposes mentality, and so cannot be used to explain it without circularity. This also raises the question of whether a defective heart is still a heart

The functionalist may say that what makes something a heart is that it was selected to pump blood. But appealing to natural selection only works if the trait in question was in fact selected for: many biological traits were not selected for, but are by-products of traits that were

To define something according to its functional role, we have to establish that this is what it was selected for. Something as complex as the brain will have lots of by-products; Chomsky claims that our ability to use language is one of them

We are in search of some ‘mysterious property’ if we insist the machine must possess supposedly intrinsic intentionality; it’s not obvious that any theory of mind can explain this feature of consciousness

We can deny that anything – computer, Chinese room or human being – possesses intrinsic intentionality; perhaps the only kind of intentionality is the ‘as-if’ kind. This is just a way of interpreting and predicting human behaviour, ultimately reducible to a set of causal relations