Computational Theory of Mind

Pylyshyn’s Starting Point “The most remarkable property of human behavior is that in order to capture what is systematic about behavior involving intelligence it is necessary to recognize equivalence classes of causal events that cannot be characterized using the terms of existing natural sciences.” (191)

Two Major Breakthroughs Pylyshyn emphasizes two major breakthroughs in Cognitive Science: (1) the discovery of subconscious mental states, and (2) the discovery of computers.

Two Major Breakthroughs Appreciating the existence of subconscious mental states allows us to appeal to complicated “under the hood” processes to explain intelligent behavior and cognitive capabilities.

Two Major Breakthroughs The discovery of the computer allowed us to understand how “reasoning” could be carried out by purely mechanical processes. By “reasoning” here, Pylyshyn means something like “information processing” or “transitions among informational states.”

A Digression into Vision Science One of the most successful explanatory uses of subconscious mental states comes from vision science.

The Underdetermination Problem When light hits your retinas in a certain way, that pattern of stimulation is compatible with an infinite number of ways the world actually is. Your visual system has to somehow take information that does not entail any particular state of the environment and reliably produce correct representations.

The Underdetermination Problem The astounding thing is that your visual system almost always gets it right! How? This question is known in vision science as “the underdetermination problem” or “the inverse problem.”

Unconscious Inferences Helmholtz (1867) proposed that your visual system does this by (1) making some assumptions about the environment, and (2) drawing inferences from those assumptions and the retinal stimulation to conclusions about the world.
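
To make Helmholtz's picture concrete, here is a minimal sketch (not from the lecture) of inference under built-in assumptions, written in Python. The retinal angle, the candidate size/distance pairs, and the "familiar size" assumption are all invented for illustration; the point is only that an ambiguous proximal stimulus plus an assumption yields a single interpretation.

```python
# Toy illustration of Helmholtz-style "unconscious inference" (hypothetical numbers).
# A retinal image of a given angular size is compatible with many
# (physical_size, distance) pairs; an assumption about familiar object sizes
# lets the system settle on one interpretation.

RETINAL_ANGLE = 0.05  # radians; the proximal stimulus the eye actually receives

# Candidate interpretations: each (size in m, distance in m) pair projects to the SAME angle.
candidates = [(RETINAL_ANGLE * d, d) for d in (0.5, 2.0, 10.0, 100.0)]

# Built-in assumption: the object is a familiar one about 0.1 m tall (say, a cup).
ASSUMED_SIZE = 0.1

def plausibility(size, assumed=ASSUMED_SIZE):
    """Higher when the hypothesized size is close to the assumed familiar size."""
    return 1.0 / (1.0 + abs(size - assumed))

best_size, best_distance = max(candidates, key=lambda c: plausibility(c[0]))
print(f"Percept: an object {best_size:.2f} m tall at {best_distance} m away")
```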

Unconscious Inferences One way to see that he is right about this is to look at some cases where your visual system gets it wrong. These are known as perceptual illusions.

Crater Illusion

Hollow Face Illusion

Kanizsa Triangle

Checkerboard Illusion

Visual System as a Computer Think of the visual system as a computer running a program in your subconscious: it takes inputs (retinal stimulations), it engages in a set of automatic computations beyond your control, and it produces an experience of the world from these processes. Such a theory accounts for our visual systems’ reliability as well as why we see the illusions that we do!
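
The three-step picture on this slide (input, automatic computation, output) can be sketched as a tiny program. The "light comes from above" assumption used below is a standard example from vision science rather than something the slides spell out, and the function and its inputs are invented for illustration; the point is that the very same mechanical process that usually gets the world right produces the crater illusion when its assumption is violated.

```python
# Minimal sketch of the "visual system as a program" idea (illustrative only).
# Input: a shading pattern from the retina. Computation: apply a built-in
# "light comes from above" assumption. Output: a percept of 3D shape.

def perceive_shape(shading: str) -> str:
    """Map a shading pattern to a shape percept, assuming light from above."""
    if shading == "bright_on_top":
        return "bump (convex)"
    if shading == "dark_on_top":
        return "crater (concave)"
    return "flat"

# Normal case: a real bump lit from above produces bright-on-top shading.
print(perceive_shape("bright_on_top"))  # -> bump (convex)

# Illusion case: a crater lit from BELOW also produces bright-on-top shading,
# so the same automatic computation outputs "bump" -- the crater illusion.
print(perceive_shape("bright_on_top"))  # -> bump (convex), even though it is a crater
```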

The Computational Theory of Mind Fodor hypothesized that there is a basic programming language for our minds. He called it the “Language of Thought,” or “Mentalese” for short. Mentalese symbols have meaning; Mentalese has a grammar and construction rules just like a natural language; and, like a natural language, it is both productive and systematic. But the operations the mind performs over Mentalese symbols are purely syntactic (as in a computer). The Language of Thought is like the native programming language of the mind.
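
As a loose analogy (and not Fodor's own formalism), the productivity and systematicity of Mentalese can be pictured with a recursive data structure: complex "sentences" are built from atomic symbols by purely syntactic construction rules, and the same parts can be recombined without limit. All of the symbol names below are invented for illustration.

```python
# Toy analogy for Language of Thought syntax (not Fodor's actual proposal):
# complex sentences are built from atomic symbols by recursive, purely
# syntactic rules, and the same parts can be freely recombined.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    symbol: str        # an atomic Mentalese symbol, e.g. "CUP"

@dataclass(frozen=True)
class On:
    figure: object     # the sub-sentence for what is on top
    ground: object     # the sub-sentence for what it rests on

cup, table, coffee = Atom("CUP"), Atom("TABLE"), Atom("COFFEE")

s1 = On(cup, table)              # "cup on the table"
s2 = On(coffee, On(cup, table))  # productivity: sentences embed inside sentences
s3 = On(table, cup)              # systematicity: the same parts, recombined

print(s1)
print(s2)
```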

The Computational Theory of Mind Mental states are sentences in the language of thought. Whether a particular Mentalese sentence S is a belief, desire, or perception depends on its functional role in the overall system. A useful heuristic is to think of the sentences as being moved around among different “boxes.”

The Computational Theory of Mind Suppose I have a Mentalese sentence that means “coffee in the cup on the table.” Because of a certain retinal stimulation, this sentence goes into my perception box. A series of computations quickly transfers it to my belief box. Once in the belief box, it can interact with sentences in the desire box like “desire for coffee.” Those two sentences cause a third Mentalese sentence to be written down in my intention box: “Drink from the cup on the table.”

The Computational Theory of Mind How the sentences get moved around and interact is based purely on their syntactic elements. The “coffee” symbol in the belief and in the desire is matched, and this automatically causes the intention. The meaning of the Mentalese symbols doesn’t matter for the processing. This is the sense in which, according to CTM, the mind is like a computer.
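
Here is a minimal sketch of the "boxes" heuristic from the last two slides, assuming a toy encoding of Mentalese sentences as tuples of symbols (the sentences, box names, and matching rule are all invented for illustration). Notice that the processing is purely syntactic: it checks only whether a symbol in the belief box also appears in the desire box, and never consults what any symbol means.

```python
# Toy "box" model of functional roles (an illustrative sketch, not a real cognitive model).
# Sentences are tuples of Mentalese symbols; processing inspects only the symbols
# themselves (their syntax), never their meanings.

boxes = {
    "perception": [("COFFEE", "IN", "CUP", "ON", "TABLE")],
    "belief":     [],
    "desire":     [("WANT", "COFFEE")],
    "intention":  [],
}

# Step 1: perceptual sentences get copied into the belief box.
boxes["belief"].extend(boxes["perception"])

# Step 2: if a belief and a desire share a symbol, a new intention sentence is written.
for belief in boxes["belief"]:
    for desire in boxes["desire"]:
        shared = set(belief) & set(desire)   # purely syntactic symbol matching
        if shared:                           # here: {"COFFEE"}
            boxes["intention"].append(("DRINK", "FROM", "CUP", "ON", "TABLE"))

print(boxes["intention"])  # -> [('DRINK', 'FROM', 'CUP', 'ON', 'TABLE')]
```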

The Computational Theory of Mind Usually CTM theories divide the mind into various modules. These are self-contained, specialized computational mechanisms, such as perception, language comprehension, and sensorimotor control.

Back to Turing Machine Functionalism CTM is similar to Turing Machine Functionalism, but it can handle all of the problems that theory faced! Mentalese is an actual language, so it is both systematic and productive: complex Mentalese sentences are composed out of simpler elements, and those elements can be recombined in an infinite number of ways based on syntactic rules, just like English expressions. In addition, the modular structure allows one to be in more than one mental state at a time.

The Tri-Level Hypothesis But what about the Chinese Room? Most proponents of CTM aren’t too worried about this because they already divide their project into three levels of explanation.

Reading Read: Nagel, “What is it like to be a bat?”; Kim, pages 263-277, 301-311, 323-334. Review sheet handed out on Friday.

The Tri-Level Hypothesis There are three levels: the Biological/Physical Level, the Symbolic/Computational Level, and the Knowledge/Semantic Level. All three levels are needed for a complete theory of mind!

The Biological Level Some explanations are carried out in terms of the physical properties of the brain: the effects of alcohol or other drugs, the effects of damage to the brain, and how the computational mechanisms are actually implemented physically. Note: this same sort of thing happens with Turing machines and other computers!

The Symbolic/Computational Level This is what people working in CTM are most interested in. What are the computational/syntactic functions that explain perception, language acquisition, higher reasoning, and so on?

The Symbolic/Computational Level At this level, we are not interested in what the symbols the mind operates over mean. We care only about how they are causally related to one another and how they are processed. In other words: what sorts of computational programs does the brain implement?

The Semantic Level So how do the purely syntactic operations get meaning? This is not well understood and involves some of the deepest philosophical issues. (Sorry!) Here is one (controversial) story.

World-Mind Interaction Many philosophers think that content is acquired through interactions with the world. Take some meaningless symbol in the programming language of some organism’s mind.

World-Mind Interaction The organism constantly bumps into trees. Gradually, the symbol comes to be causally related to trees: it tends to pop up when the organism is around trees, and when it pops up on its own, the organism moves towards trees it has been to in the past.

World-Mind Interaction After even more time, you can “trick” the organism: stimulate it so that the symbol pops up, and it starts acting as if it were around trees. The symbol starts to play a role in explanations of the organism’s behavior and its relation to trees.

World-Mind Interaction At this point, it seems as though the symbol has come to represent trees: it is causally linked to trees in the right way; it can cause behavior and other mental states involving trees; and it can be present even when trees are absent and still play those causal roles. In short, the symbol has become a mental stand-in for trees. In other words, it now means “tree”!
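
A highly simplified simulation of the causal story just told, with made-up numbers: the symbol starts out unconnected to anything, but repeated encounters with trees strengthen the causal link until the symbol reliably fires around trees and so can serve as a stand-in for them. This is only meant to illustrate the shape of the story, not the details of any particular theory of content.

```python
# Toy simulation of a symbol acquiring content through world-mind interaction.
# All numbers and names are invented; the point is only that repeated causal
# contact can make a symbol's activation covary with the presence of trees.

import random
random.seed(0)

association = 0.0  # how strongly the symbol is wired to tree encounters

def encounter(near_tree: bool) -> bool:
    """Return True if the symbol fires on this encounter."""
    global association
    fires = random.random() < association
    if near_tree:
        # Bumping into trees gradually strengthens the causal link.
        association = min(1.0, association + 0.1)
    return fires

# Early on the symbol rarely fires around trees; after many encounters it reliably does.
for _ in range(30):
    encounter(near_tree=True)

print(f"Link strength after repeated encounters: {association:.1f}")
print("Symbol fires near a tree:", encounter(near_tree=True))
```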

Final Points Even if this story isn’t right, it should be clear that CTM is not committed to saying that you can get semantics out of syntax. The purely syntactic operations are very important for understanding the nature of mind, but that is not all there is to the story! Levels 1 and 3 (the biological and semantic levels) are also important and require investigation.

Final Points Fodor calls CTM “the only game in town” for a scientifically plausible theory of the mind: it is physicalist; it accommodates multiple realizability; it provides a robust scientific and philosophical research paradigm; and it gives us an idea of how purely causal physical processes could produce something like a mind. (Actually, Fodor is referring to a far more specific proposal, but the point works just as well for us.)

Final Points Of course, we still don’t know everything about the mind (not even close). There is one issue, the problem of consciousness, that presents a problem for nearly every theory we discussed this quarter.

Final Points But at this point, it seems that any adequate theory of the mind will have some place for computational processes, even though that may not be the whole story.