Computational Theory of Mind
Pylyshyn’s Starting Point “The most remarkable property of human behavior is that in order to capture what is systematic about behavior involving intelligence it is necessary to recognize equivalence classes of causal events that cannot be characterized using the terms of existing natural sciences.” (191)
Two Major Breakthroughs Pylyshyn emphasizes two major breakthroughs in Cognitive Science: the discovery of subconscious mental states, and the discovery of computers.
Two Major Breakthroughs Appreciating the existence of subconscious mental states allows us to appeal to complicated “under the hood” processes to explain intelligent behavior and cognitive capabilities.
Two Major Breakthroughs The discovery of the computer allowed us to understand how “reasoning” could be carried out by purely mechanical processes. By “reasoning” here, Pylyshyn means something like “information processing” or “transitions among informational states.”
A Digression into Vision Science One of the most successful explanatory uses of subconscious mental states comes from vision science.
The Underdetermination Problem Any given pattern of light hitting your retinas is compatible with an infinite number of ways the world might actually be. Your visual system must somehow take information that does not entail any particular state of the environment and reliably produce correct representations.
The Underdetermination Problem The astounding thing is that your visual system almost always gets it right! How? This question is what is known in vision science as “the underdetermination problem” or “the inverse problem.”
Unconscious Inferences Helmholtz (1867) proposed that your visual system does this by: (1) making some assumptions about the environment, and (2) drawing inferences from those assumptions plus retinal stimulation to conclusions about the world.
Unconscious Inferences One way to see that he is right about this is to look at some cases where your visual system gets it wrong. These are known as perceptual illusions.
Crater Illusion
Hollow Face Illusion
Kanizsa Triangle
Checkerboard Illusion
Visual System as a Computer Think of the visual system as a computer running a program in your subconscious: it takes inputs (retinal stimulations), it engages in a set of automatic computations beyond your control, and it produces an experience of the world from these processes. Such a theory accounts for our visual system’s reliability as well as for why we see the illusions that we do!
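The Helmholtzian inference behind the crater illusion can be sketched as a toy program. Everything here (the function name, the “light from above” constant, the input encoding) is an illustrative invention, not a real model from vision science:

```python
# Toy sketch of Helmholtzian unconscious inference (illustrative only).
# The retinal input alone underdetermines the scene: a bump lit from above
# and a crater lit from below produce the very same shading pattern.

def interpret_shading(bright_edge: str) -> str:
    """Infer 3-D shape from 2-D shading, given a built-in assumption."""
    # Built-in assumption of the visual system: light comes from above.
    ASSUMED_LIGHT_SOURCE = "above"
    if bright_edge == ASSUMED_LIGHT_SOURCE:
        return "convex (a bump)"     # top edge lit + light from above => bump
    else:
        return "concave (a crater)"  # bottom edge lit + light from above => crater

# The same ambiguous input always gets the same verdict -- which is why
# flipping a crater photograph upside down makes it look like a hill.
print(interpret_shading("above"))  # convex (a bump)
print(interpret_shading("below"))  # concave (a crater)
```

The ambiguity is resolved not by the input but by the assumption baked into the program, which is also why the system gets fooled in exactly the cases where the assumption fails.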
The Computational Theory of Mind Fodor hypothesized that there is a basic programming language for our minds. He called it the “Language of Thought,” or “Mentalese” for short. Mentalese symbols have meaning; Mentalese has a grammar and construction rules just like a natural language; and, like a natural language, it is both productive and systematic. But the operations the mind performs over Mentalese symbols are purely syntactic (like a computer’s). The Language of Thought is like the native programming language of the mind.
The Computational Theory of Mind Mental states are sentences in the language of thought. Whether a particular Mentalese sentence S is a belief, desire, or perception depends on its functional role in the overall system. A useful heuristic is to think of the sentences being moved around to different “boxes.”
The Computational Theory of Mind Suppose I have a Mentalese sentence that means “cup on the table.” Because of a certain retinal stimulation, this goes into my perception box. A series of computations quickly transfers it to my belief box. Once in the belief box, it can interact with sentences in the desire box, like “desire for coffee.” Those two sentences cause a third Mentalese sentence to be written down in my intention box: “Drink from the cup on the table.”
The Computational Theory of Mind How the sentences get moved around and interact is based purely on their syntactic elements. Symbols shared between the belief and the desire are matched, and this automatically causes the intention. The meaning of a Mentalese symbol doesn’t matter for the processing. This is the sense in which the mind is like a computer, according to CTM.
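The “boxes” heuristic can be sketched as a toy program. Everything here (the tuple encoding of sentences, the `transfer` and `form_intentions` rules, the `DRINK-FROM` symbol) is an illustrative invention, not Fodor’s actual proposal; the point is only that the rules fire on symbol shapes, never on meanings:

```python
# Toy model of the "boxes" heuristic (illustrative only).
# Mentalese sentences are just tuples of symbols; all processing below is
# purely syntactic: rules compare symbol shapes, never what symbols mean.

def transfer(src, dst):
    """Move every sentence from one box to another (perceiving -> believing)."""
    dst.extend(src)
    src.clear()

def form_intentions(beliefs, desires, intentions):
    """Purely syntactic rule: if a belief and a desire share any symbol,
    write a new sentence into the intention box."""
    for b in beliefs:
        for d in desires:
            if set(b) & set(d):  # match on shared symbol shapes only
                intentions.append(("DRINK-FROM",) + b)

perception_box = [("CUP", "ON", "TABLE")]
desire_box = [("WANT", "COFFEE", "IN", "CUP")]
belief_box, intention_box = [], []

transfer(perception_box, belief_box)                    # perception becomes belief
form_intentions(belief_box, desire_box, intention_box)  # belief + desire -> intention
print(intention_box)  # [('DRINK-FROM', 'CUP', 'ON', 'TABLE')]
```

Nothing in the program knows that `CUP` means cup; the intention is produced by shape-matching alone, which is exactly the sense in which the processing is “like a computer.”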
The Computational Theory of Mind Usually CTM theories divide the mind into various modules. These are self-contained, specialized computational mechanisms: perception, language comprehension, and sensorimotor control.
Back to Turing Machine Functionalism CTM is similar to Turing Machine Functionalism, but it can handle all of the problems that theory faced! Because Mentalese is an actual language, it is both systematic and productive: complex Mentalese sentences are composed out of simpler elements and can be recombined in an infinite number of ways according to syntactic rules, just like English expressions. And the modular structure allows that one can be in more than one mental state at a time.
The Tri-Level Hypothesis But what about the Chinese Room? Most proponents of CTM aren’t too worried about this because they already divide their project into three levels of explanation.
Reading Read: Nagel, “What is it like to be a bat?”; Kim, pages 263-277, 301-311, 323-334. Review sheet handed out on Friday.
The Tri-Level Hypothesis The Biological/Physical Level; The Symbolic/Computational Level; The Knowledge/Semantic Level. All three levels are needed for a complete theory of mind!
The Biological Level Some explanations are carried out in terms of the physical properties of the brain: the effects of alcohol or other drugs, the effects of damage to the brain, and how the computational mechanisms are actually implemented physically. Note: this same sort of thing happens with Turing Machines and other computers!
The Tri-Level Hypothesis The Biological/Physical Level; The Symbolic/Computational Level; The Knowledge/Semantic Level. All three levels are needed for a complete theory of mind!
The Symbolic/Computational Level This is what people working in CTM are most interested in. What are the computational/syntactic functions that explain perception, language acquisition, higher reasoning, and so on?
The Symbolic/Computational Level At this level, we are not interested in what the symbols the mind operates over mean. We only care how they are causally related to one another and how they are processed. In other words: what sorts of computational programs does the brain implement?
The Semantic Level So how do the purely syntactic operations get meaning? This is not well understood and involves some of the deepest philosophical issues. (Sorry!) Here is one (controversial) story.
World-Mind Interaction Many philosophers think that content is acquired by interactions with the world. Take some meaningless symbol in the programming language of some organism’s mind.
World-Mind Interaction The organism constantly bumps into trees. Gradually the symbol comes to be causally related to trees: it tends to pop up when the organism is around trees, and when it pops up on its own, the organism moves toward places where it has encountered trees in the past.
World-Mind Interaction After even more time, you can “trick” the organism: stimulate it so that the symbol pops up, and it starts acting as if it were around trees. The symbol now plays a role in explanations of the organism’s behavior and its relation to trees.
World-Mind Interaction At this point, it seems as though the symbol has come to represent trees: it is causally linked to trees in the right way; it can cause behavior and other mental states involving trees; and it can be present even when trees are absent and still play those causal roles. In short, the symbol has become a mental stand-in for trees: it now means “tree”!
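The causal-correlation story above can be sketched as a toy program. Everything here (the `#17` symbol, the co-occurrence history, the `meaning_of` rule) is an illustrative invention; real causal theories of content, such as Fodor’s asymmetric-dependence account, are far more subtle:

```python
# Toy sketch of the causal-correlation story of content (illustrative only).
from collections import Counter

def learn_association(history):
    """Count how often each symbol fires alongside each kind of object."""
    counts = Counter()
    for symbol, obj in history:
        counts[(symbol, obj)] += 1
    return counts

def meaning_of(symbol, counts):
    """Crude rule: the symbol 'means' whatever most reliably causes it."""
    candidates = {obj: n for (s, obj), n in counts.items() if s == symbol}
    return max(candidates, key=candidates.get) if candidates else None

# The organism keeps bumping into trees; symbol #17 mostly fires near them,
# with an occasional misfire near a rock.
history = [("#17", "tree")] * 8 + [("#17", "rock")]
counts = learn_association(history)
print(meaning_of("#17", counts))  # tree
```

The occasional misfire near a rock is deliberate: the “trick the organism” cases are just such misfirings, and a full theory has to explain why they count as errors about trees rather than as correct detections of trees-or-rocks.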
Final Points Even if this story isn’t right, it should be clear that CTM is not committed to saying that you can get semantics out of syntax. The purely syntactic operations are very important for understanding the nature of mind, but they are not all there is to the story! The biological and semantic levels are also important and require investigation.
Final Points Fodor calls CTM “the only game in town” for a scientifically plausible theory of the mind: it is physicalist; it accommodates multiple realizability; it provides a robust scientific and philosophical research paradigm; and it gives us an idea of how purely causal physical processes could produce something like a mind. (Actually, he is referring to a far more specific proposal, but it works just as well for us.)
Final Points Of course, we still don’t know everything about the mind (not even close). And there is one problem, the problem of consciousness, that presents a challenge for nearly every theory we have discussed this quarter.
Final Points But, at this point it seems like any adequate theory of the mind will have some place for computational processes, even though that may not be the whole story.