1
Some Aspects of the History of Cognitive Science
Jay McClelland, Symbolic Systems 100, Spring 2010
2
Descartes’ Legacy
Mechanistic approach to sensation and action
Divine inspiration creates mind
This leads to three dissociations:
Mind / Brain
Higher cognitive functions / Sensory-motor systems
Human / Animal
Today we accept that the mind arises from brain activity. Some cognitive scientists ground their models of mind in the workings of the brain, while others continue to view cognition abstractly.
3
Early History of the Study of Human Mental Processes
Introspectionism (Wundt, Titchener): thought as conscious content, but two problems:
Suggestibility
Gaps
Freud suggests that mental processes are not all conscious
Behaviorists (Watson, Skinner) eschew talk of mental processes altogether
4
Can Experiments Teach Us About the Contents of the Mind?
Conrad: Verbal coding in short-term memory
Sachs: Representation of meaning in long-term memory
5
Conrad’s Experiment
You will see a series of letters. Try to remember them so that, when you see the word ‘Recall’, you can write them down in the correct order. There will be six letters, followed by a brief delay, then the word ‘Recall’ will appear. When it does, write down the letters in order, starting with the first letter and proceeding through the list.
6–17
[Stimulus slides for the demonstration: a fixation point, then the letters B, M, S, F, V, N shown one per slide, followed by blank slides, the word ‘Recall’, and finally the answer: B M F S V N]
18
Sachs’ Experiment
Participants heard a story containing a sentence such as:
He sent Galileo, the great Italian Scientist, a letter about it.
Either immediately, or after reading a few more sentences, the participants were asked which of the following sentences they had heard:
The original sentence, as above
He sent a letter about it to Galileo, the great Italian Scientist.
Galileo, the great Italian Scientist, sent him a letter about it.
When tested immediately, nearly all participants chose the correct sentence. After a delay, many participants chose the second sentence (which preserves the meaning), but no one chose the third (which changes it).
19
A Question: What sort of a mechanism should we use to capture the processes that underlie human thought? A mechanism like the brain? Or a mechanism like a computer?
21
The McCulloch-Pitts Neuron
[Figure: neuron i receives the output of neuron j through weight wij; the unit fires (output 1) when its summed input exceeds its threshold.]
McCulloch-Pitts neurons can be used to compute logical functions, such as A-AND-B, A-OR-B, A-AND-NOT-B, etc.
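As a concrete illustration of this claim (not part of the original slides), here is a minimal sketch of a McCulloch-Pitts unit in Python; the particular weights and thresholds for AND, OR, and AND-NOT are one possible assignment for binary inputs.

```python
def mp_neuron(inputs, weights, threshold):
    # McCulloch-Pitts unit: fire (output 1) if the weighted sum of the
    # binary inputs reaches the threshold, otherwise output 0.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# One possible choice of weights and thresholds for some logical functions.
def AND(a, b):     return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):      return mp_neuron([a, b], [1, 1], threshold=1)
def AND_NOT(a, b): return mp_neuron([a, b], [1, -1], threshold=1)   # A and not B

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), AND_NOT(a, b))
```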
22
The Perceptron
23
Problems for the Perceptron
Depends crucially on the φi
Some functions require an exponential number of φi
No one figured out how to train the weights coming in to the φi
All the possible φi that might ever be needed had to be provided in advance
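To make the role of the fixed φi concrete, here is a small illustrative sketch (not from the slides): a perceptron whose hand-chosen feature functions φi feed a single trainable output unit. The particular features, data, and learning settings are assumptions for the example; the point is that only the output weights are learned, so success depends on the φi supplied in advance.

```python
# Fixed, hand-chosen feature functions phi_i: the perceptron cannot change these.
phis = [
    lambda x: x[0],          # phi_1: first input
    lambda x: x[1],          # phi_2: second input
    lambda x: x[0] * x[1],   # phi_3: conjunctive feature, supplied in advance
]

def predict(weights, bias, x):
    s = sum(w * phi(x) for w, phi in zip(weights, phis)) + bias
    return 1 if s >= 0 else 0

def train(data, epochs=25, lr=1.0):
    # Classic perceptron rule: only the weights on the phi_i (and the bias) change.
    weights, bias = [0.0] * len(phis), 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * phi(x) for w, phi in zip(weights, phis)]
            bias += lr * error
    return weights, bias

# XOR is learnable here only because phi_3 was provided by hand; without it,
# no setting of the output weights can solve the problem.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train(xor_data)
print([predict(w, b, x) for x, _ in xor_data])  # expected: [0, 1, 1, 0]
```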
24
The Rise of Symbolic Computation
Mathematics and logic grew up around the use of symbols: marks on paper that stand for things.
Computer programs that do math and logic make use of symbols too.
Rules of mathematics and logic can be expressed as statements about symbols: ‘If p then q’ and ‘p’ together imply ‘q’.
So symbolic models seemed like they might be an effective way of using computers to model human reasoning.
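To make “rules as statements about symbols” concrete, here is a minimal, hypothetical sketch of modus ponens applied by symbol manipulation; the rule format and the facts are invented for illustration.

```python
# Minimal forward chaining over symbolic rules (illustrative sketch).
# A rule ("p", "q") means "if p then q" and fires whenever p is a known fact.
rules = [("socrates_is_human", "socrates_is_mortal"),
         ("socrates_is_mortal", "socrates_will_die")]
facts = {"socrates_is_human"}

changed = True
while changed:                      # apply modus ponens until nothing new follows
    changed = False
    for p, q in rules:
        if p in facts and q not in facts:
            facts.add(q)            # from 'if p then q' and 'p', conclude 'q'
            changed = True

print(sorted(facts))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```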
25
But AI Didn’t Live Up to Its Promise Either
Computers could do math and logic, but they couldn’t:
Recognize objects
Recognize speech
Understand sentences
Retrieve relevant information from memory
Was there something wrong with the specific models or languages people were using, or was there something wrong with the whole approach?
27
Ubiquity of the Constraint Satisfaction Problem
In sentence processing:
I saw the Grand Canyon flying to New York.
I saw the sheep grazing in the field.
In comprehension:
Margie was sitting on the front steps when she heard the familiar jingle of the “Good Humor” truck. She remembered her birthday money and ran into the house.
In reaching, grasping, typing…
28
Graded and variable nature of neuronal responses
29
Lateral Inhibition in Eye of Limulus (Horseshoe Crab)
30
Neural Network Models of Cognition: The Interactive Activation Model
31
Solving the Problem of Learning in Multi-Layer Perceptrons
Replace the threshold function with a smooth function that has a derivative, so that activation is a matter of degree.
Now we can calculate the effect of changing each weight on the network’s error – the difference between Ψ and Ω.
We change each weight in the direction that reduces the error, making larger changes to weights that have a larger impact on the error.
This allows us to adapt the weights coming into each φi, as well as the weights from the φi to the output.
Now neural networks can learn to compute any computable function: ‘hidden units’ can adapt to do what’s needed, and even multi-layer networks of such units can be trained.
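As an illustrative sketch of the idea described above (not the original model), the code below trains a small multi-layer network of smooth sigmoid units by gradient descent on the XOR problem, which a single-layer perceptron cannot represent from the raw inputs; the architecture, learning rate, and random seed are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Smooth, differentiable replacement for the hard threshold.
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a function a single-layer perceptron cannot compute from the raw inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units whose incoming weights are also learned.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass with graded activations.
    h = sigmoid(X @ W1 + b1)                        # hidden-unit activations
    y = sigmoid(h @ W2 + b2)                        # network output

    # Backward pass: how much does each weight affect the squared error?
    delta_out = (y - t) * y * (1 - y)               # error signal at the output
    delta_hid = (delta_out @ W2.T) * h * (1 - h)    # propagated back to hidden units

    # Move each weight in the direction that reduces the error.
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y.ravel(), 2))  # typically close to [0, 1, 1, 0]
```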
32
Implicit and Explicit Knowledge
Knowledge in a neural network is implicit knowledge – it produces outputs but it cannot itself be inspected.
We also have explicit knowledge – facts that we can report and use in reasoning. While these ultimately have a neural basis as well, it is often useful to use symbolic models to represent them.
Just how much we rely on each of these two kinds of knowledge is very difficult to assess. Verbal reports can be based on theories we have about our behavior, and may not reflect the factors that actually determine what we do.
33
Newer Directions
Reasoning with uncertain information: probabilistic models of cognition
Cognition as an embodied process, tied to experience and action
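As one small, hypothetical illustration of reasoning with uncertain information, the sketch below applies Bayes’ rule to a made-up word-recognition example; all of the numbers are invented for the demonstration.

```python
# Bayes' rule on a toy word-recognition example (all numbers are made up):
# update belief about which word was heard, given noisy acoustic evidence.
priors = {"bat": 0.6, "pat": 0.4}          # P(word)
likelihood = {"bat": 0.2, "pat": 0.7}      # P(evidence | word)

unnormalized = {w: priors[w] * likelihood[w] for w in priors}
total = sum(unnormalized.values())
posterior = {w: p / total for w, p in unnormalized.items()}

print(posterior)   # the evidence shifts belief toward 'pat' despite the prior
```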
34
Levels of Analysis (from David Marr)
Computational level
What is the problem to be solved? What information is available to solve it? What would be the optimal solution?
Algorithms, representations, and implementation
What algorithms and representations are used in solving the problem? How are these algorithms implemented?