Split-Brain Studies What do you see? “Nothing” Left Visual Field -> Right Hemisphere -> Can’t Speak
Split-Brain Studies What do you see? “Triangle” Right Visual Field -> Left Hemisphere -> Can Speak
Split-Brain Studies Point to correct match “Heart” Left Visual Field -> Right Hemisphere -> Can Understand Simple Words -> Can Point
Split-Brain Studies Language in both hemispheres “What is your name?” “Bob”
Split-Brain Studies Language in both hemispheres “What do you want to be when you grow up?” “I want to be a director of independent films and then sell out to major studios and make a bundle”
Split-Brain Studies Language in both hemispheres “What do you want to be when you grow up?” “I want to be a professional hockey player”
Split-Brain Studies Language in both hemispheres “What’s your favorite CD?” “Britney Spears”
Split-Brain Studies Language in both hemispheres “What’s your favorite CD?” “Ozzy Osbourne – Heavy Metal”
Split-Brain Studies Two people in this guy’s head? More than two people? How many in your head? “Feels like” one person, but is this an illusion?
Studying “Experience” or “Consciousness” Measurement Problems We measure experience by the subject’s reports Reports of experience are pretty fuzzy We can study our own experience But this is pretty limited What’s it like to be a split-brain patient? The only real way to know is to zap your own cortex There are limits to curiosity
What’s a model? A model is a structure that provides a mapping to the thing being modelled A model of the solar system Big center thing (maps to sun) Small things going around big thing (maps to planets) Leaves some stuff out: paths of objects are elliptical, solar flares
Symbolic Models What are symbols? Symbols are things that represent something else 3 (the number three) FROG
Symbolic Models What are symbols? Symbols are things that represent something else Semantics / Intentionality [diagram: the numbers one through four (things in the world) map to the marks on paper “one”, “two”, “three”, “four”]
Symbolic Models What are symbols? Symbols are things that represent something else isomorphic relationship [diagram: the numbers one through four map one-to-one to the marks on paper “I”, “II”, “III”, “IV”]
Symbolic Models What are symbols? Symbols are things that represent something else homomorphic relationship [diagram: many individual animals map many-to-one to the marks on paper “cat” and “bird”]
Symbolic Models What are symbols? Symbols are things that represent something else quasi-homomorphic relationship (Q-morph) [diagram: animals map to the marks on paper “chicken (bird)” and “chicken (McNuggets)”, with some mappings marked as errors: an approximate many-to-one correspondence]
Symbolic Models What are symbols? Symbols are things that represent something else. A set of symbols and a set of items in the world. A correspondence can be drawn between elements in each set. This correspondence can be: Isomorphic (one-to-one) Homomorphic (many-to-one) Q-morphic (approximate many-to-one, with errors)
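To make the three kinds of correspondence concrete, here is a small Python sketch (an illustration with made-up names, not from the lecture) that writes each mapping from things in the world to marks on paper as a dict, and classifies it by whether distinct items share a symbol:

```python
# A small sketch (illustration only) of the three correspondences,
# each written as a mapping from things in the world to marks on paper.

# Isomorphic (one-to-one): each number gets its own numeral.
iso = {1: "I", 2: "II", 3: "III", 4: "IV"}

# Homomorphic (many-to-one): many individual animals share one word.
homo = {"felix": "cat", "whiskers": "cat", "tweety": "bird"}

# Q-morphic (approximately many-to-one, with errors): the same word
# "chicken" gets used for the bird and for the McNuggets.
q_morph = {"hen": "chicken", "rooster": "chicken",
           "mcnuggets": "chicken"}   # error: the food is not the animal

def kind(mapping):
    """One-to-one if no two keys share a value, else many-to-one."""
    values = list(mapping.values())
    return "one-to-one" if len(values) == len(set(values)) else "many-to-one"

print(kind(iso))    # one-to-one
print(kind(homo))   # many-to-one
```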
Symbolic Models The first symbolic computational model: General Problem Solver (GPS) (by Simon and Newell) But first, an example problem...
Tower of Hanoi Puzzle
Symbolic Models General Problem Solver (GPS) Uses IF-THEN rules, called productions Uses goals, and breaks them into sub-goals. Back to the example problem...
Tower of Hanoi Puzzle Goal: Move stack[1] to peg C Production: disk[1] is not on peg C, and is not free, so subgoal: move stack[2] to peg B [pegs, top to bottom: A = 4,3,2,1 (the whole pile is Stack[1]); B empty; C empty]
Tower of Hanoi Puzzle Goal: Move stack[2] to peg B Production: disk[2] is not on peg B, and is not free, so subgoal: move stack[3] to peg C [pegs: A = 4,3,2,1 (Stack[2] = disks 4,3,2); B empty; C empty]
Tower of Hanoi Puzzle Goal: Move stack[3] to peg C Production: disk[3] is not on peg C, and is not free, so subgoal: move stack[4] to peg B [pegs: A = 4,3,2,1 (Stack[3] = disks 4,3); B empty; C empty]
Tower of Hanoi Puzzle Goal: Move stack[4] to peg B Production: disk[4] is not on peg B, but is free! so move disk[4] to peg B [pegs: A = 4,3,2,1 (Stack[4] = disk 4); B empty; C empty]
Tower of Hanoi Puzzle Goal: Move stack[4] to peg B Production: disk[4] is on peg B, and is free, but there is no stack[5], so we’re done with this goal! [pegs: A = 3,2,1; B = 4; C empty]
Tower of Hanoi Puzzle Previous Goal: Move stack[3] to peg C Production: disk[3] is not on peg C, but it is free, so move disk[3] to peg C [pegs: A = 2,1; B = 4; C = 3]
Tower of Hanoi Puzzle Previous Goal: Move stack[3] to peg C Production: disk[3] is on peg C, and it is free, so subgoal: move stack[4] to peg C [pegs: A = 2,1; B = 4; C = 3]
Tower of Hanoi Puzzle Goal: Move stack[4] to peg C Production: disk[4] is not on peg C, and it is free, so move disk[4] to peg C [pegs before: A = 2,1; B = 4; C = 3; after: A = 2,1; B empty; C = 4,3]
Tower of Hanoi Puzzle Goal: Move stack[4] to peg C Production: disk[4] is on peg C, and is free, but there is no stack[5], so done with this goal! [pegs: A = 2,1; B empty; C = 4,3]
Tower of Hanoi Puzzle Previous Goal: Move stack[3] to peg C Production: disk[3] is on peg C and has stack[4] on top of it, so done with this goal! [pegs: A = 2,1; B empty; C = 4,3]
Tower of Hanoi Puzzle Previous Goal: Move stack[2] to peg B Production: disk[2] is not on peg B, and is free, so move disk[2] to peg B [pegs: A = 1; B = 2; C = 4,3]
Tower of Hanoi Puzzle Previous Goal: Move stack[2] to peg B Production: disk[2] is on peg B, and is free, so set up a subgoal to move stack[3] onto peg B [pegs: A = 1; B = 2; C = 4,3]
Tower of Hanoi Puzzle …and so on… [pegs: A = 1; B = 2; C = 4,3]
Tower of Hanoi Puzzle DONE! [pegs: A and B empty; C = 4,3,2,1]
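The whole walkthrough can be captured in a short recursive sketch. This is a reconstruction in Python (mine, not Newell and Simon’s actual GPS code): the production “disk[n] is not free” becomes the subgoal of moving stack[n+1] out of the way, exactly as on the slides.

```python
# A minimal sketch (a reconstruction, not actual GPS code) of the
# subgoaling strategy above. stack[n] means disk n plus everything on
# top of it; disk 1 is the largest (bottom), disk 4 the smallest (top).

NUM_DISKS = 4

def move_stack(n, source, target, spare, moves):
    if n > NUM_DISKS:                 # "there is no stack[5], so done!"
        return
    # Subgoal: disk[n] is not free, so move stack[n+1] out of the way.
    move_stack(n + 1, source, spare, target, moves)
    moves.append(f"move disk[{n}] from {source} to {target}")
    # Subgoal: put stack[n+1] back on top of disk[n] at the target.
    move_stack(n + 1, spare, target, source, moves)

moves = []
move_stack(1, "A", "C", "B", moves)   # Goal: move stack[1] to peg C
print(len(moves), "moves:")           # 15 moves for 4 disks
print("\n".join(moves))               # first move: disk[4] from A to B
```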
Symbolic Models General Problem Solver (GPS) Uses IF-THEN rules, called productions Uses goals, and breaks them into sub-goals. ACT* Three components: Production Memory (IF-THEN rules) Declarative Memory (facts about the world) Working Memory (information currently being processed)
ACT* [diagram: Working Memory at the center; storage and retrieval link it to Declarative Memory; match and execution link it to Production Memory; perception and action link it to the world]
ACT* [diagram example: perception puts “I’m driving!” and “Stop sign!” into Working Memory; the fact “busy intersection” can be retrieved from Declarative Memory; the production IF (driving) and (stop sign) THEN brake! in Production Memory matches and executes, putting “Brake!” into Working Memory for action]
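As a heavily simplified Python sketch of that cycle (an illustration only; the real ACT* is far richer), working memory can be a set of facts, and a production fires when all of its IF-conditions are present:

```python
# A minimal sketch (a simplification, not the real ACT* implementation)
# of one match-execute cycle: a production whose IF-conditions are all
# present in working memory fires, and its THEN-action is added.

production_memory = [
    ({"driving", "stop sign"}, "brake!"),  # IF (driving) and (stop sign) THEN brake!
]
declarative_memory = {"busy intersection"}  # facts available for retrieval
working_memory = {"driving", "stop sign"}   # current contents, from perception

def cycle(working_memory):
    for conditions, action in production_memory:
        if conditions <= working_memory:    # match: all conditions present
            working_memory.add(action)      # execution: action enters WM
            print("fired:", action)

cycle(working_memory)  # -> fired: brake!
```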
Symbolic Models General Problem Solver (GPS) Uses IF-THEN rules, called productions Uses goals, and breaks them into sub-goals. ACT* Production Memory, Declarative Memory, Working Memory
Symbolic Models: Summary The basic units of computation are symbols Decisions, actions, and problem-solving are accomplished through productions: IF-THEN-like statements in memory Usually they divide memory into different kinds, like production memory, declarative memory, and working memory. All thought is seen as manipulation of symbols in these different areas of memory. WHAT THEY ARE GOOD AT: Problem solving, reasoning, language, logic. WHAT THEY ARE BAD AT: Perception, memory storage and retrieval, low-level processing.
Where do symbols come from? These systems assume symbols are fully formed But where do we get our symbols? We are born with this mass of cortex Somehow we end up with symbols that map to things in the world Big problem We need a model for how we learn things
Artificial Brains? [diagram: a network with an input layer, a hidden layer, and an output layer]
Artificial Neural Nets Some Properties of Artificial Neural Nets Distributed Representation: Ideas, thoughts, concepts, memories, are all represented in the brain as patterns of activation across a large number of neurons. As a result, there is a lot of redundancy in neural representation. Graceful Degradation: Performance of the system decreases gradually as the system is damaged. Learning: When neurons are active at the same time, the strength of the connection between them increases. In artificial nets, this is called the Hebb Rule.
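The Hebb Rule part lends itself to a two-line update. Here is a rough numpy sketch (the learning rate and the pattern are arbitrary choices, not from the lecture):

```python
# A minimal sketch of the Hebb Rule as stated on the slide: when two
# units are active together, the connection between them strengthens.

import numpy as np

n = 5
weights = np.zeros((n, n))           # connection strengths between units
learning_rate = 0.1                  # arbitrary choice for illustration

pattern = np.array([1, 0, 1, 1, 0])  # a pattern of activation

# Hebbian update: delta_w[i][j] is proportional to activity_i * activity_j.
weights += learning_rate * np.outer(pattern, pattern)
np.fill_diagonal(weights, 0)         # no self-connections

print(weights)  # units 0, 2, 3 (co-active) are now linked
```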
Artificial Neural Nets More Properties of Artificial Neural Nets Generalization: Because of how the network learns, and its distributed representation, it can respond to inputs that it was never officially trained on, generalizing based on similarity to things it was trained on. Distributed Processing: Not only representation, but processing is distributed, too, so there is no central controlling function, or CPU, in the brain. It is more cooperative.
Another Approach Let’s look at the biology of the eye! The first artificial neural model: The Perceptron
The Basic Perceptron [diagram: a two-input unit computing a AND b, with a plot of the input space in which only (1,1) is active]
The Basic Perceptron [diagram: a two-input unit computing a OR b, with a plot of the input space in which (0,1), (1,0), and (1,1) are active]
The Basic Perceptron [diagram: a two-input unit attempting a XOR b, with a plot of the input space in which (0,1) and (1,0) are active]
The Basic Perceptron [diagram: the a AND b plot; the active and inactive points are linearly separable]
The Basic Perceptron [diagram: the a OR b plot; the active and inactive points are linearly separable]
The Basic Perceptron [diagram: the a XOR b plot; the active and inactive points are NOT linearly separable]
The Basic Perceptron What the perceptron can’t do: Exclusive OR [diagram: the XOR input space; no single line separates the active points from the inactive ones]
The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination [table: total inputs 0 1 2 3 4 5 6 7 -> output 1 0 1 0 1 0 1 0]
The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination Inside/Outside discrimination
The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination Inside/Outside discrimination Open/Closed discrimination
The Basic Perceptron BIG PROBLEMS What the perceptron can’t do: Exclusive OR Even/Odd discrimination Inside/Outside discrimination Open/Closed discrimination
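A small Python demonstration of the first problem (an illustration; the unit is a simple threshold unit with weights starting at zero): the perceptron learning rule converges on AND, which is linearly separable, but never on XOR.

```python
# A minimal sketch of perceptron learning (Rosenblatt's rule) showing
# that a single threshold unit learns AND but never settles on XOR.

import numpy as np

def train_perceptron(targets, epochs=100):
    x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    w = np.zeros(2)
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for inputs, target in zip(x, targets):
            output = int(inputs @ w + b > 0)
            w += (target - output) * inputs   # perceptron learning rule
            b += (target - output)
            errors += int(output != target)
        if errors == 0:
            return "learned"
    return "failed"

print("AND:", train_perceptron([0, 0, 0, 1]))  # learned (linearly separable)
print("XOR:", train_perceptron([0, 1, 1, 0]))  # failed (not separable)
```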
The Multi-Layer Perceptron [diagram: a network with a hidden layer computing a XOR b]
The Multi-Layer Perceptron [diagram: a hidden AND unit connects to the output with a negative (-) weight, and a hidden OR unit with a positive (+) weight; the output unit computes a XOR b]
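Here is a minimal Python sketch of that construction (the thresholds are assumptions; the slide shows only the signs of the connections): the output fires when the OR unit is on and the AND unit is off.

```python
# A minimal sketch of the XOR construction on the slide: an OR unit
# excites the output (+) and an AND unit inhibits it (-), so the output
# fires only when a OR b is true but a AND b is not.

def unit(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) > threshold)

def xor(a, b):
    hidden_or  = unit([a, b], [1, 1], 0.5)    # fires if a or b
    hidden_and = unit([a, b], [1, 1], 1.5)    # fires only if a and b
    return unit([hidden_or, hidden_and], [1, -1], 0.5)  # OR minus AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # outputs 0, 1, 1, 0
```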
The Three-Layer Network [diagram: input layer, hidden layer, output layer]
The Two-Layer Network [diagram: input layer, output layer]
The Three-Layer Network [diagram: input layer, hidden layer, output layer]
The Three-Layer Network [diagram: input layer, hidden layer, output layer, with the hidden-layer connections marked with question marks: how should these weights be set?]
The Three-Layer Network [diagram: input layer, hidden layer, output layer] Back-propagation is the learning procedure that allows you to adjust the weights in multi-layer networks to train them to respond correctly.
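As a rough illustration of back-propagation (sigmoid units and arbitrary hyperparameters chosen here, not taken from the lecture), a 2-2-1 network can learn XOR:

```python
# A minimal back-propagation sketch: the output error is propagated
# backward through the hidden layer so both weight layers can be adjusted.

import numpy as np

rng = np.random.default_rng(1)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

w1 = rng.normal(size=(2, 2))   # input -> hidden weights
b1 = np.zeros(2)
w2 = rng.normal(size=(2, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):
    # Forward pass.
    h = sigmoid(x @ w1 + b1)
    y = sigmoid(h @ w2 + b2)
    # Backward pass: error at the output, then propagated to the hidden layer.
    dy = (y - t) * y * (1 - y)
    dh = (dy @ w2.T) * h * (1 - h)
    # Gradient-descent weight updates (learning rate 0.5).
    w2 -= 0.5 * h.T @ dy
    b2 -= 0.5 * dy.sum(axis=0)
    w1 -= 0.5 * x.T @ dh
    b1 -= 0.5 * dh.sum(axis=0)

print(np.round(y.ravel(), 2))  # approaches [0, 1, 1, 0]
```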
The Hopfield Network (not all connections are shown)
The Hopfield Network Pattern Completion [demo: the network completes the degraded pattern “THE END OF CLASS IS NEAR”]
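A minimal Python sketch of pattern completion (the stored pattern is arbitrary, not the one from the demo): store one pattern with a Hebbian outer product, flip a few units, and let repeated updates pull the state back.

```python
# A minimal Hopfield-network sketch: Hebbian storage by outer product,
# then completion of a corrupted pattern. Units take values +1/-1.

import numpy as np

rng = np.random.default_rng(0)
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = len(pattern)

# Hebbian storage: strengthen connections between co-active units.
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0)       # no self-connections

# Corrupt the pattern, then update units until the state settles.
state = pattern.copy()
state[:3] *= -1                    # flip 3 of the 8 units
for _ in range(5):                 # a few asynchronous update sweeps
    for i in rng.permutation(n):
        state[i] = 1 if weights[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))  # True: the pattern is completed
```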
Quick Summary Symbolic Models: Basic Units are Symbols Local Representation: Individual elements have meaning Use productions, goals (means-ends analysis), working memory Two examples are GPS and ACT* Are good at problem solving, reasoning, logic, and language Neural Network Models: Basic Units are artificial neurons Distributed Representation: Patterns across elements have meaning Use activation, learning (Hebb Rule, Backpropagation) Two examples are layered networks and Hopfield networks Are good at pattern matching, memory storage/retrieval