Integrating New Findings into the Complementary Learning Systems Theory of Memory Jay McClelland, Stanford University.


Effects of Hippocampal Lesions in Humans
– Intact performance on tests of general intelligence, world knowledge, language, digit span, …
– Dramatic deficits in the formation of some types of new memories
– Spared implicit learning
– Temporally graded retrograde amnesia

Why Are There Complementary Learning Systems?
– The hippocampus uses sparse distributed representations to minimize interference among memories and allow rapid new learning.
– The neocortex uses dense distributed representations that promote generalization along meaningful lines, but learning proceeds very gradually.
– Working together, these systems allow us to learn both the shared structure underlying experiences in a domain and the details of specific experiences, without new learning interfering with knowledge of shared structure.
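The sparse-coding point can be made concrete with a toy calculation (my own sketch, not a model from the talk): randomly chosen sparse patterns share far fewer active units than dense ones, so memories stored on sparse codes overlap less and therefore interfere less with one another.

```python
import numpy as np

# Toy illustration (not from the talk): compare the overlap of two random
# sparse patterns with the overlap of two random dense patterns. The pool
# size and activity levels are arbitrary choices for illustration.
rng = np.random.default_rng(0)
n = 1000  # units in the pool

def random_pattern(n_active):
    v = np.zeros(n)
    v[rng.choice(n, n_active, replace=False)] = 1.0
    return v

def overlap(a, b):
    # fraction of a's active units that are also active in b
    return float((a * b).sum() / a.sum())

sparse_overlap = overlap(random_pattern(50), random_pattern(50))    # ~5% active
dense_overlap = overlap(random_pattern(500), random_pattern(500))   # ~50% active
```

The expected overlap fraction approximately equals the activity level, so the sparse pair shares around 5% of its units where the dense pair shares around 50%.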

A Model of Neocortical Learning (Rumelhart, 1990; McClelland et al., 1995)
– Relies on distributed representations capturing aspects of meaning that emerge through a very gradual learning process.
– The progression of learning and the representations formed capture many aspects of cognitive development:
  – Differentiation of concept representations
  – Generalization of learning to new concepts
  – Illusory correlations and overgeneralization
  – Domain-specific variation in the importance of feature dimensions
  – Reorganization of conceptual knowledge

The Rumelhart Model

The Training Data: All propositions true of items at the bottom level of the tree, e.g.: Robin can {grow, move, fly}

Target output for ‘robin can’ input

Forward Propagation of Activation
net_i = Σ_j a_j w_ij
a_i = logistic(net_i) = 1 / (1 + e^(-net_i))
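In code, the forward pass on this slide is a weighted sum followed by the logistic function. This is a minimal sketch using the slide's notation; the two-unit layer sizes and weight values are arbitrary illustrations.

```python
import numpy as np

# Forward propagation: each receiving unit i sums its weighted inputs
# (net_i = sum_j a_j * w_ij) and squashes the result through the logistic.
def logistic(net):
    return 1.0 / (1.0 + np.exp(-net))

def forward(a_j, w_ij):
    net_i = w_ij @ a_j      # net inputs to the receiving units
    return logistic(net_i)  # a_i = logistic(net_i)

a_j = np.array([1.0, 0.0])               # activations of the sending units
w_ij = np.array([[0.5, -0.5],
                 [0.0,  1.0]])           # w_ij[i, j]: weight from unit j to unit i
a_i = forward(a_j, w_ij)                 # receiving-unit activations
```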

Back Propagation of Error (δ)
Error-correcting learning:
– At the output layer: δ_k ∝ (t_k - a_k), and Δw_ki = ε δ_k a_i
– At the prior layer: δ_i ∝ Σ_k δ_k w_ki, and Δw_ij = ε δ_i a_j

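The two learning rules above can be sketched as a tiny training loop. This is my own compact illustration on a made-up 2-3-2 network and single pattern (sizes, seed, and learning rate are arbitrary, not values from the talk); the deltas include the logistic derivative a(1 - a), as in standard backpropagation.

```python
import numpy as np

def logistic(net):
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(0)
eps = 0.5                                    # learning rate (epsilon above)
w_ij = rng.normal(0.0, 0.1, (3, 2))          # input j  -> hidden i
w_ki = rng.normal(0.0, 0.1, (2, 3))          # hidden i -> output k

a_j = np.array([1.0, 0.0])                   # input pattern (made up)
t_k = np.array([1.0, 0.0])                   # target pattern

for _ in range(200):
    a_i = logistic(w_ij @ a_j)                        # forward pass
    a_k = logistic(w_ki @ a_i)
    delta_k = (t_k - a_k) * a_k * (1.0 - a_k)         # output-layer error signal
    delta_i = (w_ki.T @ delta_k) * a_i * (1.0 - a_i)  # error backpropagated to hidden
    w_ki += eps * np.outer(delta_k, a_i)              # delta_w_ki = eps * delta_k * a_i
    w_ij += eps * np.outer(delta_i, a_j)              # delta_w_ij = eps * delta_i * a_j
```

After repeated updates the output activations move toward the target, which is all the later simulations in the talk rely on.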
[Figure: network representations of experience at three stages of training – Early, Later, Later Still]

Train network with sparrow-isa-bird

It learns a representation similar to other birds…

Use the representation to infer what this new thing can do.

Complementary Learning Systems (McClelland et al., 1995; Marr, 1971)
[Figure: neocortical network with pools for color, form, motion, action, valence, and name, linked through the temporal pole, together with the medial temporal lobe system]

Disintegration of Conceptual Knowledge in Semantic Dementia
– Progressive loss of specific knowledge of concepts, including their names, with preservation of general information
– Overgeneralization of frequent names
– Illusory correlations: overgeneralization of domain-typical properties

Picture Naming and Drawing in Semantic Dementia

Rogers et al. (2005) Model of Semantic Dementia
– Gradually learns through exposure to input patterns derived from norming studies.
– Representations in the integrative layer are acquired through the course of learning.
– After learning, the network can activate each other type of information from name or visual input.
– Representations undergo progressive differentiation as learning progresses.
– Damage to units within the integrative layer leads to the pattern of deficits seen in semantic dementia.
[Figure: network with name, association, and function layers linked to vision through an integrative layer]

Errors in Naming as a Function of Severity
[Figure: patient data plotted by severity of dementia and simulation results by fraction of neurons destroyed, showing omissions, within-category errors, and superordinate errors]

Simulation of Delayed Copying
– Visual input is presented, then removed.
– After several time steps, the pattern is compared to the pattern that was presented initially.
– Omissions and intrusions are scored for typicality.
[Figure: network with name, association, and function layers linked to vision through the temporal pole]

Simulation Results
[Figure: omissions and intrusions by feature type, alongside patient IF’s ‘camel’ and patient DC’s ‘swan’ drawings]

Adding New Inconsistent Information to the Neocortical Representation
– Penguin is a bird
– Penguin can swim, but cannot fly

Catastrophic Interference and Avoiding it with Interleaved Learning
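The interference problem and its interleaving remedy can be shown in miniature. This is my own illustrative sketch with made-up patterns and a plain one-layer delta rule, not the simulations from the talk: items A and B share an input unit, so focused training on B overwrites part of what was learned for A, while interleaving preserves both.

```python
import numpy as np

# Catastrophic interference sketch: train a shared-weight layer either
# sequentially (all of A, then only B) or with A and B interleaved.
def train(items, order, eps=0.1):
    w = np.zeros((2, 3))
    for idx in order:
        x, t = items[idx]
        w += eps * np.outer(t - w @ x, x)    # delta rule update
    return w

def sq_error(w, item):
    x, t = item
    return float(((t - w @ x) ** 2).sum())

# Two items with overlapping input patterns but different targets.
item_a = (np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0]))
item_b = (np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0]))
items = [item_a, item_b]

w_focused = train(items, [0] * 100 + [1] * 100)   # A first, then only B
w_interleaved = train(items, [0, 1] * 100)        # A and B interleaved
```

After focused training on B, error on item A is large (the shared weight has been repurposed); after interleaved training, both items are learned well.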

Complementary Learning Systems Theory (McClelland et al., 1995; Marr, 1971)
[Figure: neocortical network with pools for color, form, motion, action, valence, and name, linked through the temporal pole, together with the medial temporal lobe system]

Challenges for CLS
– If extraction of generalizations depends on gradual learning, how do we form generalizations and inferences shortly after initial learning?
– Why do some studies find evidence consistent with the view that an intact MTL facilitates certain types of generalization in memory?
– How can we explain new findings showing that new information can sometimes be consolidated into neocortical representations quickly?

REMERGE: Recurrence and Episodic Memory Result in Generalization (Kumaran & McClelland, 2012)
– Holds that several MTL-based item representations may work together through recurrent activation to produce generalization and inference
– Draws on classic exemplar models (Medin & Schaffer, 1978; Nosofsky, 1984)
– Extends these models by allowing similarity between stored items to influence performance, independent of direct activation by the probe (McClelland, 1981)
– Demonstrates the strong dependence of some forms of generalization and inference on the strength of learning for trained items

What REMERGE Adds to Exemplar Models

Recurrence allows similarity between stored items to influence memory, independent of direct activation by the probe.

Neural Network Model, Exemplar Model, or Probabilistic Model?
– REMERGE was initially built on the IAC model, a neural network/connectionist model.
– But the same principles can be captured in an exemplar model formulation, which in turn is closely related to an explicitly Bayesian formulation.
– In fact, there are now two versions of the model (IAC, GCM), and a probabilistic version is on its way.

GCM-like Version of REMERGE
– Choice rule
– Input from other units
– Hedged softmax activation function
– Logistic activation function
[Equations shown on slide]
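For readers unfamiliar with the GCM, here is a sketch of the standard ingredients the GCM-like formulation builds on. This is textbook Nosofsky-style GCM, not the exact equations from the slide (which add recurrent input between traces): similarity to each stored exemplar decays exponentially with distance, and choice follows a Luce ratio rule over summed similarities; the exemplars, specificity parameter, and probe are made up.

```python
import numpy as np

# Generalized context model sketch: exponential similarity gradient plus
# Luce choice rule over category-wise summed similarity.
def similarity(probe, exemplar, c=2.0):
    # c is the specificity (similarity-gradient) parameter
    return float(np.exp(-c * np.abs(probe - exemplar).sum()))

def choice_probs(probe, exemplars, labels, c=2.0):
    sims = np.array([similarity(probe, e, c) for e in exemplars])
    cats = np.unique(labels)
    evidence = np.array([sims[labels == k].sum() for k in cats])
    return evidence / evidence.sum()          # Luce choice rule

exemplars = np.array([[0.0, 0.0], [0.1, 0.0],   # category 0
                      [1.0, 1.0], [0.9, 1.0]])  # category 1
labels = np.array([0, 0, 1, 1])
p = choice_probs(np.array([0.1, 0.1]), exemplars, labels)
# a probe near the category-0 exemplars yields p[0] close to 1
```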

“Learning” in REMERGE
– Connection weights in REMERGE are specified by the modeler, not learned by a connection-adjustment rule.
– Stronger weights lead to better performance.
– Weight strength can vary as a function of amount of exposure, individual differences, and brain injury.

Phenomena Considered
– Benchmark simulations
  – Categorization
  – Recognition memory
– Acquired equivalence
– Associative chaining
  – In paired-associate learning
  – In hippocampal reactivation after spatial learning
– Transitive inference
  – Effects of increasing study
  – Effects of sleep
– Spared category learning in amnesia

Acquired Equivalence (Shohamy & Wagner, 2008)
– Study: F1-S1; F3-S3; F2-S1; F2-S2; F4-S3; F4-S4
– Test
  – Premise: F1: S1 or S3?
  – Inference: F1: S2 or S4?


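The recurrent route to the acquired-equivalence inference can be sketched with a toy spreading-activation network. This is my own illustrative construction, not the published REMERGE implementation (which uses different dynamics); the weights, gain, and cycle count are arbitrary. Probing with F1 directly activates the F1-S1 trace, activation spreads to F2-S1 through the shared scene and on to F2-S2, so S2 ends up favored over the foil S4.

```python
import numpy as np

# Toy spreading-activation sketch of acquired-equivalence inference.
# Each stored trace is a face-scene conjunction from the study phase.
traces = [("F1", "S1"), ("F2", "S1"), ("F2", "S2"),
          ("F3", "S3"), ("F4", "S3"), ("F4", "S4")]
n = len(traces)

# Symmetric trace-to-trace weights: 1 if two traces share a face or scene.
w = np.array([[1.0 if i != j and set(traces[i]) & set(traces[j]) else 0.0
               for j in range(n)] for i in range(n)])

a = np.array([1.0 if "F1" in t else 0.0 for t in traces])  # direct match to probe F1
for _ in range(3):
    a = a + 0.3 * (w @ a)        # recurrent spread (gain 0.3 is arbitrary)

ev_s2 = sum(a[i] for i, t in enumerate(traces) if "S2" in t)  # inference target
ev_s4 = sum(a[i] for i, t in enumerate(traces) if "S4" in t)  # foil
```

Without the recurrent cycles, the F2-S2 trace gets no activation from the probe at all, which is the sense in which recurrence lets similarity among stored items, not just direct probe match, drive inference.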
Roles of Neocortical Learning
– Gradually learns the ‘features’ (dimensions of the neocortical distributed representations) that serve as the basis for exemplar learning in the MTL
– Provides efficient, structured distributed representations that capture structure in experience
– But what about those findings showing that new ‘schema-consistent’ knowledge can be integrated into neocortical networks quickly?

Tse et al. (Science, 2007, 2011)
– During training, two wells were uncovered on each trial.
– Additional tests after surgery for old and new associations.
– Then train and test a second pair of new associations.

Schemata and Schema-Consistent Information
– What is a ‘schema’? An organized knowledge structure into which new items can be added.
– What is schema-consistent information? Information consistent with the existing schema.
– Possible examples: trout, cardinal
– What about a penguin? Partially consistent, partially inconsistent
– What about previously unfamiliar odors paired with previously unvisited locations in a familiar environment?

New Simulations
– Initial training with eight items and their properties as indicated at left.
– Added one new input unit, fully connected to the representation layer, to train the network on one of: penguin-isa & penguin-can; trout-isa & trout-can; cardinal-isa & cardinal-can.
– Used either focused or interleaved learning.
– The network was not required to generate item-specific name outputs.

New Learning of Consistent and Partially Inconsistent Information

Overall Discussion
– The work described here (with a new hippocampal model and an old neocortical model) addresses both types of challenge to the CLS theory.
– But many questions remain:
  – What is an item, and how is it represented in the hippocampus and the neocortex?
  – What new information is sufficiently ‘schema-consistent’ to be learned rapidly in amnesia?
  – Even if the models capture important features of hippocampal and neocortical learning, how are these processes actually implemented in real nervous systems?