Memory and Learning: Their Nature and Organization in the Brain

Presentation transcript:

Memory and Learning: Their Nature and Organization in the Brain
James L. McClelland, Stanford University

Some features of human learning and memory
Over a lifetime we can learn so much:
- Names of thousands of people and things
- The layout of familiar neighborhoods
- Academic domains including math, history, biology…
- Many skills, such as driving, typing, reading, computer programming, playing chess, …
And yet our memory is far from perfect:
- We forget
- We falsely recollect
- We blend information from different experiences

Results of Successive Reconstructions

Constructive Memory
Memory and the paleontologist metaphor:
- Fragments stitched together with the aid of plaster, glue … prior knowledge, beliefs, and desires.
- Fragments may come from one or many dinosaurs… not necessarily of the same species!
From metaphor to mechanism: what do we know about learning and memory in the brain that can help explain how we can learn so much, and yet also have the limitations that we do?

How Does the Brain Learn?
By adjusting the strengths of connections among neurons in the course of neural activity. There are about 100 billion neurons in the brain, with up to 100 thousand synapses each, providing a tremendous substrate for learning and memory.

What is a Memory?
- The trace left in the memory system by an experience?
- A representation brought back to mind of a prior event or experience?
Note that in some theories these things are assumed to be one and the same. Not so in a neural network! Here, the knowledge is in your connections.

In a neural network…
- The trace left by an experience is a pattern of adjustments to the connections among the units participating in processing it.
- The representation brought back to mind is a pattern of activation, which may be similar to that produced by the experience, constructed with the participation of the affected connections.
Such connections are generally assumed to be affected by many other learning episodes as well, so the process of 'reinstatement' is always subject to influence from traces of other experiences.

Computational Models of Learning in Neural Networks
We don't really know how the brain achieves its remarkable feats of learning, but we are getting better and better at training artificial neural networks. We generally use an error-correcting learning algorithm, driven by the mismatch between actual and desired output, otherwise known as gradient descent. With these models we have captured quite a lot of what we know about the time course and outcome of human learning, so I assume the brain has figured out how to achieve the same result, even though it remains unclear how it actually does so.

A pattern associator model
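A minimal pattern associator of the kind shown on this slide can be sketched in a few lines. The patterns below are invented for illustration (Hadamard rows, chosen so the inputs are mutually orthogonal and the delta rule converges cleanly); the speaker's actual model details are not specified in the transcript.

```python
import numpy as np

# Three made-up input patterns (rows of a Hadamard matrix, mutually
# orthogonal) and arbitrary target patterns, in +1/-1 coding.
inputs = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],
    [1, -1,  1, -1,  1, -1,  1, -1],
    [1,  1, -1, -1,  1,  1, -1, -1],
], dtype=float)
targets = np.array([
    [ 1, -1,  1, -1],
    [-1, -1,  1,  1],
    [ 1,  1, -1, -1],
], dtype=float)

W = np.zeros((8, 4))   # all connection weights start at zero
lr = 0.05              # small learning rate

for epoch in range(200):                  # repeated interleaved presentations
    for x, t in zip(inputs, targets):
        out = x @ W                       # linear activation of output units
        W += lr * np.outer(x, t - out)    # delta rule: reduce output error

print(inputs @ W)   # each row is now (very nearly) its target pattern
```

All three mappings end up stored in the same set of weights. With overlapping (non-orthogonal) inputs the same rule still works, but the patterns interfere with each other, which is exactly the property exploited later in the talk.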

A Deep Neural Network

A Deep Convolutional Neural Network

Combining a CNN with an RNN to describe images

Auto-associator network
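An auto-associator stores patterns in recurrent connections among one pool of units and retrieves a whole pattern from a partial cue, the pattern-completion operation later attributed to the hippocampus. A minimal Hopfield-style sketch, with patterns and sizes invented for illustration:

```python
import numpy as np

# Two made-up 12-unit patterns (+1/-1 coding) to store.
patterns = np.array([
    [ 1, -1,  1,  1, -1, -1,  1, -1,  1, -1,  1, -1],
    [-1,  1,  1, -1,  1, -1, -1,  1, -1,  1,  1,  1],
], dtype=float)

# Hebbian storage: strengthen connections between co-active units.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)         # no self-connections

def complete(cue, steps=5):
    """Pattern completion: repeatedly update all units from the cue."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Cue with the second half of pattern 0 zeroed out ("forgotten").
cue = patterns[0].copy()
cue[6:] = 0
print(complete(cue))           # recovers the full stored pattern
```

Because the trace is in the connections, the recovered pattern is reconstructed rather than read out from a dedicated storage location.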

A DQN that learns to play games
The model learns, by trial and error, to predict the reward value of each possible action given the visual input. A model combining supervised learning with this approach beats a human expert at Go.
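The core idea can be shown without the deep network: below is tabular Q-learning, the simple precursor of the DQN, on a toy 5-state corridor. The environment, states, and parameters are all invented for illustration and are not the Atari or Go setups the slide refers to.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy task: a 5-state corridor where stepping right from the last state
# pays reward 1. Q[s, a] is the predicted reward value of each action.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
gamma, lr, eps = 0.9, 0.5, 0.3

for episode in range(300):
    s = 0
    while True:
        # Epsilon-greedy: mostly take the best-valued action, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
        r = 1.0 if (a == 1 and s == n_states - 1) else 0.0
        # Trial-and-error update toward reward plus discounted future value.
        Q[s, a] += lr * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if r > 0:
            break

print(np.argmax(Q, axis=1))   # learned policy: move right in every state
```

A DQN replaces the table Q with a deep network mapping visual input to action values, trained by the same kind of error-correcting update.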

Properties of Neural Network Models
- They can learn many input-output mappings in the same set of connection weights.
- Repetition leads to greater strength.
- Noise gets averaged out over repetitions: the central tendency emerges.
- They learn to generalize, so they can respond to novel inputs: similar inputs produce similar outputs. That's good much of the time, but can result in interference when similar inputs must be mapped to different outputs.

A Neural Network Based Approach to the Neural Basis of Memory: Complementary and Cooperating Brain Systems
- Memory performance depends on multiple contributing brain systems.
- Each system is a neural network, but may be parameterized differently, to perform complementary functions.
- Contributions of components to overall task performance depend on their neuro-mechanistic properties.
- Components work together, so that overall performance may be better than the sum of the independent contributions of the parts.

The Complementary Learning Systems Theory (McClelland, McNaughton & O'Reilly, 1995)
- Key findings from effects of brain damage
- The basic theory
- Neuroscience data consistent with the account
- Why there should be complementary systems

Bilateral destruction of the hippocampus and related areas produces:
- Profound deficit in forming new arbitrary associations and new episodic memories.
- Preserved general intelligence, knowledge, and acquired skills.
- Preserved learning of new skills and item-specific priming.
- Loss of recently learned material, with preservation of prior knowledge, acquired skills, and remote memory.

The Theory: Processing and Learning in Neocortex
An input and a response to it result in activation distributed across many areas of the neocortex. Small connection weight changes occur as a result, producing:
- Item-specific effects
- Gradual skill acquisition
These small changes are not sufficient to support rapid acquisition of arbitrary new associations.

Complementary Learning System in the Hippocampus
- Bi-directional connections produce a reduced description of the cortical pattern in the hippocampus.
- Large connection weight changes bind the bits of the reduced description together.
- Cued recall depends on pattern completion within the hippocampal network.
- Consolidation occurs through repeated reactivation, leading to the accumulation of small changes in cortex.

Supporting Neurophysiological Evidence
- The necessary pathways exist.
- The anatomy and physiology of the hippocampus support its role in fast learning.
- Hippocampal representations are reactivated during sleep.

Different Learning and Coding Characteristics of Hippocampus and Neocortex
- The hippocampus learns quickly, to allow one- (or a few-) trial learning of the particulars of individual items and events.
- The cortex learns slowly, to allow sensitivity to the overall statistical structure of experience.
- The hippocampus uses sparse conjunctive representations to maintain the distinctness of specific items and events and minimize interference.
- The cortex uses representations that start out highly overlapping and differentiate gradually, to allow generalization by default and differentiation where necessary.
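The claim that sparse codes keep items distinct can be illustrated by comparing the average overlap of random dense versus sparse binary patterns. The unit count and activity levels below are invented for illustration, not measured values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 1000

def mean_overlap(active_fraction, n_patterns=50):
    """Average number of units shared by two random binary patterns."""
    k = int(active_fraction * n_units)
    pats = np.zeros((n_patterns, n_units))
    for p in pats:
        p[rng.choice(n_units, size=k, replace=False)] = 1
    overlaps = pats @ pats.T           # pairwise counts of shared active units
    np.fill_diagonal(overlaps, 0)      # ignore each pattern's self-overlap
    return overlaps.sum() / (n_patterns * (n_patterns - 1))

print(mean_overlap(0.50))   # dense, cortex-like coding: heavy overlap
print(mean_overlap(0.02))   # sparse, hippocampus-like coding: almost none
```

With half the units active, any two patterns share hundreds of units, so weight changes for one item inevitably touch others; with 2% activity, two random patterns share well under one unit on average, so item-specific traces barely collide.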

Examples of neurons found in entorhinal cortex and hippocampal area CA3, consistent with the idea that the hippocampus, but not the cortex, uses sparse conjunctive coding. Recordings were made while the animal traversed an eight-arm radial maze.

Why Are There Complementary Learning Systems?
- Discovery of structure requires gradual interleaved learning with dense (overlapping) patterns of activation. Models based on this idea have led to successful accounts of many aspects of conceptual development and of the disintegration of conceptual knowledge in neurodegenerative disease (R&M'04).
- Rapid learning of new information in such systems leads to catastrophic interference: structured knowledge gradually built up is rapidly destroyed.

Keil, 1979

The Model of Rumelhart (1990)

The Training Data: All propositions true of items at the bottom level of the tree, e.g.: Robin can {grow, move, fly}

Target output for ‘robin can’ input

[Figure: the network's internal representations at three points in training (early, later, later still) as experience accumulates]

Emergence of Cognitive Abilities through Interleaved Learning
Distributed representations that capture semantic knowledge, and many other cognitive abilities, emerge through a gradual interleaved learning process. The progression of learning and the representations formed capture many aspects of cognitive development, and the outcome exhibits the cognitive abilities and skills of human adults.

What happens in this system if we try to learn something new, such as a penguin?

Learning Something New
- Used a network already trained with eight items and their properties.
- Added one new input unit, fully connected to the representation layer.
- Trained the network on the following propositions: penguin isa {living thing, animal, bird}; penguin can {grow, move, swim}.

Rapid Learning Leads to Catastrophic Interference

How the Hippocampus Solves the Catastrophic Interference Problem
- Rapid learning in the hippocampus allows the system to learn new things while leaving the knowledge in the cortex intact.
- If experiences with the new item are occasionally replayed to the cortex, interleaved with ongoing experience with other things, the new item can be gradually integrated into the neocortex without producing catastrophic interference.

Avoiding Catastrophic Interference with Interleaved Learning
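The focused-versus-interleaved contrast can be demonstrated with a toy linear associator (all items, sizes, and learning rates below are invented; this is not the Rumelhart semantic network from the talk). A "new" item shares most of its input features with an old item, as the penguin shares features with other birds:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 20, 10

# Five made-up "old" items, learned first.
old_x = rng.choice([-1.0, 1.0], size=(5, n_in))
old_t = rng.choice([-1.0, 1.0], size=(5, n_out))

# A "new" item sharing most input features with old item 0,
# but requiring a different output.
new_x = old_x[0].copy()
new_x[:4] *= -1
new_t = rng.choice([-1.0, 1.0], size=n_out)

def train(W, pairs, epochs=100, lr=0.02):
    for _ in range(epochs):
        for x, t in pairs:
            W = W + lr * np.outer(x, t - x @ W)   # delta rule
    return W

def recall_error(W, xs, ts):
    return float(np.mean((xs @ W - ts) ** 2))

# Gradual interleaved learning of the old items.
W_old = train(np.zeros((n_in, n_out)), list(zip(old_x, old_t)))

# Focused training on the new item alone overwrites shared connections.
W_focused = train(W_old.copy(), [(new_x, new_t)])

# Interleaved training mixes the new item with replayed old items.
W_inter = train(W_old.copy(), list(zip(old_x, old_t)) + [(new_x, new_t)])

print(recall_error(W_focused, old_x, old_t))   # old knowledge damaged
print(recall_error(W_inter, old_x, old_t))     # old knowledge preserved
```

Focused training drives the new item's error to zero but corrupts recall of the overlapping old items; interleaving the new item with replay of the old ones, as the hippocampus is proposed to do for cortex, integrates it with little damage.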

Overview
- What is "a memory"? The essence of the connectionist/PDP perspective.
- Contrasting systems-level approaches to the neural basis of memory.
- The complementary learning systems approach (McClelland, McNaughton, and O'Reilly, 1995).
- How the complementary learning systems work together to create 'episodic' and 'semantic' memory.

Effect of Prior Association on Paired-Associate Learning in Control and Amnesic Populations [Figure: performance shown relative to base rates]

Kwok & McClelland Model of Semantic and Episodic Memory
- The model includes a slow-learning cortical system and a fast-learning hippocampal system.
- The cortex contains units representing both the content and the context of an experience.
- Semantic memory is gradually built up through repeated presentations of the same content in different contexts.
- Formation of a new episodic memory depends on the hippocampus and the relevant cortical areas, including context.
- Loss of the hippocampus would prevent the initial rapid binding of content and context.
- Episodic memories benefit from prior cortical learning when they involve meaningful materials.
[Diagram: hippocampus connected to neocortical pools for relation, cue, context, and target]

Simulation Results from the KM Model [Figure: model performance shown relative to base rates in the model]

Summary
- Your memories are in your connections; memories are constructed using these traces (and those of other experiences) to constrain the construction process.
- Memory performance involves cooperation among brain systems:
  - Cortical regions that gradually learn to represent content and context
  - Medial temporal regions that can rapidly learn conjunctive associations of cortical patterns
- Working together, these systems allow us to learn new things rapidly without catastrophic interference, relying simultaneously on prior knowledge and new learning.