CLS, Rapid Schema Consistent Learning, and Similarity-Weighted Interleaved Learning
Psychology 209 Feb 26, 2019
Your knowledge is in your connections!
An experience is a pattern of activation over neurons in one or more brain regions. The trace left in memory is the set of adjustments to the strengths of the connections. Each experience leaves such a trace, but the traces are not separable or distinct. Rather, they are superimposed in the same set of connection weights. Recall involves the recreation of a pattern of activation, using a part or associate of it as a cue. The reinstatement depends on the knowledge in the connection weights, which in general will reflect influences of many different experiences. Thus, memory is always a constructive process, dependent on contributions from many different experiences.
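This idea can be sketched in a few lines: two patterns are stored by superimposing their weight adjustments in one Hopfield-style matrix, and one of them is then reconstructed from a partial cue. This is an illustrative toy (names, sizes, and the update rule are my choices, not a model from the lecture):

```python
import numpy as np

# Two "experiences": random binary patterns over 32 units.
rng = np.random.default_rng(0)
p1 = rng.choice([-1.0, 1.0], size=32)
p2 = rng.choice([-1.0, 1.0], size=32)

# Each experience leaves a trace: an outer-product weight adjustment.
# The traces are not stored separately; they are summed into the SAME weights.
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0.0)

# Recall: cue with a degraded version of p1 and let the pattern settle.
cue = p1.copy()
cue[:8] = 0.0                  # part of the pattern is missing
for _ in range(5):             # simple synchronous settling
    cue = np.sign(W @ cue)

print(np.array_equal(cue, p1))
```

The reinstated pattern comes entirely from the shared connection weights, so recall here is a reconstruction shaped by both stored experiences, not a lookup of a separate trace.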
Effect of a Hippocampal Lesion
- Intact performance on tests of intelligence, general knowledge, language, and other acquired skills
- Dramatic deficits in the formation of some types of new memories:
  - Explicit memories for episodes and events
  - Paired-associate learning
  - Arbitrary new factual information
- Spared priming and skill acquisition
- Temporally graded retrograde amnesia: the lesion impairs recent memories while leaving remote memories intact
- Note: HM’s lesion was bilateral
Key Points
- We learn about the general pattern of experiences, not just specific things
- Gradual learning in the cortex builds implicit semantic and procedural knowledge that forms much of the basis of our cognitive abilities
- The hippocampal system complements the cortex by allowing us to learn specific things without interference with existing structured knowledge
- In general these systems must be thought of as working together rather than as alternative sources of information; much of behavior and cognition depends on both specific and general knowledge
Emergence of Meaning in Learned Distributed Representations through Gradual Interleaved Learning
- Distributed representations (what ML calls embeddings) that capture aspects of meaning emerge through a gradual learning process
- The progression of learning and the representations formed capture many aspects of cognitive development:
  - Progressive differentiation
  - Sensitivity to coherent covariation across contexts
  - Reorganization of conceptual knowledge
The Rumelhart Model
The Training Data: all propositions true of items at the bottom level of the tree, e.g., Robin can {grow, move, fly}
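The training set can be sketched as (item, relation, attributes) triples. Only the robin `can` example below comes from the slide; the other triples and the `targets` helper are invented here for illustration:

```python
# A minimal sketch of the Rumelhart-style training-set format.
# Each example pairs an (item, relation) input with a set of attribute
# units that should be active at the output.
TRAIN = [
    ("robin", "isa", {"living thing", "animal", "bird"}),  # assumed
    ("robin", "can", {"grow", "move", "fly"}),             # from the slide
]

def targets(item, relation, data=TRAIN):
    """Return the attribute set that should be active at the output."""
    for i, r, attrs in data:
        if (i, r) == (item, relation):
            return attrs
    return set()

print(sorted(targets("robin", "can")))
```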
(Figure: learned representations shown at three points in experience: early, later, and later still)
What happens in this system if we try to learn something new?
Such as a penguin
Learning Something New
Used a network already trained with eight items and their properties. Added one new input unit, fully connected to the representation layer. Trained the network with the following pairs of items:
- penguin-isa: living thing, animal, bird
- penguin-can: grow, move, swim
Rapid Learning Leads to Catastrophic Interference
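The effect can be demonstrated with a toy linear associator (my sketch, not the lecture's Rumelhart network): train on two familiar items, then train rapidly on a new item alone. Because all items share one weight matrix, focused new learning drives the old items' error back up:

```python
import numpy as np

rng = np.random.default_rng(1)
X_old = rng.standard_normal((2, 8))   # two familiar items (distributed codes)
Y_old = rng.random((2, 5))            # their target output properties
x_new = rng.standard_normal((1, 8))   # the new item (e.g. the penguin)
y_new = rng.random((1, 5))

W = np.zeros((8, 5))
lr = 0.02

def sse(X, Y):
    return float(((X @ W - Y) ** 2).sum())

# Phase 1: train on the familiar items until their error is negligible.
for _ in range(2000):
    W -= lr * X_old.T @ (X_old @ W - Y_old)
err_before = sse(X_old, Y_old)

# Phase 2: rapid, focused training on the new item ONLY.
for _ in range(2000):
    W -= lr * x_new.T @ (x_new @ W - y_new)
err_after = sse(X_old, Y_old)

print(err_before, err_after)   # error on the old items rises sharply
```

The interference arises precisely because the items' distributed input codes overlap, so updates for the new item move weights that the old items depend on.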
Avoiding Catastrophic Interference with Interleaved Learning
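The fix, in the same toy setting as above (again an illustrative sketch, not the lecture's exact simulation), is to mix the new item into the ongoing stream of old items, so the shared weights are pulled toward a joint solution instead of being overwritten:

```python
import numpy as np

rng = np.random.default_rng(2)
X_old = rng.standard_normal((2, 8))   # two familiar items (distributed codes)
Y_old = rng.random((2, 5))
x_new = rng.standard_normal((1, 8))   # the newly added item
y_new = rng.random((1, 5))

# Interleaving: every training pass covers the old items AND the new one.
X_all = np.vstack([X_old, x_new])
Y_all = np.vstack([Y_old, y_new])

W = np.zeros((8, 5))
lr = 0.02
for _ in range(5000):
    W -= lr * X_all.T @ (X_all @ W - Y_all)

old_err = float(((X_old @ W - Y_old) ** 2).sum())
new_err = float(((x_new @ W - y_new) ** 2).sum())
print(old_err, new_err)   # both stay low: no catastrophic interference
```

The cost, which the similarity-weighted scheme later in the deck tries to reduce, is that everything already known must keep being replayed.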
Rapid Consolidation of Schema Consistent Information
Richard Morris
Tse et al. (Science, 2007, 2011): during training, two wells were uncovered on each trial.
Schemata and Schema Consistent Information
What is a ‘schema’? An organized structure into which existing knowledge is organized.
What is schema-consistent information? Information that can be added to a schema without disturbing it.
What about a penguin? It is partially consistent and partially inconsistent.
In contrast, consider a trout or a cardinal.
New Simulations
- Initial training with eight items and their properties, as before
- Added one new input unit fully connected to the representation layer, also as before
- Trained the network on one of the following pairs of items:
  - penguin-isa & penguin-can
  - trout-isa & trout-can
  - cardinal-isa & cardinal-can
New Learning of Consistent and Partially Inconsistent Information
(Figure: new-learning results, with interference indicated)
Connection Weight Changes after Simulated NPA, OPA and NM Analogs
Tse et al. 2011
How Does It Work?
Remaining Questions
- Are all aspects of new learning integrated into cortex-like networks at the same rate? No; some aspects are integrated much more slowly than others.
- Is it possible to avoid replaying everything one already knows when one wants to learn new things with arbitrary structure? Yes, at least in some circumstances that we will explore.
- Perhaps the answers to these questions will allow us to make more efficient use of both cortical and hippocampal resources for learning.
Toward an Explicit Mathematical Theory of Interleaved Learning
- Characterizing structure in a dataset to be learned
- The deep linear network that can learn this structure
- The dynamics of learning the structure in the dataset:
  - Initial learning of a base dataset
  - Subsequent learning of an additional item
- Using similarity-weighted interleaving to increase the efficiency of interleaved learning
- Initial thoughts on how we might use the hippocampus more efficiently
Hierarchical structure in a synthetic data set
(Figure: hierarchy over the items Sparrow, Hawk, Salmon, Sunfish, Oak, Maple, Rose, Daisy, plus SparrowHawk)
Processing and Learning in a deep linear network
Saxe et al. (2013a, b, …)
SVD of the dataset
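As a sketch of what such an SVD reveals, here is an invented miniature of the deck's dataset (items and properties are illustrative, not the actual training matrix): the leading singular dimension picks out the animal/plant split, the deepest branch of the hierarchy.

```python
import numpy as np

# Rows: items; columns: properties (can-move, can-fly, can-swim,
# has-leaves, has-roots). Values here are invented for illustration.
items = ["canary", "robin", "salmon", "sunfish", "oak", "rose"]
Y = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)

U, S, Vt = np.linalg.svd(Y, full_matrices=False)

# The strongest mode loads only on the animals (the plants' loadings
# are zero), so it encodes the top-level animal/plant distinction.
loads_on_mode0 = np.abs(U[:, 0]) > 0.1
print(np.round(S, 2))
print([item for item, l in zip(items, loads_on_mode0) if l])
```

Weaker modes capture finer splits (bird vs. fish here), matching the deck's point that coarse structure dominates the dataset's SVD.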
Dynamics of Learning – one-hot inputs
(Figure: SSE and mode strengths a(t) over training. Variable discrepancy affects the takeoff point, but not the shape; solid lines are simulated values of a(t), dashed lines are based on the equation.)
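The "equation" behind dashed curves of this kind is, in Saxe et al. (2013), a sigmoidal trajectory for each mode's strength; the transcription below (symbols a0 for the small initial strength, s for the mode's singular value, tau for the time constant) is my sketch, not copied from the slide:

```python
import numpy as np

def mode_strength(t, s, a0=1e-3, tau=1.0):
    """Per-mode strength a(t) = s / (1 + (s/a0 - 1) * exp(-2*s*t/tau)).

    Starts at a0, rises sigmoidally, and saturates at the mode's
    singular value s. Larger s means an earlier takeoff, since the
    takeoff time scales like (tau / 2s) * log(s / a0).
    """
    return s / (1.0 + (s / a0 - 1.0) * np.exp(-2.0 * s * t / tau))

t = np.linspace(0, 10, 200)
a = mode_strength(t, s=3.0)
print(a[0], a[-1])   # begins near a0, ends at s
```

This matches the slide's observation: changing the initial discrepancy (here, a0) shifts the takeoff point of the curve without changing its shape.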
Dynamics of Learning – auto-associator
(Figure: SSE and a(t) against s² for the auto-associator. Dynamics are a bit more predictable; solid lines are simulated values of a(t), dashed lines are based on the equation.)
Adding a new member of an existing category
(Figure: the hierarchy with SparrowHawk added alongside Sparrow, Hawk, Salmon, Sunfish, Oak, Maple, Rose, Daisy)
SVD of New Complete Dataset
The consequence of standard interleaved learning
SVD Analysis of Network Output for Birds
(Figure: adjusted dimensions and the new dimension of the SVD)
Similarity Weighted Interleaved Learning
(Figure: learning curves for full interleaving, similarity-weighted interleaving, and uniform interleaving)
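One way to realize the similarity-weighted scheme (this is my reading of the idea, not the lecture's exact procedure) is to replay old items with probability proportional to their similarity to the new item, so close relatives, which share the structure the new item perturbs, are rehearsed most:

```python
import numpy as np

def replay_probs(old_reps, new_rep, floor=0.05):
    """Map cosine similarity to the new item onto replay probabilities.

    `floor` (an assumed parameter) keeps even dissimilar items replayed
    a little, so no part of the old knowledge is abandoned entirely.
    """
    old = old_reps / np.linalg.norm(old_reps, axis=1, keepdims=True)
    new = new_rep / np.linalg.norm(new_rep)
    sim = np.clip(old @ new, 0.0, None) + floor
    return sim / sim.sum()

# Hypothetical 2-d "representation" coordinates: two bird-like items
# near the new bird, two tree-like items far away.
old_reps = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, 0.1], [-0.9, 0.2]])
new_rep = np.array([1.0, 0.0])
p = replay_probs(old_reps, new_rep)
print(np.round(p, 2))   # most replay mass falls on the two similar items
```

Uniform interleaving corresponds to ignoring the similarities altogether; full interleaving replays everything on every pass.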
Freezing the output weights initially
(Figure: learning curves for full, similarity-weighted, and uniform interleaving with the output weights initially frozen)
Discussion
- Integration of fine-grained structure into a deep network may always be a slow process
- Sometimes this fine-grained structure is ultimately fairly arbitrary and idiosyncratic, although at other times it may be part of a deeper pattern the learner has not previously seen
- One way to address such integration:
  - Initial reliance on a sparse, item-specific representation
  - This could be made more efficient by storing only the ‘correction vector’ in the hippocampus
  - Gradual integration through interleaved learning
SparrowHawk Error Vector After Easy Integration Phase is Complete
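The correction-vector idea can be illustrated in miniature (values below are invented; this is a sketch of the concept, not the lecture's implementation): the fast, hippocampus-like store keeps only the difference between the correct output and what the slow cortex-like network already predicts, which is cheap when the new item is mostly schema-consistent.

```python
import numpy as np

# Hypothetical output-property vectors for a new item (e.g. SparrowHawk).
cortical_prediction = np.array([0.9, 0.8, 0.1, 0.0])  # generic "bird" output
target = np.array([0.9, 0.8, 0.9, 0.0])               # one property deviates

# Store only the DIFFERENCE: mostly zeros for a schema-consistent item.
correction = target - cortical_prediction

# Joint recall: cortical output plus the stored correction.
recalled = cortical_prediction + correction
print(recalled)
```

After the error vector has shrunk through interleaved integration, the stored correction approaches zero and the hippocampal trace can be retired.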
Questions, answers and next steps
- Are all aspects of new learning integrated into cortex-like networks at the same rate? No; some aspects are integrated much more slowly than others.
- Is it possible to avoid replaying everything one already knows when one wants to learn new things with arbitrary structure? Yes, at least in some circumstances that we have explored.
- Perhaps the answers to these questions will allow us to make more efficient use of both cortical and hippocampal resources for learning.