9. Continuous attractor and competitive networks


9. Continuous attractor and competitive networks
Fundamentals of Computational Neuroscience, T. P. Trappenberg, 2002.
Lecture Notes on Brain and Computation
Byoung-Tak Zhang
Biointelligence Laboratory
School of Computer Science and Engineering
Graduate Programs in Cognitive Science, Brain Science and Bioinformatics
Brain-Mind-Behavior Concentration Program
Seoul National University
E-mail: btzhang@bi.snu.ac.kr
This material is available online at http://bi.snu.ac.kr/

Outline
9.1 Spatial representations and the sense of direction
9.2 Learning with continuous pattern representations
9.3 Asymptotic states and the dynamics of neural fields
9.4 ‘Path’ integration, Hebbian trace rule, and sequence learning
9.5 Competitive networks and self-organizing maps

9.1 Spatial representations and the sense of direction
- Auto-associative attractor models store general memory states of the mind: the shape of objects, their smell, texture, or color.
- Point attractor neural networks (PANNs): memories are represented by independent vectors.
- Continuous attractor neural networks (CANNs): the training patterns represent a continuous quantity, such as the spatial location of an object.
- Related concept: topographic maps.

9.1.1 Head direction
- The sense of direction requires a representation of body or head direction, and a mechanism to update this information without visual cues.
Fig. 9.1 (A) Experimental response of a neuron in the subiculum of a rodent when the rodent is heading in different directions in a familiar maze. The dashed line represents the response properties of the same neuron when the rodent is placed in a new, unfamiliar maze. The new response properties are normally similar to the previous ones; that is, head direction cells tend to maintain approximately their response properties to specific head directions. However, the results shown were produced in experiments with a rodent that had cortical lesions weakening the ability to maintain the response properties after the rodent was transferred into a new environment. (B) Responses of many hippocampal neurons in a rodent that fired in relation to the subject's location (place) in a maze. The figure shows the firing rates of the neurons in response to a particular place, whereby the neurons are arranged in the figure so that neurons with similar response properties are adjacent to each other.

9.1.2 Place fields
- Head direction representations are a spatial representation of a one-dimensional feature space in the brain; the same principles apply equally to higher-dimensional representations.
- Neurons in the hippocampus of rats fire in relation to specific locations within a maze (place fields).
- A specific topography of neurons within the hippocampal tissue with respect to their maximal response to a particular place has not been found; the place fields can rearrange between environments.

9.1.3 Spatial representations in network models
- A possible solution for representing head directions:
Fig. 9.2 A proposal for how the activity of nodes, arranged into a circle for clarity, can represent head directions. With the 20 nodes of this model we can represent head directions with a resolution of 18 degrees when using a single binary node as a representation of a head direction. The single active node in the figure, drawn as a solid circle, indicates a head direction of 72 degrees in this example.

9.1.4 Graded winner-take-all models
- Only one node, or one activity packet of nodes, is active at a time.
- The dynamics of the network is governed by the one-dimensional neural field equation (eqns 9.1-9.2), in standard form
  τ dh(x,t)/dt = −h(x,t) + ∫ w(x − y) r(y,t) dy + I_ext(x,t), with r(x,t) = g(h(x,t)).
- The discretization rules (eqns 9.3-9.5) replace positions by node indices, x → x_i = iΔx, and the integral by a sum, giving
  τ dh_i/dt = −h_i + Δx Σ_j w_ij r_j + I_i.
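
The following is a minimal numerical sketch of these discretized dynamics in Python (not part of the lecture; the Gaussian-minus-constant kernel and all parameter values are illustrative assumptions). An activity packet forms under a transient input and persists after the input is removed:

```python
import numpy as np

# Sketch of the discretized field dynamics of eqns 9.1-9.5:
#   tau * dh_i/dt = -h_i + dx * sum_j w_ij r_j + I_i,   r = g(h).
# Kernel shape and all parameter values are illustrative assumptions.

n, dx, tau, dt = 100, 1.0, 1.0, 0.1
sigma, c = 5.0, 0.5                              # kernel width, inhibition constant

x = np.arange(n) * dx
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, n * dx - d)                    # periodic distance on the ring
w = np.exp(-d**2 / (2 * sigma**2)) - c           # local excitation, global inhibition

h = np.zeros(n)
I = np.zeros(n)
I[45:55] = 1.0                                   # transient external input

for step in range(500):
    if step == 100:
        I[:] = 0.0                               # remove the input
    r = (h > 0).astype(float)                    # step ("threshold") activation
    h += dt / tau * (-h + dx * w @ r + I)

# For these parameters, a packet of roughly a dozen nodes remains active.
print("active nodes:", np.flatnonzero(h > 0))
```

The step activation used here is the same threshold function assumed in the formal analysis of Section 9.3.2 below.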

9.2 Learning with continuous pattern representations
- Recurrent neural networks can learn to represent a continuous set of patterns.
- Hebbian rule for the excitatory weights (eqn 9.6): each training pattern adds a contribution proportional to the product of pre- and postsynaptic firing rates.
- In the neural field representation (eqn 9.7), the firing rate r^μ is the firing rate of the neural field while dominated by the training example of a pattern μ presented to the network.
- The inhibition from inhibitory interneurons is summarized by a constant subtracted from the excitatory weights.

9.2.1 Learning Gaussian head direction patterns
- The external input to a node i has a Gaussian profile around a preferred direction (eqn 9.8), depending on the displacement between the head direction α^HD provided by the external input and the optimal firing direction α_i of the cell.
- Each training pattern adds a contribution to each weight component (eqns 9.9-9.10): the contribution is largest for a node with a preferred direction equal to that of the training example, and smaller for a node with a preferred direction different from the direction of the training example.
- For infinite resolution of the model, i.e. Δα → 0, the expressions can be written in continuous notation (eqns 9.11-9.12).
- The resulting weight matrix has a Gaussian shape with the same width as the receptive fields of the nodes (eqns 9.13-9.14), and it depends only on the distance between nodes.
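
A sketch of this construction in Python (the tuning width, normalization, and full tiling of training directions are my assumptions): Hebbian learning on Gaussian head-direction patterns yields a weight matrix that depends only on the angular distance between nodes:

```python
import numpy as np

# Hebbian learning on Gaussian head-direction patterns (Section 9.2.1 sketch).
# Preferred directions and training directions tile the circle; tuning width
# and normalization are illustrative assumptions.

n = 100
alpha = np.linspace(0.0, 360.0, n, endpoint=False)    # preferred directions (deg)
sigma = 20.0                                          # receptive field width (deg)

def circ_dist(a, b):                                  # angular distance on the circle
    d = np.abs(a - b)
    return np.minimum(d, 360.0 - d)

w = np.zeros((n, n))
for a_hd in alpha:                                    # one training pattern per direction
    r = np.exp(-circ_dist(alpha, a_hd)**2 / (2 * sigma**2))
    w += np.outer(r, r)                               # Hebb: dw_ij ~ r_i * r_j
w /= n

# Shift invariance: the weight depends only on the distance between nodes.
print(np.isclose(w[0, 10], w[30, 40]))                # same angular offset -> True
```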

9.2.2 Gaussian interaction profiles in the brain
- An effective interaction structure of short-distance excitation and long-distance inhibition is consistent with columnar organization in the cortex.
- Example: the superior colliculus, from cell recordings in monkeys.
Fig. 9.3 Data from cell recordings in the superior colliculus of a monkey, indicating the interaction strength ρ_w between cells in this midbrain structure. The solid line displays the corresponding measurement from simulations of a CANN model of this brain structure.

9.2.3 Self-organized interaction structures in CANNs
Fig. 9.4 A recurrent associative attractor network model, similar to the model shown in Fig. 9.2, where the nodes have been placed arbitrarily in physical space on a circle. The relative connection strength between the nodes is indicated by the thickness of the lines between the nodes. Each node responds during learning with a Gaussian firing profile around the stimulus that excites the node maximally. Each node is assigned a receptive field center randomly from a pool of centers covering the periodic training domain. (A) Before training, all nodes have the same relative weights between them. (B) After training, the relative weight structure has changed, with a few strong connections and some weaker connections. (C) The regularity of the interactions is revealed when the nodes are reordered so that nodes with the strongest connections are adjacent to each other.

9.3 Asymptotic states and the dynamics of neural fields
- The asymptotic states (attractors): after training the network on continuous Gaussian patterns, the weight matrix is shift invariant, providing local cooperation and global competition.
- Activity packet: a localized collection of active nodes.
- Shift invariance: the activity packet can be stabilized at any location in the network, depending on the initial external stimulus (dynamic competition).

9.3.1 Attractor regimes
The regime of the CANN model depends on the level of inhibition c:
- Growing activity: the inhibition is weak compared to the excitation.
- Decaying activity: the inhibition is strong compared to the excitation.
- Stable activity packet: an intermediate range of the strength of inhibition relative to that of the excitation.
Fig. 9.5 (A) Time evolution of the firing rates in a CANN model with 100 nodes. Equal external inputs to nodes 30-70 were applied at t = 0 and removed at t = 10τ. The inhibition was set to three times the average firing rate of a node when driven by a Gaussian external input like that used for training the network. (B) The solid line represents the firing rate profile of the simulation shown in (A) at t = 20τ. The dashed line corresponds to the firing rate profile in a similar simulation with inhibition reduced by a factor of three.
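
Rerunning the sketch from Section 9.1.4 for three levels of inhibition illustrates this (the specific values of c are assumptions chosen to land in the three regimes):

```python
import numpy as np

# Sweep of the inhibition constant c in the toy field model used above.
# Weak inhibition -> activity spreads over the whole network; strong
# inhibition -> activity dies out; intermediate -> a stable packet.

n, dx, tau, dt, sigma = 100, 1.0, 1.0, 0.1, 5.0
x = np.arange(n) * dx
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, n * dx - d)                    # periodic distance on the ring
wE = np.exp(-d**2 / (2 * sigma**2))              # excitatory (Gaussian) kernel

for c in (0.05, 0.5, 5.0):                       # weak, intermediate, strong inhibition
    w = wE - c
    h = np.zeros(n)
    I = np.zeros(n)
    for step in range(400):
        I[45:55] = 1.0 if step < 100 else 0.0    # transient external input
        r = (h > 0).astype(float)
        h += dt / tau * (-h + dx * w @ r + I)
    print(f"c={c}: {int(np.sum(h > 0))} nodes active after input removal")
```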

9.3.2 Formal analysis of attractor states
- The analysis assumes a threshold activation function; in simulations this is approximated by a steep sigmoid such as g(x) = 1/(1 + exp(−0.007x)).
- In the stationary state of the dynamics of eqn 9.3, the boundaries x1, x2 of the activity packet satisfy h(x1) = h(x2) = 0 (eqns 9.15-9.16).
- For the weighting function w = w^E − c, this gives the condition (eqn 9.17)
  ∫ from x1 to x2 of w^E(x1 − y) dy = c (x2 − x1),
  whose solutions (other than x2 − x1 = 0) determine the width of the activity packet.
Fig. 9.6 (A) Plot of the left-hand side of eqn 9.17 together with two linear functions with slopes c = 1 and c = 0.4. The intersections of the curves (other than at x2 − x1 = 0) give the solutions of eqn 9.17 we are seeking. (B) The solution of eqn 9.17 as a function of the inhibition constant.
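
The intersection in Fig. 9.6A can be found numerically. In this sketch (my reading of eqn 9.17; the Gaussian kernel and its amplitude are assumptions), the packet width D solves F(D) = cD with F(D) = ∫ from 0 to D of w^E(y) dy:

```python
import numpy as np

# Numerical solution of the packet-width condition (eqn 9.17 as reconstructed
# above): F(D) = integral_0^D wE(y) dy intersects the line c*D. The Gaussian
# kernel, its amplitude, and the grid are illustrative assumptions.

sigma, A = 5.0, 2.0
wE = lambda y: A * np.exp(-y**2 / (2 * sigma**2))

D = np.linspace(0.0, 50.0, 5001)
F = np.cumsum(wE(D)) * (D[1] - D[0])             # running integral of wE from 0

for c in (1.0, 0.4):                             # the two slopes in Fig. 9.6A
    gap = F - c * D
    idx = np.nonzero((gap[:-1] > 0) & (gap[1:] <= 0))[0]  # downward sign change
    width = D[idx[0]] if idx.size else None
    print(f"c={c}: packet width ~ {width}")      # stronger inhibition -> narrower packet
```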

9.3.3 Stability of the activity packet
- The stability of the activity packet with respect to its movement can be assessed by calculating the velocity of its boundaries (eqns 9.18-9.19) and, from these, the velocity of the centre of the activity packet (eqn 9.20).
Fig. 9.7 (A) Two Gaussian bell curves centred around two different values x1 and x2. The striped and dotted areas are equal due to the symmetry of the bell curve; the integrals from x1 to x2 over the two curves are therefore the same. This does not hold if the two curves are asymmetric and have different shapes. (B) The dashed line outlines the shape of an activity packet from a simulation. The symmetry of this activity packet makes the gradients at the boundaries equal except for a sign.

9.3.4 Drifting activity packets
Fig. 9.8 (A) Noisy weight matrix: time evolution of the centre of gravity of activity packets in a CANN model with 100 nodes. The model was trained with activity packets at all possible locations; each component of the resulting weight matrix was then perturbed with some noise. (B) Irregular or partial learning: partial view of the weight matrix resulting from training the network with activity packets at only a few locations. (C) Time evolution of the centre of gravity of activity packets in a CANN model with 100 nodes after training the network on only 10 different locations.

9.3.5 Stabilization of the activity packet
- The drift of the activity packet can be stabilized by a small increase in the excitability of neurons that have recently been active.
- A candidate mechanism is the voltage-dependent nonlinearity of the NMDA receptor: an increase of this voltage-dependent nonlinearity would make more states stable.
Fig. 9.8 (D) 'NMDA' stabilization: the network trained on the 10 locations was augmented with a stabilization mechanism that reduces the firing threshold of active neurons.

9.4 ‘Path’ integration, Hebbian trace rule, and sequence learning
- Updating the state: a subject might not have an absolute value available, for example when rotated with closed eyes.
- Path integration: calculating the new position from the old position and movement cues.
9.4.1 Path integration with asymmetric weighting functions
- The path integration problem can be solved by introducing such asymmetries in the weighting function in a systematic way, with the strength of the asymmetry proportional to the velocity of the movement signalled by idiothetic cues.

9.4.2 Idiothetic update of head direction representations
Fig. 9.9 Model for path integration in CANNs. The central nodes are part of the network with collateral connections used to represent head directions (Fig. 9.2). The rotation nodes represent collections of neurons that signal rotation velocities proportional to their activity. The afferents of these rotation cells can modulate the collateral connections within the head direction network; this is symbolized by synapses close to the synapses of the collateral connections. Each rotation cell can synapse onto each synapse in the head direction network. The separation of the connections, indicated by the solid and dashed lines in the figure, is self-organized during learning.

9.4.3 Self-organization of a rotation network
- A biologically realistic model should self-organize so that, for example, clockwise rotation recruits the 'clockwise' synapses.
- This requires a short-term memory of recent activity, provided by a trace term (eqn 9.21), for example an exponentially decaying average of the form r̄_i(t) = (1 − η) r_i(t) + η r̄_i(t − Δt).
- The weights of the rotation nodes are trained with a Hebbian rule that includes this trace (eqn 9.22); the rule strengthens the weights between a rotation node and the appropriate synapses in the recurrent network.

9.4.4 Updating the network after learning
- The dynamics of the model (eqns 9.23-9.24) follow the usual field equation with an effective weighting function that combines the symmetric collateral weights with the rotation-gated asymmetric component, w^eff = w + r^rot w^rot.
Fig. 9.10 (A) Simulation of a CANN model with idiothetic updating mechanisms. The activity packet can be moved with idiothetic inputs in either clockwise or anti-clockwise directions, depending on the firing rates of the corresponding rotation cells. (B) The different weighting functions from node 50 to the other nodes in the network after learning: w, solid line; w^rot, dashed line; w^eff, dotted line.
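
A sketch of this updating in Python (the shifted-Gaussian form of w^rot and all parameters are assumptions standing in for the self-organized weights of Section 9.4.3): a rotation-cell rate r^rot gates the asymmetric component, and the packet moves only while that cell is active:

```python
import numpy as np

# Sketch of idiothetic updating (eqns 9.23-9.24 as summarized above):
# effective weights w_eff = w + r_rot * w_rot, where w_rot is an asymmetric
# (here: shifted-Gaussian) component. Its form and all parameters are assumptions.

n, dx, tau, dt, sigma, c = 100, 1.0, 1.0, 0.1, 5.0, 0.5
x = np.arange(n) * dx

def kernel(shift=0.0):
    d = x[:, None] - x[None, :] - shift
    d = (d + n * dx / 2) % (n * dx) - n * dx / 2      # signed periodic distance
    return np.exp(-d**2 / (2 * sigma**2))

w = kernel() - c                                      # symmetric attractor weights
w_rot = kernel(shift=3.0) - kernel()                  # asymmetric 'rotation' component

def centre(r):                                        # circular centre of gravity
    ang = 2 * np.pi * x / (n * dx)
    return (np.angle(np.sum(r * np.exp(1j * ang))) % (2 * np.pi)) * n * dx / (2 * np.pi)

h = np.zeros(n)
h[45:55] = 1.0                                        # initialize an activity packet
for r_rot in (0.0, 0.5, 0.0):                         # rotation cell off, on, off again
    for _ in range(200):
        r = (h > 0).astype(float)
        h += dt / tau * (-h + dx * (w + r_rot * w_rot) @ r)
    print(f"r_rot={r_rot}: packet centre ~ {centre((h > 0).astype(float)):.1f}")
```

The packet stays put while r^rot = 0, drifts steadily while r^rot > 0, and rests at its new location once the rotation cell falls silent.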

9.4.5 Sequence learning
- Sequence learning applies the same generic mechanism of asymmetric weighting functions by including a trace term (in pattern space) in the canonical learning rule (eqn 9.25), which associates each pattern with its successor.
- For a sufficient strength of the asymmetric component, set by a strength parameter, the network is able to jump between the patterns of a sequence.
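
A pattern-space sketch (random binary patterns, the strength parameter lam, and synchronous updates are my assumptions; with a genuine trace term the transitions would unfold over a finite time rather than one step per update):

```python
import numpy as np

# Sketch of sequence learning (eqn 9.25 as summarized above): the symmetric
# Hebbian term is augmented with an asymmetric term of strength lam that
# associates each pattern with its successor in the sequence.

rng = np.random.default_rng(0)
n, n_pat, lam = 200, 5, 1.5
pats = rng.choice([-1.0, 1.0], size=(n_pat, n))          # random binary patterns

w = np.zeros((n, n))
for mu in range(n_pat):
    w += np.outer(pats[mu], pats[mu])                    # symmetric (point attractor) term
    w += lam * np.outer(pats[(mu + 1) % n_pat], pats[mu])  # asymmetric (sequence) term
w /= n

s = pats[0].copy()                                       # start in the first pattern
for step in range(6):
    s = np.sign(w @ s)                                   # synchronous update
    print(step, np.round(pats @ s / n, 2))               # overlap with each pattern
```

The printed overlaps show the network state stepping through the stored patterns in order, as the asymmetric component dominates the symmetric one for lam > 1.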

9.5 Competitive networks and self-organizing maps
9.5.1 Two-dimensional SOM
- The network is trained on two-dimensional feature vectors (eqn 9.26).
Fig. 9.11 Architecture of a two-dimensional self-organizing map. Each of the two input values r_1^in and r_2^in, each representing one of two feature components, is mapped onto the map network with individual weight values w^in. The nodes in the map network are arranged in a two-dimensional sheet with collateral connections (not shown) corresponding to the distances between nodes in this two-dimensional sheet.

9.5.2 Simplifying winner-take-all description
- The response of the network is simplified to a Gaussian firing rate profile around the node that receives the strongest input; this winning node is labelled '*' (eqn 9.27).
- The firing rates of the other nodes fall off with their distance from the winning node (eqn 9.28).
- The weights are trained with a Hebbian learning rule (eqn 9.29), so that the weight vector of the winning node moves closest to the corresponding input vector.
Fig. 9.12 Experiment with two-dimensional self-organizing feature maps. (A) Initial map with random weight values. (B) and (C) Two examples of the resulting feature map after 1000 random training examples with different random initial conditions. These simulations are discussed further in Chapter 12.
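
A minimal sketch of this winner-take-all SOM in Python (grid size, learning rate, and neighborhood width are illustrative assumptions):

```python
import numpy as np

# Minimal self-organizing map sketch (Sections 9.5.1-9.5.2): a 10x10 sheet of
# nodes with 2-D weight vectors. The winner is the node whose weight vector
# best matches the input; all nodes update toward the input with a Gaussian
# falloff around the winner in the sheet.

rng = np.random.default_rng(1)
grid = 10
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
w = rng.random((grid * grid, 2))                # random initial weights (Fig. 9.12A)

eps, sigma = 0.1, 2.0                           # learning rate, neighborhood width
for t in range(1000):                           # 1000 random training examples
    r_in = rng.random(2)                        # uniform 2-D feature vector
    winner = np.argmin(np.sum((w - r_in)**2, axis=1))
    dist2 = np.sum((coords - coords[winner])**2, axis=1)
    nbh = np.exp(-dist2 / (2 * sigma**2))       # Gaussian firing profile around winner
    w += eps * nbh[:, None] * (r_in - w)        # Hebbian (Kohonen-style) update

# After training, the sheet roughly unfolds over the input square (cf. Fig. 9.12).
print("corner node weights:", np.round(w[[0, grid - 1, -grid, -1]], 2))
```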

9.5.3 Other competitive networks (1)
Fig. 9.13 Another example of a two-dimensional self-organizing feature map. In this example the network was first trained on 1000 random training examples from the lower-left quadrant; new training examples were then chosen randomly from the lower-left and upper-right quadrants. The parameter t specifies how many training examples have been presented to the network.

9.5.3 Other competitive networks (2)
Fig. 9.14 Example of categorization (vector quantization) of two-dimensional input data (two-dimensional training vectors). The training data are represented as dots, and the input vector that would best evoke a response of each of the three output nodes is represented by a cross. (A) Before training there is no correspondence between the groups of input data and the output nodes representing categories. (B) After training, each node has a 'preferred vector' corresponding to one of the clusters in the training data set.
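
A sketch of this vector quantization (cluster centres, noise level, learning rate, and initialization are my assumptions):

```python
import numpy as np

# Vector quantization sketch (Fig. 9.14): three output nodes compete for 2-D
# inputs drawn from three clusters; only the winner's weight vector moves
# toward each input.

rng = np.random.default_rng(2)
centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
data = np.concatenate([c + 0.05 * rng.standard_normal((100, 2)) for c in centers])
rng.shuffle(data)

w = data[rng.choice(len(data), 3, replace=False)].copy()   # init from data points
for r_in in data:
    winner = np.argmin(np.sum((w - r_in)**2, axis=1))      # competition
    w[winner] += 0.05 * (r_in - w[winner])                  # move the winner only

# Ideally one 'preferred vector' per cluster (Fig. 9.14B); with unlucky
# initialization a node can remain unused, a well-known caveat of hard
# winner-take-all learning.
print(np.round(w, 2))
```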

Conclusion
- Continuous attractor models: spatial representations and winner-take-all dynamics
- Hebbian learning with continuous patterns and Gaussian interaction profiles
- Self-organized interaction structures
- Attractor regimes
- Path integration and sequence learning
- Competitive networks and self-organizing maps