On Bubbles and Drifts: Continuous attractor networks in brain models

Presentation transcript:

On Bubbles and Drifts: Continuous attractor networks in brain models Thomas Trappenberg Dalhousie University, Canada

Once upon a time ... (my CANN shortlist): Wilson & Cowan (1973), Grossberg (1973), Amari (1977), …, Sompolinsky & Hansel (1996), Zhang (1997), Stringer et al. (2002)

It's just a 'Hopfield' net … recurrent architecture, synaptic weights

In mathematical terms … Updating network states (network dynamics) Gain function Weight kernel
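
The slide refers to the standard leaky-integrator field dynamics: node activities are updated by a weight kernel acting on the firing rates produced by a gain function. Below is a minimal sketch of that recipe; the node count, kernel width, inhibition level, gain slope and threshold, and the cue are my own illustrative assumptions, not values from the talk.

import numpy as np

# Minimal 1D ring CANN: leaky-integrator dynamics, sigmoidal gain, and a
# Mexican-hat-like kernel (local excitation minus global inhibition).
# All parameter values here are illustrative assumptions.
N, tau, dt, k = 100, 10.0, 1.0, 40.0
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)                           # distance on the ring
w = np.exp(-d**2 / (2 * 5.0**2)) - 0.3             # weight kernel

def gain(u, beta=5.0, theta=1.0):
    """Sigmoidal gain function mapping activity u to firing rate r."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

u = np.zeros(N)
cue = 3.0 * np.exp(-(x - 50) ** 2 / (2 * 3.0**2))  # transient input at node 50

for t in range(1000):
    I = cue if t < 100 else 0.0                    # cue is removed after 100 steps
    r = gain(u)
    u += (dt / tau) * (-u + (k / N) * (w @ r) + I)

# A self-sustained bubble (activity packet) should remain near node 50.
print("packet peak at node", int(np.argmax(gain(u))))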

Weights describe the effective interaction profile in the superior colliculus. TT, Dorris, Klein & Munoz, J. Cog. Neuro. 13 (2001)

The network can form bubbles of persistent activity (in Oxford English: activity packets). (Figure: end states.)

Space is represented with activity packets in the hippocampal system. From Samsonovich & McNaughton, Path integration and cognitive mapping in a continuous attractor neural network model, J. Neurosci. 17 (1997)

There are phase transitions in the weight-parameter space

CANNs work with spiking neurons Xiao-Jing Wang, Trends in Neurosci. 24 (2001)

Shutting off also works in the rate model. (Figure: node activity over time.)

Various gain functions are used. (Figure: end states.)

CANNs can be trained with Hebbian learning. (Figure: training pattern.)

Normalization is important for a convergent method. (Figure: weight matrix w(x,y) and cross-section w(x,50), starting from random initial states, over training time, with weight normalization.)
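
A sketch of what the Hebbian training with weight normalization described in the two slides above could look like; the Gaussian training packets, learning rate, and the specific row-wise L2 normalization are assumptions for illustration.

import numpy as np

# Hebbian training of the weight kernel from Gaussian activity packets, with
# weight normalization after each pattern. Packet width, learning rate, and
# the normalization scheme are illustrative assumptions.
N, sigma, eta = 100, 5.0, 0.1
x = np.arange(N)
rng = np.random.default_rng(0)
w = 0.01 * rng.standard_normal((N, N))             # random initial weights

for epoch in range(20):
    for c in range(N):                             # one training pattern per location
        d = np.minimum(np.abs(x - c), N - np.abs(x - c))
        r = np.exp(-d**2 / (2 * sigma**2))         # Gaussian activity packet centred at c
        w += eta * np.outer(r, r)                  # Hebbian update, dw_ij ~ r_i * r_j
        w /= np.linalg.norm(w, axis=1, keepdims=True)   # normalize incoming weights per node

# w[:, 50] should now be an approximately translation-invariant bell centred on node 50.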

Gradient-descent learning is also possible (Kechen Zhang): gradient descent with regularization = Hebb + weight decay
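
Read as an update rule, "gradient descent with regularization = Hebb + weight decay" amounts to something of the form (my paraphrase, with learning rate $\eta$ and decay constant $\lambda$):

\Delta w_{ij} = \eta \left( r_i \, r_j - \lambda \, w_{ij} \right)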

CANNs have a continuum of point attractors. (Figure: point attractors with basins of attraction; a line of point attractors.) The two can be mixed: Rolls, Stringer & Trappenberg, A unified model of spatial and episodic memory, Proceedings of the Royal Society B 269:1087-1093 (2002)

Neuroscience applications of CANNs: persistent activity (memory) and winner-takes-all (competition); working memory (e.g. Compte, Wang, Brunel); place and head-direction cells (e.g. Zhang, Redish, Touretzky, Samsonovich, McNaughton, Skaggs, Stringer et al.); attention (e.g. Olshausen, Salinas & Abbott); population decoding (e.g. Wu et al., Pouget, Zhang, Deneve); oculomotor programming (e.g. Kopecz & Schöner, Trappenberg); etc.

The superior colliculus integrates exogenous and endogenous inputs. (Figure: diagram of input pathways to the superior colliculus, including the cerebellum.)

The superior colliculus is a CANN. TT, Dorris, Klein & Munoz, J. Cog. Neuro. 13 (2001)

CANN with adaptive input strength explains express saccades

CANNs are great for population decoding (a fast pattern-matching implementation)
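
A small sketch of the decoding step: the CANN relaxes a noisy population response onto the nearest clean activity packet, whose position is then trivially read out. Only the read-out (a circular population vector) is shown here; the tuning width and noise level are illustrative assumptions.

import numpy as np

# Population-decoding read-out: after the network has relaxed a noisy response
# onto a clean activity packet, the encoded value is the packet position.
N = 100
x = np.arange(N)
true_pos = 72.0
d = np.minimum(np.abs(x - true_pos), N - np.abs(x - true_pos))
rng = np.random.default_rng(1)
r = np.exp(-d**2 / (2 * 3.0**2)) + 0.3 * rng.standard_normal(N)   # noisy hill of activity

theta = 2 * np.pi * x / N                           # node positions as angles on the ring
est = (np.angle(np.sum(r * np.exp(1j * theta))) % (2 * np.pi)) * N / (2 * np.pi)
print("true position:", true_pos, " decoded position:", round(est, 1))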

CANNs (integrators) are stiff

… and drift and jump TT, ICONIP'98

A modified CANN solves path integration
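
A sketch of the path-integration idea: a velocity-gated asymmetric weight component pushes the activity packet around the ring, so the packet position integrates the velocity signal over time. In the published model (Stringer et al.) such asymmetric weights are learned with a trace rule from rotation-cell input; the hand-built shifted kernel and all parameters below are stand-in assumptions.

import numpy as np

# Path-integration sketch: symmetric kernel holds the packet, a velocity-gated
# asymmetric component moves it. All values are illustrative assumptions.
N, tau, dt, k = 100, 10.0, 1.0, 40.0
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)
w_sym = np.exp(-d**2 / (2 * 5.0**2)) - 0.3          # symmetric kernel: holds the packet
w_asym = np.roll(w_sym, 1, axis=1) - w_sym          # shifted minus original: moves the packet

def gain(u, beta=5.0, theta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

u = np.zeros(N)
cue = 3.0 * np.exp(-(x - 20) ** 2 / (2 * 3.0**2))   # set the packet up at node 20
velocity = 0.5                                      # constant velocity signal

for t in range(2000):
    I = cue if t < 100 else 0.0
    v = 0.0 if t < 100 else velocity                # velocity drive starts after the cue
    r = gain(u)
    u += (dt / tau) * (-u + (k / N) * ((w_sym + v * w_asym) @ r) + I)

print("packet set up at node 20, now peaks at node", int(np.argmax(gain(u))))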

CANNs can learn dynamic motor primitives Stringer, Rolls, TT, de Araujo, Neural Networks 16 (2003).

Drift is caused by asymmetries; NMDA-style stabilization reduces it

CANNs can support multiple packets Stringer, Rolls & TT, Neural Networks 17 (2004)

How many activity packets can be stable? T.T., Neural Information Processing-Letters and Reviews, Vol. 1 (2003)

Stabilization can be too strong TT & Standage, CNS’04

CANNs can discover dimensionality

The model equations. Continuous dynamics (leaky integrator):

\tau \frac{du_i(t)}{dt} = -u_i(t) + \frac{k}{N} \sum_j w_{ij}\, r_j(t) - C + I_i^{\mathrm{ext}}(t), \qquad r_i(t) = \frac{1}{1 + \exp\!\big(-\beta\,(u_i(t) - \theta)\big)}

where u_i is the activity of node i, r_i its firing rate, w_{ij} the synaptic efficacy matrix, C the global inhibition, I_i^{\mathrm{ext}} the visual input, \tau the time constant, k a scaling factor, N the number of connections per node, and \beta and \theta the slope and threshold of the gain function. NMDA-style stabilization and Hebbian learning (\Delta w_{ij} \propto r_i\, r_j) complete the model.