A globally asymptotically stable plasticity rule for firing rate homeostasis
Prashant Joshi & Jochen Triesch



Synopsis
Networks of neurons in the brain perform diverse cortical computations in parallel. The external environment and experience modify these neuronal circuits via synaptic plasticity mechanisms. Correlation-based Hebbian plasticity forms the basis of much of the research on the role of synaptic plasticity in learning and memory. So what is wrong with Hebbian plasticity?

Synopsis
Classical Hebbian plasticity is a positive-feedback process: in the absence of some kind of regulatory mechanism it leads to unstable activity regimes, driving the circuit into hyper- or hypo-activity.

How can neural circuits maintain stable activity states when they are constantly being modified by Hebbian processes that are notorious for being unstable? A new synaptic plasticity mechanism is presented that:
- Enables a neuron to maintain homeostasis of its firing rate over longer timescales
- Leaves the neuron free to exhibit fluctuating dynamics in response to external inputs
- Is globally asymptotically stable
Simulation results are presented from the single-neuron to the network level, for sigmoidal as well as spiking neurons.

Outline
- Homeostatic mechanisms in biology
- Computational theory and learning rule
- Simulation results
- Conclusion

Homeostatic mechanisms in biology
Slow homeostatic plasticity mechanisms enable neurons to maintain average firing rate levels by dynamically modifying synaptic strengths in the direction that promotes stability.
Abbott, L.F., Nelson, S.B.: Synaptic plasticity: taming the beast. Nature Neurosci. 3, 1178–1183 (2000)
Turrigiano, G.G., Nelson, S.B.: Homeostatic plasticity in the developing nervous system. Nature Neurosci. 5, 97–107 (2004)

Hebb and beyond (Computational theory and learning rule)
Hebb's premise:
$$\frac{dw(t)}{dt} \propto \nu_{pre}(t)\,\nu_{post}(t)$$
We can make the postsynaptic neuron achieve a baseline firing rate of $\nu_{base}$ by adding a multiplicative term $(\nu_{base} - \nu_{post}(t))$.
Stable points: $\nu_{pre}(t) = 0$ or $\nu_{post}(t) = 0$.

Some Math
Learning rule:
$$\tau_w \frac{dw(t)}{dt} = \nu_{pre}(t)\,\nu_{post}(t)\,\big(\nu_{base} - \nu_{post}(t)\big) \qquad (1)$$
Basic assumption: pre- and postsynaptic neurons are linear,
$$\nu_{post}(t) = w(t)\,\nu_{pre}(t) \qquad (2a)$$
Differentiating equation (2a), with $\nu_{pre}$ held constant, we get:
$$\frac{d\nu_{post}(t)}{dt} = \nu_{pre}\,\frac{dw(t)}{dt} \qquad (2b)$$
By substituting in equation (1):
$$\tau_w \frac{d\nu_{post}(t)}{dt} = \nu_{pre}^{2}\,\nu_{post}(t)\,\big(\nu_{base} - \nu_{post}(t)\big) \qquad (3)$$
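To make the rule concrete, here is a minimal numerical sketch (not the authors' code; the step size, rates, and initial weight are illustrative choices) that Euler-integrates rule (1) for a single linear synapse:

```python
# Euler integration of the homeostatic rule (1) for one linear synapse.
# All constants below are illustrative, not taken from the paper.
tau_w = 30.0      # weight time constant
nu_pre = 0.5      # constant presynaptic rate
nu_base = 0.6     # homeostatic target rate
dt = 0.1          # Euler step
w = 0.05          # small positive initial weight

for _ in range(20000):
    nu_post = w * nu_pre                                       # Eq. (2a)
    w += dt * nu_pre * nu_post * (nu_base - nu_post) / tau_w   # Eq. (1)

print(f"final nu_post = {w * nu_pre:.4f} (target {nu_base})")
```

Because the weight change vanishes exactly when $\nu_{post} = \nu_{base}$, the synapse settles at whatever weight reproduces the target rate for the given input.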

Theorem 1 (Stability)
For a SISO case, with the presynaptic input held constant at $\nu_{pre}$, the postsynaptic output having the value $\nu_{post}^{0}$ at time $t = 0$, and $\nu_{base}$ being the homeostatic firing rate of the postsynaptic neuron, the system (3) describing the evolution of $\nu_{post}(\cdot)$ is globally asymptotically stable. Further, $\nu_{post}$ globally asymptotically converges to $\nu_{base}$.
Hint for proof:
1. Use $V(\nu_{post}) = \tfrac{1}{2}\,(\nu_{post} - \nu_{base})^2$ as Lyapunov function.
2. The derivative of V is negative definite over the whole state space.
3. Apply the global invariant set theorem.
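A quick numerical sanity check of this argument (a sketch; the quadratic candidate in step 1 is the natural choice reconstructed from the theorem statement, not a formula confirmed by the slide). Along trajectories of Eq. (3), $\dot V = -(\nu_{pre}^2/\tau_w)\,\nu_{post}\,(\nu_{base}-\nu_{post})^2$, which is non-positive for all non-negative rates:

```python
# Check the sign of dV/dt along Eq. (3) at sampled states, using the
# assumed candidate V(nu_post) = 0.5 * (nu_post - nu_base)**2.
tau_w, nu_pre, nu_base = 30.0, 0.5, 0.6

def dnu_dt(nu_post):
    return nu_pre ** 2 * nu_post * (nu_base - nu_post) / tau_w  # Eq. (3)

def dV_dt(nu_post):
    return (nu_post - nu_base) * dnu_dt(nu_post)  # chain rule

for nu in (0.05, 0.3, 0.6, 0.9, 1.5):
    print(f"nu_post = {nu:4.2f}: dV/dt = {dV_dt(nu):+.2e}")
```

The derivative vanishes at $\nu_{post} = \nu_{base}$ (and at the degenerate point $\nu_{post} = 0$), which is why the proof invokes the invariant set theorem.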

Theorem 2
For a SISO case, with the presynaptic input held constant at $\nu_{pre}$, the postsynaptic output having the value $\nu_{post}^{0}$ at time $t = 0$, and $\nu_{base}$ being the homeostatic firing rate of the postsynaptic neuron, the postsynaptic value at any time $t > 0$ is given by:
$$\nu_{post}(t) = \frac{\nu_{base}\,\nu_{post}^{0}}{\nu_{post}^{0} + \left(\nu_{base} - \nu_{post}^{0}\right)\,e^{-\nu_{pre}^{2}\,\nu_{base}\,t/\tau_w}}$$
Hint for proof: convert equation (3) into a linear form (e.g. via the substitution $u = 1/\nu_{post}$) and solve it.
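Since Eq. (3) is a logistic equation, the closed form above can be cross-checked against direct Euler integration (a sketch; all parameter values are illustrative):

```python
import math

# Compare the closed-form solution of Theorem 2 with Euler integration
# of Eq. (3). All parameter values are illustrative.
tau_w, nu_pre, nu_base, nu0 = 30.0, 0.5, 0.6, 0.05
r = nu_pre ** 2 * nu_base / tau_w          # effective rate constant

def nu_closed(t):
    return nu_base * nu0 / (nu0 + (nu_base - nu0) * math.exp(-r * t))

dt, t_end, nu = 0.01, 2000.0, nu0
for _ in range(int(t_end / dt)):
    nu += dt * nu_pre ** 2 * nu * (nu_base - nu) / tau_w   # Eq. (3)

print(f"numeric: {nu:.5f}   closed form: {nu_closed(t_end):.5f}")
```

Both values approach $\nu_{base}$, and the discrepancy between them shrinks with the step size, as expected for a first-order scheme.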

Results
A sigmoidal postsynaptic neuron receiving presynaptic inputs from two different and independent Gaussian input streams.
- Simulation time n = 5000 steps
- Initial weights uniformly drawn from [0, 0.1]
- ν_base = 0.6, τ_w = 30
- For n <= 2500: IP1: mean = 0.3, SD = 0.01; IP2: mean = 0.8, SD = 0.04
- For n > 2500: IP1: mean = 0.36, SD = 0.04; IP2: mean = 1.6, SD = 0.01
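A rough re-creation of this experiment (a sketch under assumptions: the logistic output $\nu_{post} = 1/(1 + e^{-w \cdot x})$ and a per-step discrete update; the paper's exact neuron model may differ):

```python
import numpy as np

# Sigmoidal neuron, two independent Gaussian input streams, homeostatic
# rule applied per synapse. Stream statistics follow the slide; the
# logistic output function is an assumption.
rng = np.random.default_rng(0)
nu_base, tau_w, n_steps = 0.6, 30.0, 5000
w = rng.uniform(0.0, 0.1, size=2)          # initial weights from [0, 0.1]

rates = []
for n in range(n_steps):
    if n < 2500:
        x = rng.normal([0.30, 0.80], [0.01, 0.04])
    else:                                   # input statistics switch at n = 2500
        x = rng.normal([0.36, 1.60], [0.04, 0.01])
    nu_post = 1.0 / (1.0 + np.exp(-w @ x))
    w += x * nu_post * (nu_base - nu_post) / tau_w
    rates.append(nu_post)

print(f"mean rate over last 500 steps: {np.mean(rates[-500:]):.3f} (target {nu_base})")
```

The output rate tracks the target before and after the statistics switch, while the moment-to-moment rate still fluctuates with the input.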

Results
Single postsynaptic integrate-and-fire neuron receiving presynaptic inputs from 100 Poisson spike trains via dynamic synapses.
- Simulation time t = 10 sec, dt = 1 ms
- Initial weights =
- ν_base = 40 Hz, τ_w = 3600
- For 0 < t <= 5 sec: first 50 spike trains at 3 Hz, remaining 50 spike trains at 7 Hz
- For t > 5 sec: first 50 spike trains at 60 Hz, remaining 50 spike trains at 30 Hz
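A heavily simplified sketch of this setup: a leaky integrate-and-fire neuron with 100 Poisson inputs, the output rate estimated by a low-pass trace, and the rule driven by the nominal presynaptic rates. The LIF constants, the trace time constant, and the initial weight value are assumptions (the slide's initial-weight value is missing); the paper's dynamic synapses are omitted, and with this crude discretization the loop can over- and undershoot before settling:

```python
import numpy as np

# LIF neuron + Poisson inputs + homeostatic rule on rate estimates.
# Input rates, nu_base, and tau_w follow the slide; everything else
# is an illustrative assumption.
rng = np.random.default_rng(1)
dt, T = 1e-3, 10.0                      # 1 ms steps, 10 s total
nu_base, tau_w = 40.0, 3600.0
tau_m, v_th, v_reset = 20e-3, 1.0, 0.0  # assumed LIF constants
w = np.full(100, 0.12)                  # assumed initial weights
v, nu_post, tau_r = 0.0, 0.0, 1.0       # membrane potential, rate trace (Hz)

post_rates = []
for step in range(int(T / dt)):
    t = step * dt
    r1, r2 = (3.0, 7.0) if t <= 5.0 else (60.0, 30.0)
    pre_rates = np.concatenate([np.full(50, r1), np.full(50, r2)])
    spikes = (rng.random(100) < pre_rates * dt).astype(float)
    v += dt * (-v / tau_m) + w @ spikes
    fired = float(v >= v_th)
    if fired:
        v = v_reset
    nu_post += dt * (fired / dt - nu_post) / tau_r   # low-pass rate estimate
    w += dt * pre_rates * nu_post * (nu_base - nu_post) / tau_w
    w = np.clip(w, 0.0, None)           # keep weights non-negative
    post_rates.append(nu_post)

print(f"mean rate estimate over last 2 s: {np.mean(post_rates[-2000:]):.1f} Hz "
      f"(target {nu_base})")
```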

Results
Can synaptic homeostatic mechanisms be used to maintain stable ongoing activity in recurrent circuits?
- 250 I&F neurons, 80% excitatory, 20% inhibitory, with dynamic synapses
- 20 Poisson input spike trains spiking at 5 Hz for t <= 3 sec
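A rate-based analogue of this experiment, sketched below (the slide's network is spiking with dynamic synapses; here 250 sigmoidal rate units stand in, the rule is applied to excitatory recurrent weights only, τ_w is shortened so the adaptation is visible within a 6 s run, and all unlabeled constants are assumptions):

```python
import numpy as np

# Rate-based analogue: 250 units, 80% excitatory / 20% inhibitory, with
# the homeostatic rule applied to the excitatory recurrent weights only
# (an assumption). External drive is on for t <= 3 s, as in the slide.
rng = np.random.default_rng(2)
N, n_exc = 250, 200
dt, T, tau_x = 1e-3, 6.0, 20e-3
nu_base, tau_w = 0.4, 3.0               # illustrative target and (short) tau_w

sign = np.ones(N)
sign[n_exc:] = -1.0                     # inhibitory columns carry negative sign
W = rng.uniform(0.0, 0.05, (N, N)) * sign
x = np.zeros(N)

def f(u):                               # assumed sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-(u - 0.5)))

for step in range(int(T / dt)):
    t = step * dt
    ext = 0.1 if t <= 3.0 else 0.0      # stand-in for the Poisson input drive
    r = f(x)
    x += dt * (-x + W @ r + ext) / tau_x
    # local homeostatic rule: dW_ij ~ r_j * r_i * (nu_base - r_i) / tau_w
    W[:, :n_exc] += dt * (r * (nu_base - r))[:, None] * r[None, :n_exc] / tau_w
    W[:, :n_exc] = np.maximum(W[:, :n_exc], 0.0)   # keep excitatory weights >= 0
    if step % 1000 == 0:
        print(f"t = {t:3.1f} s, mean rate = {r.mean():.3f} (target {nu_base})")
```

The sketch only illustrates the rule operating in a recurrent setting; reproducing the paper's self-sustained spiking activity requires the full spiking model with dynamic synapses.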

Conclusion
- A new synaptic plasticity mechanism is presented that enables a neuron to maintain stable firing rates
- At the same time, the rule leaves the neuron free to show moment-to-moment fluctuations based on variations in its presynaptic inputs
- The rule is completely local
- It is globally asymptotically stable
- It achieves firing rate homeostasis from the single-neuron to the network level

References
1. Hebb, D.O.: Organization of Behavior. Wiley, New York (1949)
2. Abbott, L.F., Nelson, S.B.: Synaptic plasticity: taming the beast. Nature Neurosci. 3, 1178–1183 (2000)
3. Turrigiano, G.G., Nelson, S.B.: Homeostatic plasticity in the developing nervous system. Nature Neurosci. 5, 97–107 (2004)
4. Turrigiano, G.G., Leslie, K.R., Desai, N.S., Rutherford, L.C., Nelson, S.B.: Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391(6670), 892–896 (1998)
5. Bienenstock, E.L., Cooper, L.N., Munro, P.W.: Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32–48 (1982)
6. Oja, E.: A simplified neuron model as a principal component analyzer. J. Math. Biol. 15, 267–273 (1982)
7. Triesch, J.: Synergies between intrinsic and synaptic plasticity in individual model neurons. In: Advances in Neural Information Processing Systems (NIPS 2004). MIT Press, Cambridge (2004)

Lyapunov Function
Let V be a continuously differentiable function from ℝⁿ to ℝ. If G is any subset of ℝⁿ, we say that V is a Lyapunov function on G for the system dx/dt = f(x) if its derivative V' along trajectories does not change sign on G. More precisely, it is not required that the function V be positive definite (just continuously differentiable). The only requirement is on the derivative of V, which cannot change sign anywhere on the set G.

Global Invariant Set Theorem
Consider the autonomous system dx/dt = f(x), with f continuous, and let V(x) be a scalar function with continuous first partial derivatives. Assume that:
1. V(x) → ∞ as ||x|| → ∞
2. V'(x) <= 0 over the whole state space
Let R be the set of all points where V'(x) = 0, and let M be the largest invariant set in R. Then all solutions of the system globally asymptotically converge to M as t → ∞.