Spike timing dependent plasticity (STDP), Markram et al. 1997: pre before post (+10 ms) gives LTP; post before pre (-10 ms) gives LTD.

Spike Timing Dependent Plasticity: Temporal Hebbian Learning. [Figure: weight-change curve (Bi & Poo, 2001); synaptic change (%) plotted against the pre/post spike timing. Pre precedes post (causal): long-term potentiation. Pre follows post (acausal): long-term depression.]
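
This weight-change curve is often summarized by a double-exponential window. A minimal Python sketch; the amplitudes and time constants below are illustrative choices, not values fitted to the Bi & Poo data:

import numpy as np

def stdp_window(dt_ms, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    # Relative weight change for a spike-time difference dt = t_post - t_pre.
    dt = np.asarray(dt_ms, dtype=float)
    # Causal pairings (pre before post, dt > 0) potentiate,
    # acausal pairings (dt < 0) depress.
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

print(stdp_window([10.0, -10.0]))  # LTP at +10 ms, LTD at -10 ms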

Overview of different methods. You are here!

History of the Concept of Temporally Asymmetric Learning: Classical Conditioning (I. Pavlov)

History of the Concept of Temporally Asymmetric Learning: Classical Conditioning (I. Pavlov). Classical conditioning correlates two stimuli that are shifted relative to each other in time. Pavlov's dog: the bell comes earlier than the food. This requires the system to remember the stimuli. Eligibility trace: a synapse remains "eligible" for modification for some time after it was active (Hull 1938, at that time still an abstract concept).

Classical Conditioning: Eligibility Traces. [Diagram: the conditioned stimulus (bell) passes through a stimulus trace E before it is weighted; the unconditioned stimulus (food) enters with a fixed weight of 1; the weighted signals are summed to produce the response. The first stimulus needs to be "remembered" in the system.]

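Such a stimulus trace can be sketched as a leaky integrator that is charged by the conditioned stimulus and then decays; the time constant below is purely illustrative, chosen on the behavioural scale of seconds:

import numpy as np

dt, tau = 1.0, 500.0                     # time step and decay constant in ms
t = np.arange(0.0, 4000.0, dt)
cs = (t == 100.0).astype(float)          # conditioned stimulus ("bell") at 100 ms
e = np.zeros_like(t)                     # stimulus / eligibility trace
for k in range(1, len(t)):
    # The trace is charged by the stimulus and decays exponentially,
    # keeping the synapse "eligible" for modification for a while.
    e[k] = e[k - 1] + dt * (-e[k - 1] / tau) + cs[k]
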
History of the Concept of Temporally Asymmetric Learning: Classical Conditioning (I. Pavlov), Eligibility Traces. Note the vastly different time scales: Pavlov's behavioural experiments span up to about 4 seconds, whereas STDP at neurons acts on milliseconds (at most).

Overview of different methods. The mathematical formulation of the learning rules is similar, but the time scales are very different.

Differential Hebb Learning Rule. [Diagram: the early input x1 ("bell") and the late input x0 ("food") are filtered into traced inputs u1 and u0, weighted, and summed to the output v.] Simpler notation: x = input, u = traced input, v = output, v'(t) = temporal derivative of the output. The rule: dw_i/dt = μ u_i v'(t).

Defining the Trace. In general there are many ways to do this, but usually one chooses a trace that looks biologically realistic and also allows for some analytical calculations. EPSP-like functions: the α-function, h(t) = (t/τ) e^(1 − t/τ); the double exponential, h(t) = e^(−at) − e^(−bt), which is the easiest to handle analytically and therefore often used; and the damped sine wave, h(t) = e^(−at) sin(bt)/b, which shows an oscillation.
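
The three trace shapes can be sketched as follows (all causal, i.e. zero for t < 0; the parameter values are illustrative only):

import numpy as np

def alpha_trace(t, tau=20.0):
    # Alpha function: rises, peaks at t = tau, then decays.
    return np.where(t >= 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def double_exp_trace(t, a=0.05, b=0.2):
    # Difference of two exponentials; analytically the most convenient.
    return np.where(t >= 0, np.exp(-a * t) - np.exp(-b * t), 0.0)

def damped_sine_trace(t, a=0.05, b=0.2):
    # Damped sine wave: an oscillating, band-pass-like trace.
    return np.where(t >= 0, np.exp(-a * t) * np.sin(b * t) / b, 0.0)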

Defining the Traced Input u. A convolution is used to define the traced input; a correlation is used to calculate the weight growth (see below).

Defining the Traced Input u. Specifically (we are dealing with causal functions!): u(t) = ∫ h(τ) x(t − τ) dτ, integrated over τ ≥ 0. If x is a spike train (using the δ-function), x(t) = Σ_k δ(t − t_k), then u(t) = Σ_k h(t − t_k). For example, a single spike at t0 yields u(t) = h(t − t0).
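
Numerically, the traced input is just this convolution; a sketch for a single spike, so that u(t) = h(t − t_spike):

import numpy as np

dt = 0.1                                  # time step (arbitrary units)
t = np.arange(0.0, 200.0, dt)
h = np.exp(-0.05 * t) - np.exp(-0.2 * t)  # causal trace, defined for t >= 0

x = np.zeros_like(t)                      # spike train as discretized deltas
x[int(50.0 / dt)] = 1.0 / dt              # one unit-area spike at t = 50

u = np.convolve(x, h)[:len(t)] * dt       # u = h * x, here u(t) = h(t - 50)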

Differential Hebb Rules – The Basic Rule. In general, v(t) = Σ_i w_i u_i(t), with the same h for all inputs. For two inputs only we thus get for the output v = w0 u0 + w1 u1, with one weight unchanging: w0 = 1 = const. The basic rule, ISO-Learning: dw1/dt = μ u1 v'. ISO stands for Isotropic Sequence Order Learning ("isotropic", as we can also allow w0 to change!).
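
A minimal simulation sketch of ISO learning for one pulse pair (double-exponential traces, Euler integration; all parameter values are illustrative):

import numpy as np

def trace(t, a=0.05, b=0.2):
    return np.where(t >= 0, np.exp(-a * t) - np.exp(-b * t), 0.0)

dt, T = 0.1, 20.0                 # time step and pairing interval
t = np.arange(0.0, 400.0, dt)
u1 = trace(t - 50.0)              # traced early input x1 (spike at t = 50)
u0 = trace(t - 50.0 - T)          # traced late input x0 (spike at t = 50 + T)

mu, w0, w1 = 0.01, 1.0, 0.0       # w0 stays constant
v_prev = w0 * u0[0] + w1 * u1[0]
for k in range(1, len(t)):
    v = w0 * u0[k] + w1 * u1[k]
    w1 += mu * u1[k] * (v - v_prev)   # Euler step of dw1/dt = mu * u1 * v'
    v_prev = v
print(w1)                         # net weight change after one pairing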

Differential Hebb Rules – More Rules (but why?). ISO3, three-factor learning: dw1/dt = μ u1 [v']+ r, where [·]+ denotes that we only use positive contributions. ICO, Input Correlation Learning: dw1/dt = μ u1 u0' (as we take the derivative of the unchanging input u0).
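
Per time step (Euler form), the three rules differ only in the factor multiplying the traced input. A sketch, where dv and du0 are the per-step changes of v and u0, and r samples the relevance signal:

def iso_step(w1, u1, dv, mu):
    return w1 + mu * u1 * dv                 # ISO: driven by the output change

def ico_step(w1, u1, du0, mu):
    return w1 + mu * u1 * du0                # ICO: driven by the change of u0

def iso3_step(w1, u1, dv, r, mu):
    return w1 + mu * u1 * max(dv, 0.0) * r   # ISO3: positive part, gated by r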

Stability Analysis. [Diagram: inputs x1 and x0 are filtered into traced inputs u1 and u0, weighted, and summed to the output v; x1 precedes x0 by T. The resulting weight change splits into an autocorrelation (AC) and a cross-correlation (CC) term.]

Stability Analysis. The CC term is the desired contribution, the AC term the undesired one. Some problems with these differential equations: 1) As we are strictly integrating to ∞, we need to assume that no second pulse pair "ever" comes in. 2) Furthermore, we should assume w1' → 0 (hence μ small), or we also get second-order influences.

Stability Analysis (ISO). Under these assumptions we can calculate Δw_AC and Δw_CC to find out whether the rules are stable or not. In general we assume two pulse inputs, x1(t) = δ(t) and x0(t) = δ(t − T), and find for ISO that it is (only) asymptotically stable for t → ∞. [Diagram: inputs x1 and x0 separated by the interval T.]

Stability Analysis for Pulse-Pair Inputs (ISO). [Plot: w1 versus t for a single pairing, marking the magnitude of one step. Notice the AC contribution: setting x0 = 0, the remaining upward drift is due only to the AC-term influence (unstable!).] The single-pairing relaxation behaviour also shows that the early arrival of a new pulse pair can easily fall into a not yet fully relaxed system (unstable!).

Compare to STDP. ISO weight-change curve (learning window): the curve plots Δw as a function of the pulse-pairing distance T, where we define T > 0 if the x1 signal arrives before x0, and T < 0 otherwise.
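
The curve can be sketched numerically by evaluating the cross-correlation contribution for different pairing distances T (with w1 kept at 0 the AC term is absent and only the CC term shapes the window):

import numpy as np

def trace(t, a=0.05, b=0.2):
    return np.where(t >= 0, np.exp(-a * t) - np.exp(-b * t), 0.0)

dt = 0.1
t = np.arange(0.0, 1000.0, dt)
u1 = trace(t - 100.0)                    # x1 fixed at t = 100

def delta_w(T, mu=0.01):
    u0 = trace(t - 100.0 - T)            # T > 0: x1 before x0; T < 0: after
    v = u0                               # w0 = 1, w1 = 0: CC term only
    return mu * np.sum(u1 * np.gradient(v, dt)) * dt

curve = [delta_w(T) for T in np.arange(-50.0, 51.0, 5.0)]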

Stability Analysis: Compare ISO with ICO. ISO, the basic rule (ISO-Learning): dw1/dt = μ u1 v'. ICO, Input Correlation Learning: dw1/dt = μ u1 u0' (as we take the derivative of the unchanging input u0). Notice the difference: ICO replaces the output derivative v' with the derivative of the input u0.

Stability Analysis: ICO. The weight-change curve is the same as for ISO! For a single pulse pair, however, there is no more AC term in ICO: fully stable, no more drift! [Plot: w1 versus t for ISO (drifting) and ICO (flat after the pairing).]
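
A side-by-side sketch of the two rules for one pulse pair makes the drift visible (same illustrative traces and parameters as above):

import numpy as np

def trace(t, a=0.05, b=0.2):
    return np.where(t >= 0, np.exp(-a * t) - np.exp(-b * t), 0.0)

dt, T = 0.1, 20.0
t = np.arange(0.0, 600.0, dt)
u1, u0 = trace(t - 50.0), trace(t - 50.0 - T)
du0 = np.gradient(u0, dt)

mu, w1_iso, w1_ico = 0.01, 0.0, 0.0
v_prev = u0[0] + w1_iso * u1[0]
for k in range(1, len(t)):
    v = u0[k] + w1_iso * u1[k]                # w0 = 1 fixed
    w1_iso += mu * u1[k] * (v - v_prev)       # AC term: w1_iso keeps changing
    w1_ico += mu * u1[k] * du0[k] * dt        # no AC term: settles after pairing
    v_prev = v
print(w1_iso, w1_ico)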

Stability Analysis: More Comparisons. ISO (dw1/dt = μ u1 v') uses a conjoint learning-control signal, the same for all inputs. ICO (dw1/dt = μ u1 u0') uses a single input as the designated learning-control signal, which makes ICO a heterosynaptic rule of questionable biological realism.

Stability Analysis: More Comparisons. This difference becomes especially visible when one wants to symmetrize the rules (both weights can change!). ISO-Sym needs one control signal: dw0/dt = μ u0 v' and dw1/dt = μ u1 v'. ICO-Sym needs two control signals: dw0/dt = μ u0 u1' and dw1/dt = μ u1 u0'.

The Effects of Symmetry. [Diagram: inputs x1 before x0, separated by T.] Synapse w1 grows because x1 comes before x0; synapse w0 shrinks because x0 comes after x1. ICO-sym is truly symmetric but needs two control signals. ISO-sym behaves in a difficult, unstable, oscillatory way.

ISO3-Learning. ISO3 uses, like ISO, a single learning-control signal: dw1/dt = μ u1 [v']+ r. Idea: the system should learn ONLY at that moment in time when there was a "relevant" event r! We use a shorter trace for r, as it should remain rather restricted in time: the same filter function h, but with parameters a_r and b_r. We also define T_r as the interval between x1 and r. Often T_r = T, hence r occurs together with x0.
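
A sketch of ISO3 for one pairing, with the relevance signal r given a shorter trace and occurring together with x0 (T_r = T); all parameter values are illustrative:

import numpy as np

def trace(t, a, b):
    return np.where(t >= 0, np.exp(-a * t) - np.exp(-b * t), 0.0)

dt, T = 0.1, 20.0
t = np.arange(0.0, 600.0, dt)
u1 = trace(t - 50.0, 0.05, 0.2)
u0 = trace(t - 50.0 - T, 0.05, 0.2)
r = trace(t - 50.0 - T, 0.5, 2.0)    # shorter trace (a_r, b_r): r stays local in time

mu, w1 = 0.01, 0.0
v_prev = u0[0]
for k in range(1, len(t)):
    v = u0[k] + w1 * u1[k]
    # Learn only around the relevant event, using positive v-changes only.
    w1 += mu * u1[k] * max(v - v_prev, 0.0) * r[k]
    v_prev = v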

Stability Analysis: ISO3. Observations: 1) The equations cannot be solved analytically anymore. 2) The AC term is generally NOT equal to zero. 3) Not even asymptotic convergence can be assured in general. So what have we gained? One can show that for T_r = T the AC term vanishes if v has its maximum at T.

Stability Analysis: ISO3, Graphical Proof. [Plot: u1 with its maximum at T; the contributions of AC and CC are depicted graphically.] Before T, x0 has not yet happened. If we restrict learning to the moment when x0 occurs, there is no AC contribution: at that moment u1 sits at its maximum, so its derivative, and with it the AC part of v', is zero. But note: argmax(u1) = T is a questionable assumption!

Stability Analysis: ISO3. [Plots: the ISO3 weight-change curve, which no longer has the STDP shape, and w1 versus t for a single pulse pair: no more upward drift for ISO3 as compared to ISO; ISO3 is stable and relaxes instantaneously.]

A General Problem: T is usually unknown and variable. Remember the questionable assumption argmax(u1) = T. Remedy: introduce a filter bank (example: ISO), spreading the earlier input out over time!
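
A filter bank can be sketched by filtering the same early input with traces of increasing duration, so that some filtered version peaks near the (unknown) interval T; the time scales below are illustrative:

import numpy as np

def trace(t, a, b):
    return np.where(t >= 0, np.exp(-a * t) - np.exp(-b * t), 0.0)

dt = 0.1
t = np.arange(0.0, 600.0, dt)
# One trace per time scale: each filtered version u1^k of the early
# input (spike at t = 50) peaks at a different delay.
bank = [trace(t - 50.0, a, 4.0 * a) for a in (0.2, 0.1, 0.05, 0.025)]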

Stability Analysis: ISO3 with a Filter Bank. With a filter bank we get for the output: v = w0 u0 + Σ_k w1^k u1^k. The single weights now develop as dw1^k/dt = μ u1^k [v']+ r (the original rule was dw1/dt = μ u1 [v']+ r). With δ-function inputs at t = 0 and t = T, the weight change again splits into a CC and an AC term, and it is possible to prove that the AC term becomes zero as a consequence of the learning!