Mechanisms and Models of Persistent Neural Activity: Linear Network Theory

Mechanisms and Models of Persistent Neural Activity: Linear Network Theory
Mark Goldman, Center for Neuroscience, UC Davis

Outline: 1) Neural mechanisms of integration: linear network theory. 2) Critique of traditional models of memory-related activity & integration, and possible remedies.

Issue: How do neurons accumulate & store signals in working memory? In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds: an accumulation phase during the stimulus, followed by storage (working memory) of the firing rates. Puzzle: most neurons intrinsically have brief memories; an isolated neuron's firing rate r follows its synaptic input with an intrinsic time constant τ_neuron of only ~10-100 ms.
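
To make the puzzle concrete, here is a minimal sketch (not from the talk) of a single leaky rate neuron; the time constant, time step, and stimulus are assumed values for illustration.

```python
import numpy as np

# Minimal sketch (assumed parameters): a single leaky rate neuron,
# tau * dr/dt = -r + input(t), with an assumed tau of 100 ms.
tau = 0.1                         # intrinsic time constant (s), assumed
dt = 0.001                        # integration step (s)
t = np.arange(0, 2.0, dt)
stim = (t < 0.5).astype(float)    # hypothetical 0.5 s input pulse

r = np.zeros_like(t)
for i in range(1, len(t)):
    r[i] = r[i-1] + dt / tau * (-r[i-1] + stim[i-1])

# After the stimulus ends, the rate decays back toward zero within ~tau,
# i.e. the neuron "forgets" the input in ~100 ms, not seconds.
print("rate 0.3 s after stimulus offset:", r[t.searchsorted(0.8)])
```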

Neural Integrator of the Goldfish Eye Movement System (from lab of David Tank)

The Oculomotor Neural Integrator. Eye-velocity-coding command neurons (excitatory and inhibitory) project to integrator neurons, whose persistent activity stores a running total of the input commands. Each integrator neuron has a "tuning curve" of firing rate versus eye position. (Data from Aksay et al., Nature Neuroscience, 2001.)

Network Architecture (Aksay et al., 2000). Four neuron populations (excitatory and inhibitory on each side of the midline) receive background inputs and eye-movement command inputs; the outputs are the firing rates of left-side and right-side neurons, which vary with eye position. The connectivity consists of recurrent excitation within each side (Wipsi) and recurrent (dis)inhibition across the midline (Wcontra). Pay attention to low-threshold versus high-threshold neurons; low-threshold neurons are defined as those active at all eye positions.

Standard Model: Network Positive Feedback. Two sources of positive feedback: 1) recurrent excitation and 2) recurrent (dis)inhibition (Machens et al., Science, 2005). This architecture has also suggested how firing can be maintained in the absence of input: a typical isolated single neuron's firing rate decays back to baseline with time constant τ_neuron after a command input, whereas a neuron receiving network positive feedback can maintain its firing.
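
A minimal sketch of this effect (illustrative values, not the fitted model): with feedback weight w, the rate equation τ dr/dt = -r + w r decays with effective time constant τ/(1 - w), so feedback tuned near 1 stretches a ~100 ms neuronal time constant into many seconds.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the fitted goldfish model):
# tau * dr/dt = -r + w*r  decays with effective time constant tau / (1 - w).
tau, dt = 0.1, 0.001              # 100 ms intrinsic time constant, 1 ms steps
for w in (0.0, 0.9, 0.99):
    r = 1.0                       # rate left by an earlier input, arbitrary units
    for _ in range(int(2.0 / dt)):            # simulate 2 s of decay
        r += dt / tau * (-r + w * r)
    print(f"w = {w:.2f}:  rate after 2 s = {r:.3f}   "
          f"(tau_eff = {tau / (1 - w):.1f} s)")
```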

Many-Neuron Patterns of Activity Represent Eye Position. In the joint activity of two neurons, which changes with each saccade, eye position is represented by location along a low-dimensional manifold ("line attractor"). (H.S. Seung, D. Lee)

Line Attractor Picture of the Neural Integrator. Geometrical picture of the eigenvectors in the (r1, r2) plane: there is no decay along the direction of the eigenvector with eigenvalue = 1, and there is decay along the directions of eigenvectors with eigenvalue < 1. The resulting continuum of stable states is the "line attractor," or "line of fixed points."
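
A toy two-neuron example of this geometry (hypothetical weight matrix, chosen only so that one eigenvalue equals 1):

```python
import numpy as np

# Toy 2-neuron "line attractor" (illustrative, not the fitted model):
# the discrete-time update r -> W r has one eigenvalue equal to 1
# (no decay along that eigenvector) and one eigenvalue < 1 (decay).
W = np.array([[0.75, 0.25],
              [0.25, 0.75]])
vals, vecs = np.linalg.eig(W)
print("eigenvalues:", vals)        # 1.0 along (1,1)/sqrt(2), 0.5 along (1,-1)/sqrt(2)

r = np.array([1.0, 0.2])           # arbitrary initial rates
for _ in range(50):
    r = W @ r
print("state after 50 steps:", r)  # collapses onto the r1 = r2 line, then stays put
```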

Next up…
1) A nonlinear network model of the oculomotor integrator, and a brief discussion of Hessians and sensitivity analysis
2) The problem of robustness of persistent activity
3) Some "non-traditional" (non-positive-feedback) models of integration: a) functionally feedforward models; b) negative-derivative feedback models
Note: the issue of noise is skipped; see Lim & Goldman, Neural Comp., and Ganguli/Huh/Sompolinsky, PNAS.
[4) Projects: a) cellular mechanisms of persistence & "bistability neuromodulators"; b) discretized vs. continuous attractors, and noise: tradeoffs; c) an "integrate-and-forage" model of ant colony decision making (relates to the Pillow talk on fitting methods & the Baccus synapse model); d) learning/fitting attractor models (many possibilities; could relate to the Pillow talk on coupling through spike-history filters, and the potential instability of these GLM methods)]

Network Model. Firing-rate dynamics of each neuron i:

τ dri/dt = -ri + Σj Wipsi,ij s_exc(rj) - Σj Wcontra,ij s_inh(rj) + Ti + burst command input(t)

where Wij is the weight of the connection from neuron j to neuron i. The terms are, in order: the intrinsic leak, same-side excitation, opposite-side inhibition, tonic background input, and the burst command input; a nonlinearity (not shown) enforces rates > 0. Coupled nonlinear equations: mathematically intractable?

Network Model: Integrator! The same firing-rate equation as above; here we focus on steady-state behavior (not the dynamics of approach to the steady-state values). For persistent activity, the terms on the right-hand side must sum to 0 at every stored eye position, so that dri/dt = 0 and the network holds the running total of its command inputs.
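
A hedged sketch of this general structure (a single toy population with uniform weights, not the fitted connectivity of the goldfish model):

```python
import numpy as np

# Sketch of the model's general form (toy uniform weights, assumed inputs):
#   tau * dr_i/dt = -r_i + sum_j W_ij * s(r_j) + T_i + burst(t),
# with s(r) = max(r, 0) enforcing nonnegative rates.
N, tau, dt = 20, 0.1, 0.001
W = np.ones((N, N)) / N          # recurrent weights summing to 1 per neuron
T = np.zeros(N)                  # tonic background input (set to 0 here)
s = lambda r: np.maximum(r, 0.0)

t = np.arange(0, 3.0, dt)
r = np.zeros(N)
mean_rate = []
for ti in t:
    burst = 20.0 if 0.5 < ti < 0.6 else 0.0    # brief command pulse (arbitrary units)
    r = r + dt / tau * (-r + W @ s(r) + T + burst)
    mean_rate.append(r.mean())

# Because the recurrent feedback sums to 1, the pulse is integrated and the
# resulting rate persists after the command ends.
print("mean rate at t = 1 s vs t = 3 s:",
      mean_rate[t.searchsorted(1.0)], mean_rate[-1])
```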

Fitting the Model. Fitting condition for neuron i: the current f(ri) needed to maintain firing rate ri must equal the total excitatory current received minus the total inhibitory current received, plus the background input; for each neuron the equation has the form f = W·s + T. Knowns: the rates ri at each eye position (the tuning curves), and f(r), known from single-neuron experiments (not shown). Unknowns: the synaptic weights Wij > 0 and external inputs Ti, and the synaptic nonlinearities s_exc(r), s_inh(r). Assuming a form for s_exc,inh(r) reduces the problem to a constrained linear regression for Wij and Ti (data = the rates ri at the different eye positions).
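
A sketch of that regression step on synthetic data (all functional forms and numbers below are assumptions for illustration; scipy's nonnegative least squares stands in for the constrained fit):

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the fitting step (synthetic data, assumed functional forms):
# at each eye position p we require  f(r_i[p]) = sum_j W_ij * s(r_j[p]) + T_i,
# which for a fixed s(.) is linear in the unknowns (W_ij >= 0, T_i).
rng = np.random.default_rng(1)
n_neurons, n_positions = 5, 40
rates = rng.uniform(0, 100, (n_positions, n_neurons))   # stand-in tuning-curve data

f = lambda r: r                  # current needed to sustain rate r (assumed linear)
s = lambda r: r / (20.0 + r)     # assumed saturating synaptic nonlinearity

# Fit neuron 0: regressors are s(r_j) for all j, plus a constant column for T_i.
A = np.column_stack([s(rates), np.ones(n_positions)])
b = f(rates[:, 0])
coeffs, residual = nnls(A, b)    # nonnegativity enforces W_ij >= 0 (and T_i >= 0 here)
print("fitted weights:", coeffs[:-1])
print("background input:", coeffs[-1], " residual:", residual)
```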

Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron. The network integrates its inputs (example model neuron trace: gray, raw firing rate; black, smoothed rate; green, perfect integral), and all neurons precisely match the tuning-curve data (solid lines: experimental tuning curves; boxes: model rates and variability). Next question: what is the microcircuit architecture of this network? The key experiment constraining that architecture follows.

Inactivation Experiments Suggest the Presence of a Threshold Process. Experiment: inactivate one side of the circuit (removing its inhibition of the other side) and record the remaining side. Result (experiment and model): activity is stable at high firing rates but drifts at low rates. Persistence is maintained at high firing rates; these high rates occur when the inactivated side would have been firing at low rates, which suggests that such low rates are below a threshold for contributing.

Mechanism for generating persistent activity. Consider the network activity when the eyes are directed rightward. Implications: the only positive feedback loop is due to recurrent excitation (mutual excitatory positive feedback, or possibly intrinsic cellular processes that kick in only at high rates); due to the thresholds, there is no mutual inhibitory feedback loop. Excitation, not inhibition, maintains persistent activity! Inhibition is anatomically recurrent, but functionally feedforward.

Model Constrains the Possible Form of Synaptic Nonlinearities… A 2-parameter family of synaptic nonlinearities s(r) was tested; comparing the best-fit performance across nonlinearities shows that networks with saturating synapses cannot fit the data for any choice of the weights Wij. (Fisher et al., Neuron, 2013)

…But Many Very Different Networks Give Near-Perfect Model Performance. Circuits with very different connectivity (for example, global excitation versus local excitation) give nearly identical performance on both the right and left sides; both example circuits use the same synaptic nonlinearity.

Sensitivity Analysis: Which features of the connectivity are most critical? The curvature of the cost function is described by the "Hessian" matrix of second derivatives of the cost C with respect to the weights. On the cost-function surface over the parameters (W1, W2), directions of low curvature are insensitive and directions of high curvature are sensitive. Diagonal elements of the Hessian give the sensitivity to varying a single parameter; off-diagonal elements describe interactions.

Sensitivity Analysis (continued). The diagonal elements give the sensitivity to varying a single parameter, the off-diagonal elements describe interactions between pairs of parameters, and the eigenvectors/eigenvalues of the Hessian identify the patterns of weight changes to which the network is most sensitive. For this network the analysis reveals 3 most important components.
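
A small numerical sketch of this procedure on a toy cost function (the function, step size, and parameter values are assumptions; the real analysis was done on the model-fitting cost):

```python
import numpy as np

# Sketch: build the Hessian of a toy cost by finite differences and read
# off sensitive/insensitive directions from its eigenvectors.
def cost(w):            # toy cost: very curved along w1+w2, nearly flat along w1-w2
    return 10.0 * (w[0] + w[1] - 1.0) ** 2 + 0.01 * (w[0] - w[1]) ** 2

w0, eps = np.array([0.5, 0.5]), 1e-4
n = len(w0)
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
        H[i, j] = (cost(w0 + e_i + e_j) - cost(w0 + e_i - e_j)
                   - cost(w0 - e_i + e_j) + cost(w0 - e_i - e_j)) / (4 * eps ** 2)

curvatures, directions = np.linalg.eigh(H)
print("curvatures:", curvatures)                    # small value = insensitive direction
print("most sensitive direction:", directions[:, -1])   # here ~ (1,1)/sqrt(2)
```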

Sensitive and Insensitive Directions in the Connectivity Matrix. Sensitive directions of the model-fitting cost function: eigenvector 1, make all connections more excitatory; eigenvector 2, strengthen excitation and inhibition together; eigenvector 3, vary high- versus low-threshold neurons. Insensitive direction: eigenvector 10, offsetting changes in weights (more excitation, less inhibition). Shown for one example network. (Fisher et al., Neuron, 2013)

Diversity of Solutions: Example Circuits Differing Only in Insensitive Components. Two circuits with different connectivity differ only in their insensitive eigenvectors, yet give near-identical performance on both the right and left sides.

Issue: Robustness of the Integrator. Integrator equation: τ dr/dt = -r + w r + I(t), with effective time constant τ/(1 - w). Comparing the experimental values for a single isolated neuron (τ of order tens of milliseconds) with the persistence time of the integrator circuit (many seconds) implies that the synaptic feedback w must be tuned to an accuracy of roughly the ratio of these time scales, on the order of 1%.

Need for fine-tuning in linear feedback models. In the fine-tuned model, the feedback wr exactly cancels the intrinsic decay -r, so dr/dt = 0 in the absence of external input and r(t) holds its value. If the feedback is slightly too weak (wr < r), the decay term wins and the behavior is leaky; if slightly too strong (wr > r), the rate grows and the behavior is unstable.
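
A sketch of this sensitivity (assumed parameters): the same rate equation simulated with feedback slightly below, exactly at, and slightly above the tuned value.

```python
import numpy as np

# Sketch of the fine-tuning problem (assumed parameters): simulate
# tau * dr/dt = -r + w*r  for feedback slightly below, at, and above 1.
tau, dt, T = 0.1, 0.001, 4.0
for w, label in [(0.98, "leaky"), (1.00, "tuned"), (1.02, "unstable")]:
    r = 10.0                                   # rate set by an earlier input pulse
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + w * r)
    print(f"w = {w:.2f} ({label}):  rate after {T:.0f} s = {r:.2f}")
```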

Geometry of Robustness and Hypotheses for Robustness on Faster Time Scales. 1) Plasticity on slow time scales reshapes the trough of the energy landscape to make it flat. 2) To control drift on faster time scales: add ridges to the surface to provide "friction"-like slowing of drift, or fill the attractor with a viscous fluid to slow the drift (Koulakov et al. 2002; Goldman et al. 2003).

Questions: 1) Are positive feedback loops the only way to perform integration (the dogma)? 2) Could alternative mechanisms describe the persistent-activity data?

Challenging the Positive Feedback Picture, Part 2: Corrective Feedback Model. Fundamental control-theory result: strong negative feedback of a signal produces an output equal to the inverse of the negative-feedback transformation. In the feedback loop, the output is y = g(x - f(y)); for large gain g this forces f(y) ≈ x, so y ≈ f⁻¹(x).

Integration from Negative Derivative Feedback. Apply the same control-theory result with the fed-back signal being the derivative of the output: strong negative feedback of dy/dt produces an output equal to the inverse of differentiation, i.e., y ≈ ∫ x dt. Integrator!

Persistent Activity from Negative-Derivative Feedback. Math (schematically): if the recurrent input approximately cancels the current rate while opposing its rate of change, the rate equation takes the form (τ + g) dr/dt = input, so in the absence of input the rate does not drift. Picture: when the firing rate starts to drift downward, a (+) corrective signal pushes it back up; when it drifts upward, a (-) corrective signal pushes it back down.
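
A scalar sketch of that schematic (a reduced caricature with assumed parameters, not the full Lim & Goldman network):

```python
import numpy as np

# Scalar sketch of negative-derivative feedback (assumed parameters):
# if the recurrent input is approximately  r - g * dr/dt, then
#   tau * dr/dt = -r + (r - g * dr/dt) + I(t)   =>   (tau + g) * dr/dt = I(t).
tau, g, dt = 0.1, 20.0, 0.001
t = np.arange(0, 5.0, dt)
I = ((t > 0.5) & (t < 1.0)).astype(float) * 10.0    # 0.5 s input pulse, assumed

r, rates = 0.0, []
for inp in I:
    drdt = inp / (tau + g)        # the rearranged equation above
    r += dt * drdt
    rates.append(r)

# The pulse is integrated (the rate ramps up during the input) and the rate
# then persists, because without input dr/dt = 0.
print("rate at t = 1 s:", rates[t.searchsorted(1.0)], "  rate at t = 5 s:", rates[-1])
```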

Negative derivative feedback arises naturally in balanced cortical networks. Derivative feedback arises when: 1) the positive (excitatory) feedback is slower than the negative (inhibitory) feedback, and 2) excitation and inhibition are balanced. (Lim & Goldman, Nature Neuroscience, 2013)

Networks Maintain Analog Memory and Integrate their Inputs

Robustness to Loss of Cells or Changes in Intrinsic or Synaptic Gains. Perturbations tested: changes in intrinsic gains, changes in synaptic gains, excitatory cell death, and inhibitory cell death.

Balanced Inputs Lead to Irregular Spiking Across a Graded Range of Persistent Firing Rates. A spiking version of the model with purely derivative feedback maintains graded persistent firing with irregular spiking, consistent with the experimental distribution of the CVs of interspike intervals (Compte et al., 2003). Lim & Goldman (2013); see also Boerlin et al. (2013).

A working-memory task not easily explained by traditional feedback models: 5 neurons recorded during a PFC delay task show diverse temporal response profiles during the delay (Batuev et al., 1979, 1994).

Response of Individual Neurons in Line Attractor Networks. All neurons exhibit a similar slow decay, due to the strong coupling that mediates the positive feedback. Problem 1: this does not reproduce the differences between neurons seen experimentally! Problem 2: to generate activity stable for 2 seconds (to within +/- 5%) requires a 10-second-long exponential decay.

Feedforward Networks Can Integrate! (Goldman, Neuron, 2009) Simplest example: Chain of neuron clusters that successively filter an input

Feedforward Networks Can Integrate! (Goldman, Neuron, 2009) Simplest example: a chain of neuron clusters that successively filter an input. The summed output is the integral of the input, up to a duration of roughly Nτ for N clusters with time constant τ (this can be proven analytically).
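
A sketch of that chain (N, τ, and the input are assumed values; the summed output is scaled so that it matches the integral):

```python
import numpy as np

# Sketch of the feedforward chain (assumed parameters): N clusters, each a
# low-pass filter of the previous one,
#   tau * dx_k/dt = -x_k + x_{k-1},   x_0 = input(t);
# the scaled sum  tau * sum_k x_k  approximates the integral of the input
# for durations up to roughly N * tau.
N, tau, dt = 40, 0.1, 0.001
t = np.arange(0, 3.0, dt)
inp = ((t > 0.2) & (t < 0.7)).astype(float)       # 0.5 s pulse

x = np.zeros(N)
summed = []
for u in inp:
    drive = np.concatenate(([u], x[:-1]))         # each stage driven by the previous one
    x = x + dt / tau * (-x + drive)
    summed.append(x.sum() * tau)                  # scale so the sum matches the integral

print("true integral of input:", inp.sum() * dt)  # = 0.5
print("network estimate at t = 3 s:", summed[-1])
```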

Same Network Integrates Any Input for a Duration of ~Nτ

Improvement in Required Precision of Tuning. Feedback-based line attractor: a 10-second decay time is required to hold 2 seconds of activity. Feedforward integrator: a 2-second decay suffices to hold 2 seconds of activity.

Feedforward Models Can Fit PFC Recordings (comparison of line-attractor and feedforward-network fits).

Recent data: "time cells" observed in rat hippocampal recordings during a delayed-comparison task show a feedforward progression of activity (MacDonald et al., Neuron, 2011), as in the feedforward integrator (Goldman, Neuron, 2009). (See also similar data during spatial-navigation memory tasks: Pastalkova et al. 2008; Harvey et al. 2012.)

Generalization to Coupled Networks: Feedforward transitions between patterns of activity. A feedforward network can be mapped onto a recurrent (coupled) network by mapping each neuron to a combination of neurons, i.e., by applying a coordinate rotation matrix R to the connectivity matrix Wij (Schur decomposition). Geometrically, the feedforward stages become orthogonal patterns of activity with feedforward transitions between them. (Math of the Schur decomposition: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008.)
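
A small sketch of this rotation (the 4x4 matrix below is a hypothetical example, constructed only so that it has hidden feedforward structure):

```python
import numpy as np
from scipy.linalg import schur

# Sketch of the Schur picture (toy matrix, assumed): a recurrent-looking,
# nonnormal connectivity matrix W is rotated by an orthogonal matrix R into
# an upper-triangular matrix T = R.T @ W @ R.  The columns of R are
# orthogonal activity patterns, and T describes purely feedforward
# interactions between those patterns (plus self-terms on the diagonal).
rng = np.random.default_rng(0)
A = np.triu(rng.normal(0, 1.0, (4, 4)), k=1)      # hidden feedforward structure
Q = np.linalg.qr(rng.normal(size=(4, 4)))[0]      # random rotation
W = Q @ A @ Q.T                                   # looks fully recurrent neuron-by-neuron

T, R = schur(W)                                   # real Schur decomposition
print("W (recurrent-looking):\n", np.round(W, 2))
print("T = R^T W R (feedforward between patterns):\n", np.round(T, 2))
print("eigenvalues (diagonal of T):", np.round(np.diag(T), 2))
```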

Responses of functionally feedforward networks. Stimulating activity pattern 1 drives a feedforward progression through the network's activity patterns, which is reflected in the neuronal firing rates.

Math Puzzle: Eigenvalue analysis does not predict the long time scale of the response! Line attractor networks: the eigenvalue spectrum contains an eigenvalue at λ = 1, corresponding to the persistent mode seen in the neuronal responses. Feedforward networks: the spectrum contains no eigenvalue at 1, apparently implying no persistent mode??? Yet the responses are long-lasting. (Goldman, Neuron, 2009; see also Murphy & Miller, Neuron, 2009; Ganguli & Sompolinsky, PNAS, 2008)

Math Puzzle: Schur vs. Eigenvector Decompositions

Answer to Math Puzzle: Pseudospectral analysis (Trefethen & Embree, Spectra and Pseudospectra, 2005). Eigenvalues λ satisfy the equation (W - λ1)v = 0 and govern the long-time asymptotic behavior. Pseudoeigenvalues λ_ε are the set of all values that satisfy the inequality ||(W - λ_ε 1)v|| < ε; they govern transient responses and can differ greatly from the eigenvalues when the eigenvectors are highly non-orthogonal (nonnormal matrices). In the figure, black dots are the eigenvalues and the surrounding colored contours give the boundaries of the set of pseudoeigenvalues for different values of ε. (From the Supplement to Goldman, Neuron, 2009.)
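
A sketch of how such a computation looks for a toy feedforward chain (matrix size, weights, and the sampled points z are assumptions): a point z lies in the ε-pseudospectrum when the smallest singular value of (zI - W) is below ε.

```python
import numpy as np

# Sketch of a pseudospectral computation (toy feedforward chain, assumed):
# z is in the epsilon-pseudospectrum of W when the smallest singular value
# of (z*I - W) is below epsilon, i.e. ||(W - z*I)v|| < eps for some unit v.
N, w_ff = 50, 1.0
W = np.diag(np.full(N - 1, w_ff), k=-1)          # chain: stage k drives stage k+1
print("eigenvalues:", np.unique(np.round(np.linalg.eigvals(W), 6)))   # all exactly 0

# Sample the smallest singular value at a few points on the real axis.
for z in (0.5, 0.7, 0.9):
    smin = np.linalg.svd(z * np.eye(N) - W, compute_uv=False)[-1]
    print(f"z = {z:.1f}:  min singular value = {smin:.1e}")
# These values are far smaller than the distance from z to the spectrum, so
# even points near 1 lie inside the pseudospectrum for small epsilon; this is
# why the transient response can act like a persistent mode.
```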

Answer to Math Puzzle: Pseudo-eigenvalues. Normal networks: the eigenvalues (a mode at λ = 1) correctly predict the persistent mode in the neuronal responses. Feedforward networks: the eigenvalues alone suggest there should be no persistent mode???

Answer to Math Puzzle: Pseudo-eigenvalues (continued). For the feedforward networks, the pseudoeigenvalues extend out to λ = 1, so the network transiently acts as if it had a persistent mode, even though its eigenvalues do not predict one. (Goldman, Neuron, 2009)

Summary
Short-term memory (over ~10s of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus. Possible mechanisms:
1) Tuned positive feedback (attractor model).
2) Negative-derivative feedback. Features: balance of excitation and inhibition, as observed; robust to many natural perturbations; produces the observed irregular firing statistics.
3) Feedforward network (possibly in disguise). Disadvantage: finite memory lifetime, proportional to the number of feedforward stages. Advantage: a higher-dimensional representation that can produce many different temporal response patterns. Math: not well characterized by eigenvalue decomposition; the Schur decomposition or pseudospectral analysis is better.

Summary (continued)
Possible mechanisms: 1) tuned positive feedback (attractor model); 2) negative-derivative feedback (balance of excitation and inhibition, robustness to many natural perturbations, observed irregular firing statistics).
Modeling issue: degeneracy of the model-fitting solutions. Key question: does this model degeneracy reflect a lack of experimental constraints, or patterns of connectivity that genuinely differ from animal to animal? Hypothesis: (some) model degeneracy is real and provides redundancy that allows the system to robustly re-tune itself.

Acknowledgments. Theory (Goldman lab, UCD): Itsaso Olasagasti (U. Geneva), Dimitri Fisher (Brain Corp.), Sukbin Lim (U. Chicago). Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical).

Extra Slide(s)

Two Possible Threshold Mechanisms Revealed by the Model. The synaptic nonlinearity s(r) and the anatomical connectivity Wij are shown for 2 model networks. Mechanism 1: synaptic thresholds (the synaptic activation is zero until the presynaptic firing rate exceeds a threshold). Mechanism 2: high-threshold cells dominate the inhibitory connectivity. Weight-matrix convention: each entry gives the synaptic weight from a presynaptic neuron onto a postsynaptic neuron; left-side neurons are 1-50 and right-side neurons are 51-100, and within each side the first 25 neurons are excitatory and the second 25 are inhibitory (e.g., the upper-left block is left-side excitatory neurons onto left-side excitatory neurons).