Mechanisms and Models of Persistent Neural Activity: Linear Network Theory

Presentation transcript:

Mechanisms and Models of Persistent Neural Activity: Linear Network Theory. Mark Goldman, Center for Neuroscience, UC Davis.

Outline: 1) Neural mechanisms of integration: linear network theory. 2) Critique of traditional models of memory-related activity & integration, and possible remedies.

Issue: How do neurons accumulate & store signals in working memory? In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds: an accumulation phase followed by storage (working memory). Puzzle: most neurons intrinsically have brief memories; an isolated neuron's firing rate r tracks its synaptic input with a time constant τ_neuron of only ~10-100 ms. [Figure: stimulus and neuronal firing rates during accumulation and storage; a single neuron's response to an input stimulus decays within ~10-100 ms.]
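
A minimal sketch of the puzzle (assuming a standard leaky firing-rate model, τ dr/dt = -r + input, with τ = 100 ms; parameters illustrative, not fitted to data):

import numpy as np

tau = 0.1                      # intrinsic time constant, ~100 ms
dt = 0.001                     # 1 ms time step

r, rates = 0.0, []
for k in range(2000):          # simulate 2 seconds
    t = k * dt
    stim = 1.0 if t < 0.5 else 0.0     # 0.5-s input pulse
    r += dt / tau * (-r + stim)        # leaky rate dynamics
    rates.append(r)

# once the stimulus ends, the response decays as exp(-t/tau):
print(round(rates[499], 3))    # ~0.99   (end of stimulus)
print(round(rates[599], 3))    # ~0.37   (100 ms later)
print(round(rates[999], 3))    # ~0.007  (500 ms later: memory gone)

On its own, the neuron forgets the stimulus within a few hundred milliseconds, orders of magnitude shorter than the seconds of persistence seen in memory circuits.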

Neural Integrator of the Goldfish Eye Movement System. Sebastian Seung, Bob Baker, David Tank, Emre Aksay.

The Oculomotor Neural Integrator. Eye-velocity-coding command neurons (excitatory and inhibitory) project to the integrator neurons, whose persistent activity stores the running total of the input commands and sets eye position. [Figure: command-neuron bursts, integrator-neuron firing rate, and eye position vs. time; data from Aksay et al., Nature Neuroscience, 2001.]

Network Architecture (Aksay et al., 2000). Four neuron populations, excitatory and inhibitory on each side of the midline, with recurrent excitation within each side (W_ipsi) and recurrent (dis)inhibition across the midline (W_contra). Inputs: background inputs & eye movement commands; outputs: the firing rates. [Figure: firing rate vs. eye position (up to ~100 Hz) for left- and right-side neurons.] Pay attention to low-threshold vs. high-threshold neurons, where low-threshold is defined as active for all eye positions.

Standard Model: Network Positive Feedback. Typical story: two sources of positive feedback, 1) recurrent excitation and 2) recurrent (dis)inhibition (Machens et al., Science, 2005). This architecture also suggests how firing can be maintained in the absence of input: a typical isolated single neuron's firing rate decays with time constant τ_neuron after a command input, but a neuron receiving network positive feedback can sustain its firing. [Figure: command input, isolated-neuron response, and response with network positive feedback vs. time.]

Many-Neuron Patterns of Activity Represent Eye Position. [Figure: joint activity of 2 neurons; each saccade steps the network state along a curve.] Eye position is represented by location along a low-dimensional manifold, a "line attractor" (H.S. Seung, D. Lee).

Line Attractor Picture of the Neural Integrator. Geometrical picture of the eigenvectors in the (r1, r2) firing-rate plane: there is no decay along the direction of the eigenvector with eigenvalue = 1, and decay along the directions of eigenvectors with eigenvalue < 1. The result is a "line attractor," or "line of fixed points."
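
A minimal numerical sketch of this picture (a hypothetical two-neuron linear rate network, τ dr/dt = -r + W r, with W chosen to have eigenvalues 1 and 0; not the fitted integrator model):

import numpy as np

# symmetric connectivity with eigenvalues 1 and 0
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
print(np.linalg.eigvals(W))         # eigenvalues 1 and 0
# the eigenvalue-1 eigenvector (1, 1) spans the attractor line r1 = r2

tau, dt = 0.1, 0.001
r = np.array([1.0, 0.2])            # arbitrary initial state
for _ in range(5000):               # 5 s with no input
    r += dt / tau * (-r + W @ r)
print(np.round(r, 3))               # ~[0.6, 0.6]

The component of the state along (1, -1) (eigenvalue 0) decays with time constant τ, while the component along (1, 1) (eigenvalue 1) persists indefinitely: the state falls onto the line of fixed points and stays there.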

Outline 1) A nonlinear network model of the oculomotor integrator, and a brief discussion of Hessians and sensitivity analysis 2) The problem of robustness of persistent activity 3) Some “non-traditional” (non-positive feedback) models of integration a) Functionally feedforward models b) Negative-derivative feedback models [4) Project: Finely discretized vs. continuous attractors, and noise: phenomenology & connections to Bartlett’s & Bard’s talks]

(Recap of the two slides above: the oculomotor neural integrator and its network architecture.)

Network Model. Firing rate dynamics of each neuron: τ dr_i/dt = -r_i + Σ_j W_ij s(r_j) + T_i + B_i(t), where the terms are, in order, the intrinsic leak, same-side excitation plus opposite-side inhibition (W_ij is the weight of the connection from neuron j to neuron i, organized into W_ipsi and W_contra blocks), the tonic background input T_i, and the burst command input B_i(t). For persistent activity, these terms must sum to 0 between commands; when they do, the network integrates its burst commands. We focus on steady-state behavior (not the dynamics of approach to these steady-state values). Coupled nonlinear equations: mathematically intractable?
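
A linearized sketch of this architecture (hypothetical weights: same-side excitation 0.9 and opposite-side inhibition 0.1, chosen so that 0.9 + 0.1 = 1 and the left-right difference mode neither leaks nor grows; the synaptic nonlinearity s(r) of the real model is dropped):

import numpy as np

tau, dt = 0.1, 0.001
# same-side excitation 0.9, opposite-side inhibition 0.1
W = np.array([[ 0.9, -0.1],
              [-0.1,  0.9]])
T = 6.0                            # tonic background input

r = np.array([30.0, 30.0])         # [left, right] firing rates
for k in range(4000):              # 4 seconds
    t = k * dt
    # brief saccadic burst command: excite left, inhibit right
    B = np.array([20.0, -20.0]) if 1.0 <= t < 1.1 else np.zeros(2)
    r += dt / tau * (-r + W @ r + T + B)

print(np.round(r, 2))              # ~[50, 10]

The burst steps the difference r_left - r_right by an amount proportional to its time integral, and the step is then held: the push-pull mode (eigenvalue 0.9 + 0.1 = 1) integrates, while the summed activity relaxes back to the level set by the tonic background.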

Fitting the Model. The conductance-based model was fit by constructing a cost function that simultaneously enforced: intracellular current-injection experiments, a database of single-neuron tuning curves, and firing-rate drift patterns following focal lesions. During fixations (B = 0, dr/dt = 0), each neuron must satisfy r_i = Σ_j W_ij s(r_j) + T_i at every eye position.
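
Schematically, the fixation constraint alone contributes a least-squares term of the following form (a sketch with hypothetical variable names, not the actual fitting code, which also scores the current-injection and lesion-drift data):

import numpy as np

def fixation_cost(W, T, tuning, s):
    # Sum of squared violations of r = W s(r) + T across fixations.
    # tuning: (n_fixations, n_neurons) array of measured firing rates;
    # s: synaptic nonlinearity, applied elementwise.
    err = tuning - (s(tuning) @ W.T + T)
    return np.sum(err ** 2)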

Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron. The network integrates its inputs, and all neurons precisely match the tuning-curve data. [Figure: firing rate (Hz) vs. time (sec); gray: raw firing rate; black: smoothed rate; green: perfect integral; solid lines: experimental tuning curves; boxes: model rates (& variability).] We next wanted to know the microcircuit architecture of this network; the key experiment constraining it follows.

Inactivation Experiments Suggest the Presence of a Threshold Process. Experiment: remove inhibition by inactivating one side while recording from the other. Result, in both experiment and model: activity is stable at high rates but drifts at low rates. Persistence is maintained at high firing rates, which occur when the inactivated side would have been at low rates; this suggests such low rates are below a threshold for contributing. [Figure: firing rate vs. time during inactivation.]

Two Possible Threshold Mechanisms Revealed by the Model. The synaptic nonlinearity s(r) and anatomical connectivity W_ij for two model networks: Mechanism 1: synaptic thresholds (the synaptic activation vs. firing rate curve has a threshold). Mechanism 2: high-threshold cells dominate the inhibitory connectivity, with low-threshold inhibitory neurons contributing weakly. [Figure: weight matrices giving the synaptic weight from each presynaptic neuron onto each postsynaptic neuron. Left-side neurons are 1-50, right-side 51-100; the first 25 of each group is excitatory and the second 25 inhibitory, so e.g. the upper-left block is left-side excitatory neurons onto left-side excitatory neurons.]

Mechanism for Generating Persistent Activity. [Figure: network activity when the eyes are directed rightward, left vs. right side.] Implications: the only positive feedback loop is due to recurrent excitation (mutual excitatory positive feedback, or possibly intrinsic cellular processes that kick in only at high rates); due to the thresholds, there is no mutual inhibitory feedback loop. Excitation, not inhibition, maintains persistent activity! Inhibition is anatomically recurrent, but functionally feedforward.

Sensitivity Analysis: Which features of the connectivity are most critical? The curvature of the cost function is described by the Hessian matrix of second derivatives, H_ij = ∂²C/∂W_i ∂W_j. Diagonal elements give the sensitivity to varying a single parameter; off-diagonal elements capture interactions between parameters. [Figure: cost surface C over weights W1 and W2, with a sensitive direction (high curvature) and an insensitive direction (low curvature).]
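
A minimal sketch of this analysis (a finite-difference Hessian of a toy two-parameter cost; eigenvectors with large eigenvalues are the sensitive directions, those with small eigenvalues the insensitive ones):

import numpy as np

def hessian(C, w, eps=1e-4):
    # finite-difference Hessian of scalar cost C at parameter vector w
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            w_pp = w.copy(); w_pp[i] += eps; w_pp[j] += eps
            w_pm = w.copy(); w_pm[i] += eps; w_pm[j] -= eps
            w_mp = w.copy(); w_mp[i] -= eps; w_mp[j] += eps
            w_mm = w.copy(); w_mm[i] -= eps; w_mm[j] -= eps
            H[i, j] = (C(w_pp) - C(w_pm) - C(w_mp) + C(w_mm)) / (4 * eps**2)
    return H

# toy cost: steep along w1 + w2 (sensitive), flat along w1 - w2 (insensitive)
C = lambda w: 100 * (w[0] + w[1])**2 + 0.01 * (w[0] - w[1])**2
vals, vecs = np.linalg.eigh(hessian(C, np.zeros(2)))
print(np.round(vals, 2))    # ~[0.04, 400]: small and large curvatures
print(np.round(vecs, 2))    # columns: insensitive, then sensitive direction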

Oculomotor Integrator: Which features of the connectivity are most critical? From the Hessian of the fit's cost function with respect to the weights onto neuron i: 1. Diagonal elements: sensitivity to mistuning individual weights. 2. Largest principal components: the most sensitive patterns of weight changes (the 3 most important components are shown).

Sensitive & Insensitive Directions in the Connectivity Matrix. Sensitive directions (of the model-fitting cost function): Eigenvector 1: make all connections more excitatory. Eigenvector 2: strengthen excitation & inhibition together. Eigenvector 3: vary high- vs. low-threshold neurons. Insensitive direction: Eigenvector 10: offsetting changes in weights. [Figure: perturbations of excitatory (exc) and inhibitory (inh) weights along each eigenvector, shown for one example network.] Fisher et al., Neuron, in press.

Diversity of Solutions: Example Circuits Differing Only in Insensitive Components. Two circuits with different connectivity show near-identical performance: they differ only in their insensitive eigenvectors.

Issue: Robustness of the Integrator. Integrator equation: τ dr/dt = -r + w r + I(t), giving an effective time constant τ/(1 - w). Experimental values: a single isolated neuron has τ ~ 100 ms, while the integrator circuit holds activity for ~10 s or more. The synaptic feedback w must therefore be tuned to an accuracy of about 1% (|1 - w| ≲ τ/τ_network ≈ 0.1 s / 10 s).

Need for Fine-Tuning in Linear Feedback Models. Fine-tuned model: external input drives r(t), with the decay -r exactly balanced by the feedback w r. If the feedback is too weak (w r < r), activity leaks away; if too strong (w r > r), activity grows unstably. [Figure: dr/dt vs. r showing the decay line r and feedback line w r, and rate vs. time (sec) for the leaky and unstable cases.]
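
A minimal sketch of the tuning problem (the same leaky rate model, with the feedback weight w mistuned by ±5%; parameters illustrative):

tau, dt = 0.1, 0.001

for w in (0.95, 1.0, 1.05):
    r = 10.0                       # rate set by an earlier input pulse
    for _ in range(2000):          # 2 seconds with no further input
        r += dt / tau * (-r + w * r)
    print(w, round(r, 2))          # 0.95 -> ~3.7 (leak), 1.0 -> 10 (holds),
                                   # 1.05 -> ~27 (unstable growth)

With τ = 100 ms, a 5% mistuning erases or nearly triples the stored rate within 2 seconds, which is why w must be tuned to ~1% to hold activity for ~10 s.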

Geometry of Robustness & Hypothesis for Robustness on Faster Time Scales. 1) Plasticity on slow time scales: reshapes the trough of the energy landscape to make it flat. 2) To control on faster time scales: add ridges to the surface to provide "friction"-like slowing of drift, or fill the attractor with a viscous fluid to slow drift. Course project!

Questions: 1) Are positive feedback loops the only way to perform integration? (the dogma) 2) Could alternative mechanisms describe persistent-activity data?

A working memory task not easily explained by traditional feedback models: 5 neurons recorded during a PFC delay task show distinct temporal response profiles (Batuev et al., 1979, 1994).

Response of Individual Neurons in Line Attractor Networks. All neurons exhibit a similar slow decay, due to the strong coupling that mediates the positive feedback. [Figure: neuronal firing rates and summed output vs. time (sec).] Problem 1: this does not reproduce the differences between neurons seen experimentally! Problem 2: to generate activity stable for 2 seconds (to within +/- 5%) requires a 10-second-long exponential decay.

Feedforward Networks Can Integrate! (Goldman, Neuron, 2009) Simplest example: a chain of neuron clusters that successively filter an input. The summed output is the integral of the input, up to durations ~Nτ for N stages (one can prove this works analytically).
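
A minimal sketch (an N-stage chain of leaky filters, τ dr_i/dt = -r_i + r_{i-1}, with the input x driving the first stage; the summed output y = Σ r_i obeys dy/dt = (x - r_N)/τ, so it integrates the input until activity reaches the final stage, i.e., for durations up to ~Nτ):

import numpy as np

N = 20                            # feedforward stages
tau, dt = 0.1, 0.001              # 100 ms per stage -> memory ~ N*tau = 2 s

r = np.zeros(N)
y = []
for k in range(3000):             # 3 seconds
    t = k * dt
    x = 1.0 if t < 0.05 else 0.0          # brief pulse, integral = 0.05
    drive = np.empty(N)
    drive[0] = x
    drive[1:] = r[:-1]                    # each stage filters the previous one
    r = r + dt / tau * (-r + drive)
    y.append(r.sum())

# summed output holds ~ integral/tau = 0.5, then collapses around t ~ N*tau:
print(round(y[500], 2))           # ~0.5  at t = 0.5 s
print(round(y[1500], 2))          # ~0.44 at t = 1.5 s
print(round(y[2900], 2))          # ~0.02 at t = 2.9 s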

The Same Network Integrates Any Input for a Duration ~Nτ

Improvement in Required Precision of Tuning. Feedback-based line attractor: needs a 10-sec decay to hold 2 sec of activity. Feedforward integrator: a 2-sec decay suffices to hold 2 sec of activity. [Figure: neuronal firing rates and summed output vs. time (sec) for each architecture.]

Feedforward Models Can Fit PFC Recordings. [Figure: line attractor vs. feedforward network fits to the PFC delay-task data.]

Recent data: "time cells" observed in rat hippocampal recordings during a delayed-comparison task show a feedforward progression of activity, as in the feedforward model (Goldman, Neuron, 2009). Data courtesy of H. Eichenbaum; similar to data of Pastalkova et al., Science, 2008, and Harvey et al., Nature, 2012.

Generalization to Coupled Networks: Feedforward Transitions Between Patterns of Activity. A feedforward network can be disguised as a recurrent (coupled) network: map each neuron to a combination of neurons by applying a coordinate rotation matrix R to the connectivity matrix W_ij (the Schur decomposition). Geometric picture: the network is then feedforward between orthogonal activity patterns rather than between individual neurons. (Math of Schur: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008.)
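
A minimal sketch of this mapping (a hypothetical 3-neuron feedforward chain hidden by a random rotation; scipy's Schur decomposition recovers a triangular, i.e. functionally feedforward, interaction matrix between activity patterns):

import numpy as np
from scipy.linalg import schur

# a 3-stage feedforward chain: stage 1 -> stage 2 -> stage 3
T_ff = np.array([[0.0, 0.0, 0.0],
                 [2.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random rotation
W = R @ T_ff @ R.T                 # looks fully recurrent, neuron to neuron

T_rec, Q = schur(W)                # W = Q @ T_rec @ Q.T, T_rec triangular
print(np.round(T_rec, 3))          # triangular: feedforward between patterns
print(np.allclose(Q @ T_rec @ Q.T, W))   # True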

Responses of Functionally Feedforward Networks. [Figure: effect of stimulating activity pattern 1; activity progresses through the feedforward sequence of patterns, while individual neuronal firing rates show diverse time courses.]

Math Puzzle: Eigenvalue analysis does not predict the long time scale of the response! Line attractor networks have an eigenvalue at λ = 1: a persistent mode. Feedforward networks produce equally long-lived responses, yet their eigenvalue spectra contain no eigenvalue near 1: no persistent mode??? [Figure: eigenvalue spectra in the complex plane (Re λ, Im λ) and the corresponding neuronal responses.] (Goldman, Neuron, 2009; see also Murphy & Miller, Neuron, 2009; Ganguli & Sompolinsky, PNAS, 2008.)

Math Puzzle: Schur vs. Eigenvector Decompositions

Answer to Math Puzzle: Pseudospectral analysis (Trefethen & Embree, Spectra and Pseudospectra, 2005). Eigenvalues λ satisfy the equation (W - λ1)v = 0 and govern long-time asymptotic behavior. Pseudoeigenvalues λ_ε are the set of all values that satisfy the inequality ||(W - λ_ε 1)v|| < ε for some unit vector v; they govern transient responses and can differ greatly from the eigenvalues when the eigenvectors are highly non-orthogonal (nonnormal matrices). [Figure: black dots: eigenvalues; surrounding contours: boundaries of the sets of pseudoeigenvalues for different values of ε. From the Supplement to Goldman, Neuron, 2009.]
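
Numerically, z is an ε-pseudoeigenvalue of W exactly when the smallest singular value of (W - z·1) falls below ε. A minimal sketch for a hypothetical 5-stage feedforward chain:

import numpy as np

def min_singular(W, z):
    # z is an eps-pseudoeigenvalue of W iff this value is < eps
    A = z * np.eye(len(W)) - W
    return np.linalg.svd(A, compute_uv=False)[-1]

W = np.diag(2.0 * np.ones(4), k=-1)       # chain with weight 2 per stage
print(np.linalg.eigvals(W))               # all essentially 0

for z in (0.0, 0.5, 1.0):
    print(z, round(min_singular(W, z), 4))

Every eigenvalue of the chain is exactly 0, yet points out to z ≈ 1 have tiny minimum singular values (≲ 0.05), so they are ε-pseudoeigenvalues for small ε: the network transiently behaves as if it had a persistent λ ≈ 1 mode.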

Answer to Math Puzzle: Pseudo-eigenvalues. Normal networks: the eigenvalues predict the neuronal responses; a line attractor has an eigenvalue at 1, the persistent mode. Feedforward networks: there is no eigenvalue near 1, but the pseudoeigenvalues extend out to |λ| ≈ 1, so the network transiently acts as if it had a persistent mode. [Figure: eigenvalue and pseudoeigenvalue spectra (Re λ from -1 to 1) and neuronal responses.] (Goldman, Neuron, 2009)

Challenging the Positive Feedback Picture, Part 2: Corrective Feedback Model. Fundamental control theory result: strong negative feedback of a signal produces an output equal to the inverse of the negative-feedback transformation. [Block diagram: input x; the error x - f(y) drives a high-gain element g; the output is y = g(x - f(y)); the feedback path computes f(y).] Equation: y = g(x - f(y)); in the high-gain limit, x - f(y) → 0, so y → f⁻¹(x).

Integration from Negative Derivative Feedback. Apply the same result with the fed-back signal being the derivative of the output, f(y) = τ_d dy/dt: for strong negative feedback, x ≈ τ_d dy/dt, so y ≈ (1/τ_d) ∫ x dt. An integrator!
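
A minimal sketch of this loop (hypothetical gain g = 100 and derivative feedback f(y) = τ_d dy/dt; the loop equation y = g(x - τ_d dy/dt) is solved for dy/dt and integrated):

g, tau_d, dt = 100.0, 0.1, 0.001

y, ys = 0.0, []
for k in range(3000):              # 3 seconds
    t = k * dt
    x = 1.0 if 0.5 < t < 1.0 else 0.0     # 0.5-s input pulse
    dydt = (x - y / g) / tau_d             # from y = g*(x - tau_d*dy/dt)
    y += dt * dydt
    ys.append(y)

print(round(ys[999], 2))           # ~4.9: ramped to ~integral/tau_d = 5
print(round(ys[2999], 2))          # ~4.0: holds, leaking only on the
                                   # slow time scale g*tau_d = 10 s

The output ramps while the input is on and then holds; a larger gain g gives a longer memory without requiring any weight to be tuned near 1.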

Positive vs. Negative-Derivative Feedback. Positive feedback mechanism: the energy landscape is flat when W_pos = 1, so activity persists but can drift freely. Derivative feedback mechanism: any drift in firing rate generates a (+) or (-) corrective signal that opposes the change. [Figure: energy landscape (W_pos = 1) and firing rate vs. time, with corrective signals marked.]

Negative Derivative Feedback Arises Naturally in Balanced Cortical Networks. Derivative feedback arises when: 1) positive feedback is slower than negative feedback (slow excitation, fast inhibition), and 2) excitation & inhibition are balanced. (Lim & Goldman, Nature Neuroscience, in press)
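
A minimal rate-model sketch of this regime (hypothetical parameters: strong balanced feedback w_E = w_I = w routed through a slow excitatory synapse and a fast inhibitory one; not the published spiking model):

tau, tau_E, tau_I = 0.01, 0.10, 0.005   # slow excitation, fast inhibition
w, dt = 50.0, 0.0001                    # balanced: w_E = w_I = w

r = sE = sI = 0.0
rs = []
for k in range(40000):                  # 4 seconds
    t = k * dt
    x = 1.0 if 0.5 < t < 0.6 else 0.0          # brief input pulse
    drdt = (-r + w * (sE - sI) + x) / tau
    sE += dt / tau_E * (-sE + r)               # slow positive feedback
    sI += dt / tau_I * (-sI + r)               # fast negative feedback
    r += dt * drdt
    rs.append(r)

print(round(rs[6000], 4))     # ~0.02 at pulse offset: ~integral/tau_eff
print(round(rs[39000], 4))    # ~0.01 at t = 3.9 s: only a slow leak

Because sE - sI ≈ -(τ_E - τ_I) dr/dt for slowly varying rates, the effective time constant is roughly τ + w(τ_E - τ_I) ≈ 4.8 s here; it lengthens with the overall feedback strength w rather than requiring w to be tuned to within 1% of a critical value.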

Networks Maintain Analog Memory and Integrate their Inputs

Robustness to Loss of Cells or Changes in Intrinsic or Synaptic Gains. Perturbations tested: changes in intrinsic gains, changes in synaptic gains, excitatory cell death, and inhibitory cell death.

Balanced Inputs Lead to Irregular Spiking Across a Graded Range of Persistent Firing Rates. [Figure: spiking model structure; model output (purely derivative feedback); experimental distribution of CVs of interspike intervals (Compte et al., 2003).]

Summary. Short-term memory (~tens of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus. Possible mechanisms: 1) Tuned positive feedback (attractor model). 2) Feedforward network (possibly in disguise). Disadvantage: finite memory lifetime, scaling with the number of feedforward stages. Advantage: a higher-dimensional representation can produce many different temporal response patterns. Math: not well characterized by eigenvalue decomposition; Schur decomposition or pseudospectral analysis is better. 3) Negative derivative feedback. Features: balance of excitation and inhibition, as observed; robust to many natural perturbations; produces the observed irregular firing statistics.

Acknowledgments. Theory (Goldman lab, UCD): Itsaso Olasagasti (USZ), Dimitri Fisher. Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical).