1
Mechanisms and Models of Persistent Neural Activity:
Linear Network Theory
Mark Goldman, Center for Neuroscience, UC Davis
2
Outline (Today & tomorrow)
1) Neural mechanisms of integration: linear network theory
2) Nonlinear integrator networks
3) Critique of traditional models of memory-related activity & integration, and possible remedies
3
Issue: How do neurons accumulate & store signals in working memory?
In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds: an accumulation phase followed by storage (working memory). [Figure: stimulus and neuronal activity (firing rates) vs. time, showing accumulation and storage.]
Puzzle: most neurons intrinsically have brief memories. A single neuron driven by synaptic input responds with an intrinsic time constant τneuron on the order of milliseconds, far too short to store a signal for seconds.
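To make the puzzle concrete, here is a minimal sketch (not from the original slides) of a leaky single-neuron rate model with an assumed ~10 ms time constant:

```python
import numpy as np

# Minimal sketch, assuming a leaky firing-rate model with a single time constant:
#   tau * dr/dt = -r + I(t); after the input turns off, r decays back to 0 in ~tau.
tau = 0.01          # intrinsic time constant, ~10 ms (illustrative value)
dt = 0.001
t = np.arange(0.0, 1.0, dt)
I = np.where(t < 0.1, 1.0, 0.0)   # brief stimulus during the first 100 ms

r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k-1] + dt / tau * (-r[k-1] + I[k-1])

# r rises during the stimulus and decays within tens of milliseconds afterward,
# so an isolated neuron cannot hold a memory for seconds on its own.
```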
4
Neural Integrator of the Goldfish Eye Movement System
(from lab of David Tank)
5
The Oculomotor Neural Integrator
Eye-velocity-coding command neurons (excitatory and inhibitory) project to the integrator neurons, whose persistent activity stores a running total of the input commands. [Figure: integrator-neuron firing rate and eye position vs. time (1 s scale bar); "tuning curve" of firing rate vs. eye position.] (Data from Aksay et al., Nature Neuroscience, 2001; picture of eye from MarinEyes)
6
& eye movement commands
Network Architecture (Aksay et al., 2000) outputs Firing rates: R L Eye Position Left side neurons firing rate 100 R L Eye Position 100 Right side neurons firing rate Wipsi Wcontra inputs midline 4 neuron populations: Inhibitory Excitatory Pay attention to low-threshold neurons vs. high-threshold. Define low-threshold as active for all eye positions. don’t comment on model yet!. Recurrent (dis)inhibition Recurrent excitation inputs background inputs & eye movement commands
7
Standard Model: Network Positive Feedback
Typical story: two sources of network positive feedback (Machens et al., Science, 2005):
1) Recurrent excitation
2) Recurrent (dis)inhibition (mutual inhibition between the two sides)
This architecture also suggests how firing can be maintained in the absence of command input: a typical isolated single neuron's firing rate decays with time constant τneuron after a command input, whereas a neuron receiving network positive feedback can sustain its firing. [Figure: firing rate vs. time for an isolated neuron vs. a neuron receiving network positive feedback.]
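To make the feedback idea concrete, a standard linear-rate sketch (not taken verbatim from the slides; w denotes the net positive-feedback strength):

```latex
\tau \frac{dr}{dt} = -r + w\,r + I(t)
\quad\Longrightarrow\quad
\tau \frac{dr}{dt} = -(1-w)\,r + I(t),
\qquad \tau_{\mathrm{eff}} = \frac{\tau}{1-w}.
```

For w = 1 the leak is exactly cancelled and dr/dt is proportional to I(t), so r(t) is the running integral of the input; for w slightly less than 1 the activity decays slowly, with effective time constant τ/(1-w).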
8
Many-neuron Patterns of Activity Represent Eye Position
[Figure: activity of 2 neurons plotted against each other across fixations, with jumps at each saccade.] Eye position is represented by location along a low-dimensional manifold (a "line attractor") (H.S. Seung, D. Lee).
10
Line Attractor Picture of the Neural Integrator
Geometrical picture of the eigenvectors, in the (r1, r2) plane: there is no decay along the direction of the eigenvector with eigenvalue = 1, and there is decay along the directions of eigenvectors with eigenvalue < 1. The resulting set of stable states is a "line attractor" or "line of fixed points."
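A minimal numerical illustration of this picture (a hypothetical 2-neuron example, not the fitted goldfish network):

```python
import numpy as np

# Hypothetical 2-neuron network, tau * dr/dt = -r + W r.
# An eigenvector of W with eigenvalue 1 is a direction with no decay (the line attractor);
# eigenvectors with eigenvalue < 1 are directions along which activity decays away.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
vals, vecs = np.linalg.eig(W)
print(vals)                          # eigenvalues 1 and 0 (order may vary)
print(vecs[:, np.argmax(vals)])      # eigenvalue-1 eigenvector, along the (1,1) diagonal
```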
11
Next up… 1) A nonlinear network model of the oculomotor integrator
2) The problem of robustness of persistent activity
3) Some "non-traditional" (non-positive-feedback) models of integration:
   a) Negative-derivative feedback models
   [b) Functionally feedforward models; most likely skipped, see Goldman 2009 and Lim & Goldman 2012]
Note: the issue of noise is skipped; see Lim & Goldman, Neural Computation, and Ganguli, Huh & Sompolinsky, PNAS.
12
The Oculomotor Neural Integrator
Eye-velocity-coding command neurons (excitatory and inhibitory) project to the integrator neurons, whose persistent activity stores a running total of the input commands. [Figure: integrator-neuron firing rate and eye position vs. time (1 s scale bar); firing rate vs. eye position.] (Data from Aksay et al., Nature Neuroscience, 2001; picture of eye from MarinEyes)
13
& eye movement commands
Network Architecture (Aksay et al., 2000) outputs Firing rates: R L Eye Position Left side neurons firing rate 100 R L Eye Position 100 Right side neurons firing rate Wipsi Wcontra inputs midline 4 neuron populations: Inhibitory Excitatory Pay attention to low-threshold neurons vs. high-threshold. Define low-threshold as active for all eye positions. don’t comment on model yet!. Recurrent (dis)inhibition Recurrent excitation inputs background inputs & eye movement commands
14
Network Model
[Diagram: left- and right-side populations coupled by same-side weights Wipsi and cross-midline weights Wcontra, receiving burst commands & tonic background inputs and sending outputs.] Wij = weight of the connection from neuron j to neuron i.
Firing rate dynamics of each neuron (rate of change = intrinsic leak + same-side excitation - opposite-side inhibition + background input + burst command input):

τ dri/dt = -ri + Σj W(ipsi)ij s_exc(rj) - Σj W(contra)ij s_inh(rj) + Ti + Bi(t)

(Also, not shown: a nonlinearity that enforces rates ≥ 0.)
Coupled nonlinear equations: mathematically intractable?
15
Network Model: Integrator!
Same firing-rate dynamics as on the previous slide (Wij = weight of the connection from neuron j to neuron i; burst commands & tonic background inputs).
Here we focus on steady-state behavior (not showing the dynamics of the approach to these steady-state values). For persistent activity, dri/dt = 0: the intrinsic leak, same-side excitation, opposite-side inhibition, and background input terms must sum to 0.
Coupled nonlinear equations: mathematically intractable?
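For concreteness, a minimal simulation sketch of these coupled rate equations; all weights, synaptic nonlinearities, and inputs below are illustrative placeholders, not the fitted values of the actual model:

```python
import numpy as np

# Minimal sketch of the coupled rate equations (illustrative parameters only).
N = 10                                   # neurons per side
tau = 0.01                               # intrinsic time constant (s)
W_ipsi = np.full((N, N), 0.10 / N)       # same-side excitatory weights (assumed)
W_contra = np.full((N, N), 0.05 / N)     # opposite-side inhibitory weights (assumed)

def s_exc(r):                            # assumed threshold-linear synaptic activations
    return np.maximum(r - 5.0, 0.0)

def s_inh(r):
    return np.maximum(r - 5.0, 0.0)

def step(rL, rR, T, bL, bR, dt=0.001):
    """One Euler step of the left- and right-side firing-rate vectors."""
    drL = (-rL + W_ipsi @ s_exc(rL) - W_contra @ s_inh(rR) + T + bL) / tau
    drR = (-rR + W_ipsi @ s_exc(rR) - W_contra @ s_inh(rL) + T + bR) / tau
    # rectification enforces rates >= 0 (the nonlinearity mentioned on the slide)
    return np.maximum(rL + dt * drL, 0.0), np.maximum(rR + dt * drR, 0.0)
```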
16
Fitting the Model
Fitting condition for neuron i: the current needed to maintain firing rate ri, f(ri), equals the total excitatory current received minus the total inhibitory current received, plus the background input; for each neuron the equation has the form f = W·s + T.
Knowns:
- the rates ri: known at each eye position (tuning curves)
- f(r): known from single-neuron experiments (not shown)
Unknowns:
- synaptic weights Wij > 0 and external inputs Ti
- synaptic nonlinearities sexc(r), sinh(r)
Procedure: assume a form of sexc,inh(r), then perform constrained linear regression for Wij and Ti (data = the rates ri at different eye positions).
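A rough sketch of this constrained regression step, assuming a nonnegative least-squares solver (the actual fitting procedure used in the study may differ in detail):

```python
import numpy as np
from scipy.optimize import nnls

# Sketch: for neuron i, across K eye positions,
#   f(r_i) = sum_j W_ij * s(r_j) + T_i,   with W_ij >= 0.
# Stacking eye positions as rows gives one nonnegative least-squares problem per neuron.
def fit_neuron(f_target, s_presyn):
    """
    f_target : (K,) current needed to sustain neuron i's rate at each eye position
    s_presyn : (K, N) synaptic activations s(r_j) of the presynaptic neurons
    Returns nonnegative weights W_i (N,) and a background input T_i.
    """
    K = len(f_target)
    A = np.hstack([s_presyn, np.ones((K, 1))])   # last column fits the constant T_i
    x, _ = nnls(A, f_target)                     # note: this sketch also constrains T_i >= 0
    return x[:-1], x[-1]
```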
17
Model Integrates its Inputs and
Reproduces the Tuning Curves of Every Neuron
[Figure: example model-neuron voltage trace; firing rate (Hz) vs. time (sec). Gray: raw firing rate; black: smoothed rate; green: perfect integral. Solid lines: experimental tuning curves; boxes: model rates (& variability).]
The network integrates its inputs, and all neurons precisely match the tuning-curve data. (The top cell receives strong direct saccadic input, hence the pulses at the times of saccades, unlike the lower-left cell.) The next question is what the essential macroscopic structure and microcircuit architecture of this network is; the key experiment constraining that architecture follows.
18
Inactivation Experiments Suggest Presence
of a Threshold Process
Experiment: record from one side of the integrator while inactivating the other side (removing its inhibition). Result: firing is stable at high rates but drifts at low rates. [Figure: firing rate vs. time for experiment and model.]
Persistence is maintained at high firing rates, which occur when the inactivated side would have been firing at low rates. This suggests that such low rates are below a threshold for contributing to the network feedback.
19
Mechanism for generating persistent activity
Network activity when the eyes are directed rightward: [Figure: left-side vs. right-side population activity.]
Implications:
- The only positive feedback loop is due to recurrent excitation (mutual excitatory positive feedback, or possibly intrinsic cellular processes that kick in only at high rates)
- Due to the thresholds, there is no mutual inhibitory feedback loop
Excitation, not inhibition, maintains the persistent activity! Inhibition is anatomically recurrent, but functionally feedforward.
20
Model Constrains the Possible Form of Synaptic Nonlinearities…
[Figure: a 2-parameter family of tested synaptic nonlinearities s(r), and best-fit performance for each nonlinearity.] Networks with saturating synapses cannot fit the data for any choice of weights Wij. (Fisher et al., Neuron, 2013)
21
…But Many Very Different Networks Give Near-Perfect Model Performance
Circuits with very different connectivity (global excitation vs. local excitation) give nearly identical performance. [Figure: connectivity matrices (right/left, exc./inh.) for the two circuits, and the corresponding right-side and left-side fits.] Both of the above use the same synaptic nonlinearity.
22
Sensitivity Analysis:
Which features of the connectivity are most critical? The curvature of the model-fitting cost function is described by the "Hessian" matrix of 2nd derivatives. [Figure: cost-function surface C(W1, W2), with a sensitive direction (high curvature) and an insensitive direction (low curvature).]
- Diagonal elements: sensitivity to varying a single parameter
- Off-diagonal elements: interactions between pairs of parameters
23
Sensitivity Analysis:
Which features of the connectivity are most critical? The curvature of the model-fitting cost function is described by the "Hessian" matrix of 2nd derivatives. [Figure: cost-function surface C(W1, W2), with a sensitive direction (high curvature) and an insensitive direction (low curvature).]
- Diagonal elements: sensitivity to varying a single parameter
- Off-diagonal elements: interactions between pairs of parameters
- Eigenvectors/eigenvalues of the Hessian: identify the patterns of weight changes to which the network is most (and least) sensitive
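A rough numerical sketch of this analysis; cost_fn (the model-fitting cost) and w0 (the best-fit weight vector) are hypothetical placeholders:

```python
import numpy as np

# Rough sketch of the sensitivity analysis: estimate the Hessian of the fitting cost
# around the best-fit weights by central finite differences, then eigen-decompose it.
def hessian(cost_fn, w0, eps=1e-4):
    n = len(w0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def c(di, dj):
                w = w0.copy()
                w[i] += di
                w[j] += dj
                return cost_fn(w)
            # central finite-difference estimate of d2C / dWi dWj
            H[i, j] = (c(eps, eps) - c(eps, -eps) - c(-eps, eps) + c(-eps, -eps)) / (4 * eps**2)
    return H

# vals, vecs = np.linalg.eigh(hessian(cost_fn, w0))
# Large eigenvalues correspond to sensitive weight patterns;
# near-zero eigenvalues correspond to insensitive ("sloppy") patterns.
```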
24
Sensitive & Insensitive Directions in Connectivity Matrix
Sensitive directions (of the model-fitting cost function):
- Eigenvector 1: make all connections more excitatory
- Eigenvector 2: weaken both excitation & inhibition
- Eigenvector 3: vary low- vs. high-threshold neurons
Insensitive direction:
- Eigenvector 10: offsetting changes in weights (e.g. more excitation with less inhibition, or less inhibition with less excitation)
[Figure: effect of perturbing one example network along each eigenvector on the right-side average rate.] (Fisher et al., Neuron, 2013)
25
Diversity of Solutions: Example Circuits Differing Only in Insensitive Components
Two circuits with different connectivity differ only in their insensitive components, yet give near-identical performance. [Figure: log(difference) between the circuits along each component; the 4 most sensitive eigenvectors show exponentially small differences (note the log units).]
Transition: a topographic circuit has local connectivity, which raises the possibility that the network could maintain many different patterns of activation depending on how it is stimulated. To distinguish these possibilities one needs to stimulate the circuit; this was first done naturally by considering two different ways of stimulating it.
26
Issue: Do We Really Have a Line Attractor?
Example firing-rate data for 2 neurons across many fixations: are these data from a line attractor? [Figure: firing rate r2 of neuron 2 vs. firing rate r1 of neuron 1 across fixations.] No! In this example the data came from 2 independent attractors (a plane attractor).
Q: How can we determine this experimentally? A: Drive the neurons to different firing patterns.
27
Context-Dependent Integration of Signals
[Schematic: Source 1 → Target 1, Source 2 → Target 2, Source 3 → Target 3.]
Observation: In different contexts, circuits may need to accumulate signals from different sources and project their outputs to different targets.
Question ("remembering the context"): How can an integrator network keep track of which source(s) of input it is integrating in a given task or context? That is, how does the integrator remember both its accumulated input and the context in which the accumulation occurred, without jumbling up the different sources of input?
(Larval zebrafish; image from the-scientist.com)
28
Context-Dependent Integration of Signals
[Schematic: saccadic, vestibular, and optokinetic sources projecting to their respective targets; (r1, r2) plane showing an eye-position coding axis ("line attractor") with points at E = 0° and E = +20°.]
(Observation and question as on the previous slide.)
Dogma (the "common integrator hypothesis"): the oculomotor integrator is a line attractor that maintains a single representation of eye position, regardless of the input source.
29
Testing the Common Integrator Hypothesis
Experiment (larval zebrafish, circuit-wide optical recordings):
- Observe activity following saccades to different eye positions
- Bring the eyes to the same eye positions with a visual (optokinetic) stimulus
Result: the patterns of activity following saccadic vs. optokinetic input show little correlation! [Figure: saccadic and optokinetic firing trajectories; SVD modes.] (Daie et al., Neuron, 2015)
30
Model with Multiple Attractors Reproduces the Data
[Figure: experiments vs. a model with 2 attractors, and the model prediction.] The data are reproduced if the external inputs have oppositely directed spatial gradients. The model was constructed by targeting inputs to the 4 SVD modes and giving appropriate dynamics to each mode; current work is directly fitting the model to a richer data set.
31
Summary: Possible General Principles for Context-Dependent Processing
(Daie et al., Neuron, 2015)
1) Context is stored in the spatial pattern of responses
2) The parameter value (e.g. eye position) is stored in the amplitude of the spatial pattern
3) Different processing streams are kept distinct because they target different dynamical modes of the network
32
Issue: Robustness of the Integrator
Integrator equation: τ dr/dt = -r + w·r + I(t), where w is the strength of the recurrent synaptic feedback W.
Experimental values: comparing the millisecond-scale time constant of a single isolated neuron with the many-seconds persistence of the integrator circuit implies that the synaptic feedback w must be tuned to an accuracy of roughly the ratio of these time scales, i.e. to within about a percent or better.
33
Need for Fine-Tuning in Linear Feedback Models
Fine-tuned model: dr/dt is set by a decay term (-r) plus a feedback term (wr) plus the external input. [Figure: plots of the decay -r vs. the feedback wr contributions to dr/dt, and the resulting rate vs. time (sec).] If the feedback is slightly weaker than the decay (w < 1) the behavior is leaky; if slightly stronger (w > 1) it is unstable.
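A small simulation sketch of this tuning sensitivity (illustrative parameters; the 10 ms time constant is an assumption):

```python
import numpy as np

# Sketch of the fine-tuning problem: tau * dr/dt = -r + w*r + I(t).
# With tau ~ 10 ms, holding activity steadily for ~10 s needs |1 - w| on the order of 0.1%.
tau, dt = 0.01, 0.001
t = np.arange(0.0, 10.0, dt)
pulse = np.where(t < 0.05, 20.0, 0.0)          # brief command input

for w in (1.0, 0.99, 1.01):                    # tuned, slightly leaky, slightly unstable
    r = np.zeros_like(t)
    for k in range(1, len(t)):
        r[k] = r[k-1] + dt / tau * (-r[k-1] + w * r[k-1] + pulse[k-1])
    print(f"w={w}: r(10 s) = {r[-1]:.2f}")     # w=1 holds ~100; 0.99 decays; 1.01 blows up
```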
34
Geometry of Robustness
& Hypotheses for Robustness on Faster Time Scales 1) Plasticity on slow time scales: Reshapes the trough to make it flat 2) To control on faster time scales: Add ridges to surface to add “friction”-like slowing of drift -OR- Fill attractor with viscous fluid to slow drift (Koulakov et al. 2002; Goldman et al., 2003)
35
Questions:
1) Are positive feedback loops the only way to perform integration? (the dogma)
2) Could alternative mechanisms describe persistent-activity data?
36
Challenging the Positive Feedback Picture: A Corrective Feedback Model
Fundamental control theory result: strong negative feedback of a signal produces an output equal to the inverse of the negative-feedback transformation.
[Block diagram: input x; the error x - f(y) passes through a high-gain element g to give the output y; f(y) is fed back and subtracted from x.]
Equation: y = g·(x - f(y)), so for large gain g, f(y) ≈ x and the output y ≈ f⁻¹(x).
37
Integration from Negative Derivative Feedback
Fundamental control theory result: strong negative feedback of a signal produces an output equal to the inverse of the negative-feedback transformation. If the fed-back signal is the derivative of the output, then the output is the inverse of differentiation, i.e. the integral of the input: an integrator! [Block diagram: input x, high-gain element g, with the derivative of y fed back and subtracted.]
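Spelling out the step the slide implies (a sketch; τd labels the strength of the derivative feedback and is not notation taken from the talk):

```latex
y = g\left(x - \tau_d \frac{dy}{dt}\right)
\;\xrightarrow{\;g \to \infty\;}\;
x - \tau_d \frac{dy}{dt} \to 0
\;\Longrightarrow\;
y(t) \approx \frac{1}{\tau_d}\int^{t} x(t')\,dt' .
```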
38
Persistent Activity from Negative-Derivative Feedback
Math: the feedback term is proportional to -dr/dt, so it opposes any change in the firing rate and slows drift.
Picture: whenever the rate starts to drift downward a (+) corrective signal is generated, and whenever it drifts upward a (-) corrective signal is generated. [Figure: firing rate vs. time with the corrective signals indicated.]
39
Negative derivative feedback arises
naturally in balanced cortical networks
Derivative feedback arises when:
1) Positive feedback (excitation) is slower than negative feedback (inhibition)
2) Excitation & inhibition are balanced
What is a derivative? A difference, with equal strengths, between a signal now and a delayed copy of it: -dr/dt ∝ r(t - Δt) - r(t). Here the first (delayed) term corresponds to the slow excitation/positive feedback, which reflects the past, and the second term corresponds to the fast inhibition/negative feedback, which approximately represents the current time.
(Lim & Goldman, Nature Neuroscience, 2013)
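A minimal simulation sketch of this mechanism; all parameters below are illustrative, not the fitted values of Lim & Goldman (2013):

```python
import numpy as np

# Slow excitation and fast inhibition of equal strength W make the net feedback
# approximate -W*(tau_E - tau_I)*dr/dt, i.e. negative-derivative feedback.
tau_r, tau_E, tau_I, W = 0.01, 0.10, 0.005, 50.0
dt = 1e-4
t = np.arange(0.0, 8.0, dt)
I_ext = np.where(t < 0.1, 10.0, 0.0)         # brief input pulse

r = sE = sI = 0.0
trace = np.empty_like(t)
for k in range(len(t)):
    trace[k] = r
    dr  = (-r + W * sE - W * sI + I_ext[k]) / tau_r
    dsE = (-sE + r) / tau_E                  # slow excitatory synaptic activation
    dsI = (-sI + r) / tau_I                  # fast inhibitory synaptic activation
    r, sE, sI = r + dt * dr, sE + dt * dsE, sI + dt * dsI

# Effective memory time constant ~ tau_r + W*(tau_E - tau_I) ~ 5 s: the rate decays
# only slowly after the pulse ends.
print(trace[int(0.5 / dt)], trace[int(5.0 / dt)])
```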
40
Negative derivative feedback arises
naturally in balanced cortical networks
Derivative feedback arises when:
1) Positive feedback (slow excitation) is slower than negative feedback (fast inhibition)
2) Excitation & inhibition are balanced
(Lim & Goldman, Nature Neuroscience, 2013)
41
Networks Maintain Analog Memory and Integrate their Inputs
42
Robustness to Loss of Cells or Intrinsic or Synaptic Gains
Perturbations tested: changes in intrinsic gains, changes in synaptic gains, excitatory cell death, and inhibitory cell death.
43
Balanced Inputs Lead to Irregular Spiking
Across a Graded Range of Persistent Firing Rates
[Figures: spiking model structure; model output (purely derivative feedback) vs. time (sec); experimental distribution of CVs of interspike intervals (Compte et al., 2003).]
Lim & Goldman (2013); see also Boerlin et al. (2013)
44
Summary: Short-term memory (~10s of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus.
Possible mechanisms:
1) Tuned positive feedback (attractor model)
2) Negative-derivative feedback
   - Features: balance of excitation and inhibition; robust to many natural perturbations; produces the observed irregular firing statistics
Modeling issue: degeneracy of model-fitting solutions
- Key question: does this model degeneracy reflect a lack of experimental constraints, or patterns of connectivity that may differ from animal to animal?
- Hypothesis: (some) model degeneracy is real & provides redundancy that allows the system to robustly re-tune itself
45
Working memory task not easily explained by traditional feedback models
5 neurons recorded during a PFC delay task (Batuev et al., 1979, 1994):
46
Response of Individual Neurons in Line Attractor Networks
All neurons exhibit a similar slow decay, due to the strong coupling that mediates the positive feedback. [Figure: neuronal firing rates and summed output vs. time (sec).]
Problem 1: this does not reproduce the differences between neurons seen experimentally!
Problem 2: generating stable activity for 2 seconds (within ±5%) requires a 10-second-long exponential decay.
47
Feedforward Networks Can Integrate!
(Goldman, Neuron, 2009) Simplest example: Chain of neuron clusters that successively filter an input
48
Feedforward Networks Can Integrate!
(Goldman, Neuron, 2009) Simplest example: a chain of neuron clusters that successively filter an input. The summed activity across the chain equals the integral of the input (up to a duration of roughly Nτ for N stages); this can be proven analytically.
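A minimal simulation of this chain (illustrative parameters), as a sanity check on the claim:

```python
import numpy as np

# Feedforward-chain integrator sketch: each stage is a leaky filter of the previous one,
#   tau * da_k/dt = -a_k + a_{k-1},  with a_0 = input.
# The summed activity of the stages approximates (1/tau) * integral of the input,
# for times up to roughly N*tau.
N, tau, dt = 20, 0.1, 0.001
t = np.arange(0.0, 2.0, dt)
x = np.where((t > 0.2) & (t < 0.4), 1.0, 0.0)      # input pulse of area 0.2

a = np.zeros(N)
summed = np.empty_like(t)
for k in range(len(t)):
    drive = np.concatenate(([x[k]], a[:-1]))        # each stage driven by the one before
    a = a + dt / tau * (-a + drive)
    summed[k] = a.sum()

# After the pulse, summed ~ area/tau = 0.2/0.1 = 2, held until roughly N*tau = 2 s.
print(summed[int(1.0 / dt)])
```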
49
Same Network Integrates Any Input for a Duration ~Nτ
50
Improvement in Required Precision of Tuning
Feedback-based line attractor: a 10-second decay is needed to hold 2 seconds of activity. Feedforward integrator: a 2-second decay suffices to hold 2 seconds of activity. [Figures: neuronal firing rates and summed output vs. time (sec) for each case.]
51
Feedforward Models Can Fit PFC Recordings
[Figure: fits of the PFC recordings by a line-attractor network vs. a feedforward network.]
52
Recent data: “Time cells” observed in rat hippocampal recordings during delayed-comparison task
MacDonald et al., Neuron, 2011: a feedforward progression of activity (cf. Goldman, Neuron, 2009). (See also similar data during spatial-navigation memory tasks: Pastalkova et al. 2008; Harvey et al. 2012.)
53
Generalization to Coupled Networks:
Feedforward transitions between patterns of activity: map each neuron of a feedforward network onto a combination of neurons by applying a coordinate rotation matrix R (Schur decomposition); the result is a recurrent (coupled) network with the same functional dynamics. [Figures: connectivity matrix Wij before and after the rotation; geometric picture.] (Math of the Schur decomposition: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008)
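A small numerical sketch of the idea (hypothetical 6-neuron chain; the rotation matrix is random):

```python
import numpy as np
from scipy.linalg import schur

# A purely feedforward chain "hidden" inside a recurrent-looking network.
# W is the connectivity of a chain (strictly sub-diagonal, i.e. feedforward). Rotating
# coordinates with an orthogonal matrix Q gives W2 = Q W Q.T: dense, recurrent-looking
# connectivity with identical dynamics. The Schur decomposition recovers a triangular
# (functionally feedforward) form.
rng = np.random.default_rng(0)
N = 6
W = np.diag(np.ones(N - 1), k=-1)           # chain: neuron k drives neuron k+1
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
W2 = Q @ W @ Q.T

T, Z = schur(W2)                            # W2 = Z @ T @ Z.T, with T (quasi-)upper-triangular
print(np.abs(np.tril(T, -1)).max())         # small: triangular (feedforward) structure recovered
print(np.abs(np.linalg.eigvals(W2)).max())  # ~0: all eigenvalues are 0, up to numerical error
```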
54
Responses of functionally feedforward networks
activity patterns… Feedforward network activity patterns & neuronal firing rates Effect of stimulating pattern 1:
55
Math Puzzle: Eigenvalue analysis does not predict long time scale of response!
Line attractor networks: the eigenvalue spectrum contains an eigenvalue at 1, a persistent mode, matching the persistent neuronal responses. Feedforward networks: there is no eigenvalue at 1, hence apparently no persistent mode??? Yet the responses are long-lived. [Figures: eigenvalue spectra in the complex plane (Re(λ), Im(λ)) and the corresponding neuronal responses.] (Goldman, Neuron, 2009; see also Murphy & Miller, Neuron, 2009; Ganguli & Sompolinsky, PNAS, 2008)
56
Math Puzzle: Schur vs. Eigenvector Decompositions
57
Answer to Math Puzzle: Pseudospectral analysis
( Trefethen & Embree, Spectra & Pseudospectra, 2005) Eigenvalues l: Pseudoeigenvalues le: Set of all values le that satisfy the inequality: ||(W – le1)v|| <e Govern transient responses Can differ greatly from eigenvalues when eigenvectors are highly non-orthogonal (nonnormal matrices) Satisfy equation: (W – l1)v =0 Govern long-time asymptotic behavior Black dots: eigenvalues; Surrounding contours: colors give boundaries of set of pseudoeigenvals., for different values of e (from Supplement to Goldman, Neuron, 2009)
58
Answer to Math Puzzle: Pseudo-eigenvalues
Normal networks: an eigenvalue at 1 corresponds to a persistent mode, matching the persistent neuronal responses. Feedforward networks: no eigenvalue at 1, hence apparently no persistent mode??? [Figures: eigenvalues in the complex plane (Re(λ), Im(λ)) and the corresponding neuronal responses.]
59
Answer to Math Puzzle: Pseudo-eigenvalues
Normal networks: an eigenvalue at 1 corresponds to the persistent mode. Feedforward networks: the pseudoeigenvalues extend out to about 1, so the network transiently acts as if it had a persistent mode. [Figures: eigenvalues and pseudoeigenvalues in the complex plane (Re(λ) and Im(λ) from -1 to 1), with the corresponding neuronal responses.] (Goldman, Neuron, 2009)
60
Summary: Short-term memory (~10s of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus.
Possible mechanisms:
1) Tuned positive feedback (attractor model)
2) Negative-derivative feedback
   - Features: balance of excitation and inhibition, as observed; robust to many natural perturbations; produces the observed irregular firing statistics
3) Feedforward network (possibly in disguise)
   - Disadvantage: finite memory lifetime, proportional to the number of feedforward stages (~Nτ)
   - Advantage: the higher-dimensional representation can produce many different temporal response patterns
   - Math: not well characterized by an eigenvalue decomposition; the Schur decomposition or pseudospectral analysis is better
62
Acknowledgments
Theory (Goldman lab, UCD): Itsaso Olasagasti (U. Geneva), Dimitri Fisher (Brain Corp.), Sukbin Lim (U. Chicago)
Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical)
63
Extra Slide(s)
64
Two Possible Threshold Mechanisms Revealed by the Model
Synaptic nonlinearity s(r) & anatomical connectivity Wij for 2 model networks:
- Mechanism 1: synaptic thresholds (the synaptic activation remains zero until the presynaptic firing rate exceeds a threshold)
- Mechanism 2: high-threshold cells dominate the inhibitory connectivity
[Figure: synaptic activation vs. firing rate for each mechanism, plus the weight matrices. Each weight matrix gives the synaptic weight from each presynaptic neuron onto each postsynaptic neuron; left-side neurons are 1-50 and right-side neurons follow, with the first 25 of each group excitatory and the second 25 inhibitory, so e.g. the upper-left block is left-side excitatory neurons onto left-side excitatory neurons. Low-threshold inhibitory neurons are indicated.]
65
Multi-Modal Attractor Network Reproduces the Data
[Figure: experiments vs. model; saccadic and optokinetic firing trajectories.] Targeting the external inputs to the observed SVD modes reproduces the data; it simply requires opposing spatial gradients of the external inputs.