Presentation transcript: "Mechanisms and Models of Persistent Neural Activity: Linear Network Theory"

1 Mechanisms and Models of Persistent Neural Activity: Linear Network Theory
Mark Goldman, Center for Neuroscience, UC Davis

2 Outline
1) Neural mechanisms of integration: linear network theory
2) Critique of traditional models of memory-related activity & integration, and possible remedies

3 Issue: How do neurons accumulate & store signals in working memory?
In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds. [Figure: stimulus, then neuronal activity (firing rates) showing accumulation and then storage (working memory), vs. time.]
Puzzle: most neurons intrinsically have brief memories. A single neuron low-pass filters its synaptic input into a firing rate r with time constant τ_neuron of order tens of ms, far too short to store a signal for seconds.
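To make the puzzle concrete, here is a minimal simulation sketch of a single leaky firing-rate neuron (all parameter values illustrative):

```python
import numpy as np

# Single-neuron firing-rate dynamics: tau * dr/dt = -r + input(t).
# With tau_neuron ~ tens of ms, the response decays almost immediately
# after the stimulus ends, so one neuron cannot store a signal for seconds.
tau = 0.02                  # time constant (20 ms, illustrative)
dt, T = 0.001, 2.0          # 1-ms steps, 2 s of simulated time
t = np.arange(0, T, dt)
stim = ((t > 0.1) & (t < 0.2)).astype(float)   # brief 100-ms input pulse

r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k-1] + dt * (-r[k-1] + stim[k-1]) / tau

# r rises during the pulse and relaxes back to 0 within a few tau afterward,
# illustrating why persistence over ~1-10 s requires a circuit mechanism.
```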

4 Neural Integrator of the Goldfish Eye Movement System
(from lab of David Tank)

5 The Oculomotor Neural Integrator
Eye-velocity-coding command neurons (excitatory and inhibitory) drive the integrator neurons, whose persistent activity stores the running total of the input commands. [Figure: integrator-neuron firing rate and eye position vs. time (1-s scale); "tuning curve" of firing rate vs. eye position. Data from Aksay et al., Nature Neuroscience, 2001.]

6 Network Architecture (Aksay et al., 2000)
Four neuron populations (excitatory & inhibitory on each side of the midline), with recurrent excitation within each side (Wipsi) and recurrent (dis)inhibition between the sides (Wcontra). Inputs: background inputs & eye-movement commands. Outputs: firing rates, plotted against eye position for left-side and right-side neurons.
(Speaker note: note the low-threshold vs. high-threshold neurons; "low-threshold" is defined as active at all eye positions. Don't comment on the model yet.)

7 Standard Model: Network Positive Feedback
Typical story: two sources of positive feedback, 1) recurrent excitation and 2) recurrent (dis)inhibition (Machens et al., Science, 2005). This architecture has also suggested how firing can be maintained in the absence of input: a typical isolated single neuron's firing rate decays with time constant τ_neuron after a command input, whereas a neuron receiving network positive feedback can hold its rate.

8 Many-Neuron Patterns of Activity Represent Eye Position
[Figure: joint activity of 2 neurons; saccades step the activity along a curve.] Eye position is represented by location along a low-dimensional manifold (a "line attractor") (H.S. Seung, D. Lee).

9

10 Line Attractor Picture of the Neural Integrator
Geometrical picture of the eigenvectors, in the (r1, r2) plane: no decay along the direction of the eigenvector with eigenvalue = 1; decay along the directions of eigenvectors with eigenvalue < 1. The result is a "line attractor," or "line of fixed points."
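A minimal sketch of this geometry for a 2-neuron linear network (illustrative weights, chosen so that one eigenvalue is exactly 1):

```python
import numpy as np

# Linear rate dynamics: tau * dr/dt = -r + W r.
# W below has eigenvalue 1 along [1, 1] (no decay: the attractor direction)
# and eigenvalue 0 along [1, -1] (decays with time constant tau).
tau, dt = 0.1, 0.001
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

r = np.array([1.0, 0.2])          # start off the line of fixed points
for _ in range(50000):            # 50 s of simulated time
    r = r + dt * (-r + W @ r) / tau

print(r)   # ~[0.6, 0.6]: the [1,-1] component has decayed away, while the
           # component along the eigenvalue-1 direction persists indefinitely.
```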

11 Next up…
1) A nonlinear network model of the oculomotor integrator, and a brief discussion of Hessians and sensitivity analysis
2) The problem of robustness of persistent activity
3) Some "non-traditional" (non-positive-feedback) models of integration:
a) Functionally feedforward models
b) Negative-derivative feedback models
Note: the issue of noise is skipped; see Lim & Goldman, Neural Computation, and Ganguli, Huh & Sompolinsky, PNAS.
[4) Projects:
a) Cellular mechanisms of persistence & "bistability"; neuromodulators
b) Discretized vs. continuous attractors, and noise: tradeoffs
c) "Integrate-and-forage" model of ant-colony decision making (relates to Pillow's talk on fitting methods & the Baccus synapse model)
d) Learning/fitting attractor models (many possibilities; could relate to Pillow's talk on coupling through spike-history filters, and the potential instability of these GLM methods)]

12 The Oculomotor Neural Integrator (recap)
Eye-velocity-coding command neurons (excitatory and inhibitory) drive the integrator neurons, whose persistent activity stores the running total of the input commands. [Figure: firing rate and eye position vs. time (1-s scale); data from Aksay et al., Nature Neuroscience, 2001.]

13 Network Architecture (Aksay et al., 2000) (recap)
Four neuron populations (excitatory & inhibitory on each side of the midline): recurrent excitation within each side (Wipsi) and recurrent (dis)inhibition between the sides (Wcontra), with background inputs & eye-movement commands as inputs, and firing rates vs. eye position as outputs.

14 Network Model
Wij = weight of the connection from neuron j to neuron i; inputs are burst commands & tonic background inputs. Firing-rate dynamics of each neuron:
\tau \frac{dr_i}{dt} = -r_i + \sum_{j\,\text{same side}} W_{ij}\, s_{\mathrm{exc}}(r_j) - \sum_{j\,\text{opp. side}} W_{ij}\, s_{\mathrm{inh}}(r_j) + T_i + B_i(t)
(left to right: firing-rate change, intrinsic leak, same-side excitation, opposite-side inhibition, background input, burst command input; not shown: a nonlinearity that enforces non-negative rates.)
Coupled nonlinear equations: mathematically intractable?

15 Network Model: Integrator!
Same equations as above; the focus here is on steady-state behavior (the dynamics of the approach to these steady-state values are not shown). For persistent activity, the terms on the right-hand side must sum to 0 at every stored eye position, so that dr_i/dt = 0 and the network holds, and thereby integrates, its inputs. Coupled nonlinear equations: mathematically intractable?
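A two-population caricature of these equations (a minimal sketch with illustrative weights and a threshold-linear synaptic nonlinearity; not the talk's fitted many-neuron model):

```python
import numpy as np

def s(r):                    # threshold-linear synaptic nonlinearity
    return np.maximum(r, 0.0)

tau, dt = 0.1, 0.001
W_ipsi, W_contra, T = 0.75, 0.25, 10.0   # tuned so W_ipsi + W_contra = 1:
                                         # the feedback terms cancel the leak
t = np.arange(0, 6, dt)
burst = np.zeros_like(t)
burst[(t > 1.0) & (t < 1.05)] = +40.0    # rightward eye-movement command
burst[(t > 3.0) & (t < 3.05)] = -40.0    # leftward command

rR, rL = 20.0, 20.0                      # right/left population rates (Hz)
for k in range(len(t)):
    dR = (-rR + W_ipsi*s(rR) - W_contra*s(rL) + T + burst[k]) / tau
    dL = (-rL + W_ipsi*s(rL) - W_contra*s(rR) + T - burst[k]) / tau
    rR, rL = rR + dt*dR, rL + dt*dL
# The difference rR - rL steps to a new value after each burst and then holds:
# the network integrates the commands and stores the result persistently.
```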

16 Fitting the Model
Fitting condition for neuron i: the current f(r_i) needed to maintain firing rate r_i must equal the total excitatory current received, minus the total inhibitory current received, plus the background input; i.e., for each neuron the equation has the form f = W·s + T.
Knowns: the rates r at each eye position (the tuning curves); f(r), known from single-neuron experiments (not shown).
Unknowns: the synaptic weights Wij > 0 and external inputs Ti; the synaptic nonlinearities s_exc(r), s_inh(r).
Assume a form for s_exc,inh(r) ⇒ constrained linear regression for Wij, Ti (data = the rates r_i at the different eye positions).
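The regression step might look like the following sketch (all data are synthetic stand-ins; `lsq_linear` handles the non-negativity constraint on the weights):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Fitting condition for one neuron i:  f(r_i) = sum_j W_ij * s(r_j) + T_i,
# with one equation per eye position.
n_positions, n_presyn = 40, 20
rng = np.random.default_rng(0)
S = rng.uniform(0, 1, (n_positions, n_presyn))  # s(r_j) at each eye position
f_target = rng.uniform(0, 1, n_positions)       # f(r_i) at each eye position

# Design matrix: presynaptic activations plus a constant column for T_i.
A = np.hstack([S, np.ones((n_positions, 1))])

# Constrain the synaptic weights to be non-negative; leave T_i free.
lb = np.concatenate([np.zeros(n_presyn), [-np.inf]])
res = lsq_linear(A, f_target, bounds=(lb, np.inf))
W_i, T_i = res.x[:n_presyn], res.x[-1]
```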

17 Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron
The network integrates its inputs, and all neurons precisely match the tuning-curve data. [Figure: example model-neuron trace, firing rate (Hz) vs. time (sec); gray: raw firing rate (black: smoothed rate); green: perfect integral. Solid lines: experimental tuning curves; boxes: model rates (& variability).]
(Speaker note: we next wanted to know the microcircuit architecture of this network; first, the key experiment constraining that architecture.)

18 Inactivation Experiments Suggest the Presence of a Threshold Process
Experiment: record on one side while inactivating the other, thereby removing its inhibition. Activity is stable at high firing rates but drifts at low rates; the model reproduces this. Persistence is maintained at high firing rates, which occur precisely when the inactivated side would have been at low rates. This suggests that such low rates are below a threshold for contributing to the circuit.

19 Mechanism for Generating Persistent Activity
Network activity when the eyes are directed rightward: [figure: left-side vs. right-side activity].
Implications:
- The only positive feedback LOOP is due to recurrent excitation: mutual excitatory positive feedback, or possibly intrinsic cellular processes that kick in only at high rates.
- Due to the thresholds, there is no mutual-inhibitory feedback loop.
Excitation, not inhibition, maintains persistent activity! Inhibition is anatomically recurrent, but functionally feedforward.

20 Model Constrains the Possible Form of Synaptic Nonlinearities…
[Figure: a 2-parameter set of tested synaptic nonlinearities s(r), and the best-fit performance for each.] Networks with saturating synapses can't fit the data for any choice of weights Wij. (Fisher et al., Neuron, 2013)

21 …But Many Very Different Networks Give Near-Perfect Model Performance
Circuits with very different connectivity (e.g., global excitation vs. local excitation) show nearly identical performance. [Figure: right/left excitatory & inhibitory connectivity for the two circuits, and right-side/left-side performance; both circuits use the same synaptic nonlinearity.]

22 Sensitivity Analysis: Which Features of the Connectivity Are Most Critical?
The curvature of the cost function is described by the "Hessian" matrix of second derivatives, H_{kl} = ∂²C/∂W_k ∂W_l. [Figure: cost-function surface C(W1, W2), with a sensitive direction (high curvature) and an insensitive direction (low curvature).]
- Diagonal elements: sensitivity to varying a single parameter.
- Off-diagonal elements: interactions.

23 Sensitivity Analysis: Which Features of the Connectivity Are Most Critical? (continued)
- Diagonal elements: sensitivity to varying a single parameter.
- Off-diagonal elements: interactions between pairs of parameters.
- Eigenvectors/eigenvalues: identify the patterns of weight changes to which the network is most sensitive. [Figure: Hessian eigenvalue spectrum; the 3 most important components stand out.]
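A numerical sketch of this analysis on a toy two-parameter cost (the stiff/sloppy structure is illustrative, standing in for the model-fitting cost):

```python
import numpy as np

# Toy cost over two "weights": stiff along [1,1] (changing net feedback),
# sloppy along [1,-1] (offsetting weight changes).
def cost(w):
    return (w[0] + w[1] - 1.0)**2 + 1e-3 * (w[0] - w[1])**2

def numerical_hessian(f, w0, eps=1e-4):
    n = len(w0)
    I = np.eye(n)
    H = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            # central second difference for d^2 f / (dw_k dw_l)
            H[k, l] = (f(w0 + eps*(I[k] + I[l])) - f(w0 + eps*(I[k] - I[l]))
                       - f(w0 + eps*(-I[k] + I[l])) + f(w0 - eps*(I[k] + I[l]))
                       ) / (4 * eps**2)
    return H

w_fit = np.array([0.5, 0.5])            # a (toy) best-fit point
evals, evecs = np.linalg.eigh(numerical_hessian(cost, w_fit))
# The eigenvector with the largest eigenvalue (~[1,1]/sqrt(2)) is the
# sensitive pattern; the smallest (~[1,-1]/sqrt(2)) is the insensitive one.
```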

24 Sensitive & Insensitive Directions in the Connectivity Matrix
Sensitive directions (of the model-fitting cost function):
- Eigenvector 1: make all connections more excitatory (more exc., less inh.).
- Eigenvector 2: strengthen excitation & inhibition together.
- Eigenvector 3: vary high- vs. low-threshold neurons.
Insensitive direction:
- Eigenvector 10: offsetting changes in weights.
[Figure: perturbations of the excitatory and inhibitory weights (right-side averages) along each eigenvector, shown for one example network.] (Fisher et al., Neuron, 2013)

25 Diversity of Solutions: Example Circuits Differing Only in Insensitive Components
Two circuits with different connectivity differ only in their insensitive eigenvectors, yet show near-identical performance. [Figure: the two connectivity matrices and their right-side/left-side performance.]

26 Issue: Robustness of the Integrator
Integrator equation: \tau \frac{dr}{dt} = -r + w\,r + I(t), with synaptic feedback w; the effective network time constant is \tau_{\text{eff}} = \tau/(1-w).
Experimental values: a single isolated neuron has \tau of order tens of ms, while the integrator circuit holds activity for ~10 s. ⇒ The synaptic feedback w must be tuned to an accuracy of order \tau/\tau_{\text{eff}}, roughly 1% or better.

27 Need for Fine-Tuning in Linear Feedback Models
Fine-tuned model: \tau \frac{dr}{dt} = -r + w\,r + I(t) (external input I, decay -r, feedback wr). [Figure: dr/dt vs. r for the decay and feedback terms, and rate vs. time (sec), for the two mistuned cases.] If the feedback is too weak (w < 1), the behavior is leaky; if too strong (w > 1), it is unstable.
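A minimal simulation sketch of the fine-tuning problem (illustrative values):

```python
import numpy as np

# tau * dr/dt = -r + w*r + I(t): mistuning w by 2% makes the memory
# leaky (w < 1) or unstable (w > 1); only w = 1 integrates and holds.
tau, dt = 0.1, 0.001
t = np.arange(0, 10, dt)
I = np.where((t > 0.5) & (t < 0.6), 10.0, 0.0)     # brief input pulse

for w in (0.98, 1.00, 1.02):
    r, trace = 0.0, []
    for k in range(len(t)):
        r += dt * (-r + w*r + I[k]) / tau
        trace.append(r)
    print(w, trace[-1])
# w = 1.00: the pulse is integrated and held as a persistent plateau.
# w = 0.98: the plateau decays with tau_eff = tau/(1-w) = 5 s (leaky).
# w = 1.02: the activity grows exponentially (unstable).
```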

28 Geometry of Robustness & a Hypothesis for Robustness on Faster Time Scales
1) Plasticity on slow time scales: reshapes the trough to make it flat.
2) To control on faster time scales: add ridges to the surface to add "friction"-like slowing of drift, OR fill the attractor with a viscous fluid to slow the drift (Koulakov et al., 2002; Goldman et al., 2003).

29 Questions
1) Are positive feedback loops the only way to perform integration? (the dogma)
2) Could alternative mechanisms describe persistent-activity data?

30 Challenging the Positive Feedback Picture, Part 2: Corrective Feedback Model
Fundamental control-theory result: strong negative feedback of a signal produces an output equal to the inverse of the negative-feedback transformation. [Block diagram: input x; a subtraction node forms x − f(y); gain g; output y = g(x − f(y)); the feedback path computes f(y).]
Equation: y = g(x − f(y)), so for large gain g, f(y) → x and y → f⁻¹(x).

31 Integration from Negative Derivative Feedback
Apply the same result with derivative feedback, f(y) = τ dy/dt: for large g, τ dy/dt → x, so y tends to the integral of x. An integrator!
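Spelling out the block-diagram algebra (a reconstruction consistent with the slide's claim):

```latex
% Strong negative feedback inverts the feedback transformation:
y = g\bigl(x - f(y)\bigr)
\;\Longrightarrow\;
f(y) = x - \tfrac{y}{g}
\;\xrightarrow{\,g \to \infty\,}\;
f(y) = x
\;\Longrightarrow\;
y = f^{-1}(x).
% With derivative feedback, f(y) = \tau\,dy/dt:
\tau \frac{dy}{dt} = x
\quad\Longrightarrow\quad
y(t) = \frac{1}{\tau}\int_0^{t} x(t')\,dt' ,
% i.e., the output is the running integral of the input.
```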

32 Persistent Activity from Negative-Derivative Feedback
Math: strong negative feedback of the rate of change dr/dt opposes any drift in the firing rate. [Picture: firing rate vs. time; an upward drift elicits a (−) corrective signal, a downward drift a (+) corrective signal, holding the rate steady.]

33 Negative Derivative Feedback Arises Naturally in Balanced Cortical Networks
Derivative feedback arises when:
1) Positive feedback is slower than negative feedback (slow excitation, fast inhibition)
2) Excitation & inhibition are balanced
(Lim & Goldman, Nature Neuroscience, in press)
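A minimal rate-model sketch of this mechanism (illustrative form and parameters, not the paper's model): slow excitatory and fast inhibitory feedback with equal weights combine into a term proportional to −dr/dt.

```python
import numpy as np

# tau_r * dr/dt  = -r + w*(sE - sI) + input(t)
# tau_E * dsE/dt = -sE + r      (slow positive feedback)
# tau_I * dsI/dt = -sI + r      (fast negative feedback)
# With balanced weights w, the net feedback w*(sE - sI) approximates
# -w*(tau_E - tau_I)*dr/dt: negative-derivative feedback, giving a long
# effective time constant tau_r + w*(tau_E - tau_I).
tau_r, tau_E, tau_I = 0.01, 0.10, 0.01
w, dt = 50.0, 1e-4
t = np.arange(0, 4, dt)
inp = np.where((t > 0.5) & (t < 0.6), 5.0, 0.0)

r = sE = sI = 0.0
for k in range(len(t)):
    r  += dt * (-r + w*(sE - sI) + inp[k]) / tau_r
    sE += dt * (-sE + r) / tau_E
    sI += dt * (-sI + r) / tau_I
print(r)
# The pulse is integrated into a rate step that persists for seconds
# (tau_eff = 0.01 + 50*0.09 = 4.5 s here). Unlike tuned positive feedback,
# mistuning w only rescales tau_eff; it does not create runaway instability.
```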


35 Networks Maintain Analog Memory and Integrate their Inputs

36 Robustness to Loss of Cells or to Changes in Intrinsic or Synaptic Gains
Perturbations tested: changes in intrinsic gains; changes in synaptic gains; excitatory cell death; inhibitory cell death.

37 Balanced Inputs Lead to Irregular Spiking Across a Graded Range of Persistent Firing Rates
[Figure: spiking-model structure; model output (purely derivative feedback) vs. time (sec); experimental distribution of CVs of interspike intervals (Compte et al., 2003).] Lim & Goldman (2013); see also Boerlin et al. (2013).

38 A Working-Memory Task Not Easily Explained by Traditional Feedback Models
Five neurons recorded during a PFC delay task (Batuev et al., 1979, 1994): [figure].

39 Response of Individual Neurons in Line-Attractor Networks
All neurons exhibit a similar slow decay, due to the strong coupling that mediates the positive feedback. [Figure: neuronal firing rates and summed output vs. time (sec).]
Problem 1: this does not reproduce the differences between neurons seen experimentally!
Problem 2: to generate stable activity for 2 seconds (±5%) requires a 10-second-long exponential decay.

40 Feedforward Networks Can Integrate!
(Goldman, Neuron, 2009) Simplest example: Chain of neuron clusters that successively filter an input

41 Feedforward Networks Can Integrate!
(Goldman, Neuron, 2009) Simplest example: a chain of neuron clusters that successively filter an input. The summed output of the chain gives the integral of the input (up to durations ~Nτ), and one can prove this works analytically.
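A minimal sketch of the chain (illustrative parameters): each cluster low-pass filters the one before it, and a summed readout tracks the integral for times up to ~Nτ.

```python
import numpy as np

# Chain of N first-order filters: tau * dr_k/dt = -r_k + r_{k-1}, r_0 = input.
# The readout tau * sum_k r_k approximates the integral of the input until
# the activity wave reaches the end of the chain, i.e. for t up to ~N*tau.
N, tau, dt = 20, 0.1, 0.001
t = np.arange(0, 3, dt)
inp = np.where((t > 0.2) & (t < 0.4), 1.0, 0.0)   # input pulse, integral = 0.2

r = np.zeros(N)
readout = np.zeros(len(t))
for k in range(len(t)):
    drive = np.concatenate(([inp[k]], r[:-1]))    # stage k-1 drives stage k
    r = r + dt * (-r + drive) / tau
    readout[k] = tau * r.sum()

true_integral = np.cumsum(inp) * dt
# readout tracks true_integral (~0.2 after the pulse) for ~N*tau = 2 s,
# after which the stored value decays as activity falls off the chain's end.
```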

42 Same Network Integrates Any Input for ~Nτ

43 Improvement in Required Precision of Tuning
Feedback-based line attractor: requires a 10-sec decay to hold 2 sec of activity. Feedforward integrator: requires only a 2-sec decay to hold 2 sec of activity. [Figure: neuronal firing rates and summed output vs. time (sec) for the two networks.]

44 Feedforward Models Can Fit PFC Recordings
[Figure: fits of a line attractor vs. a feedforward network to the recordings.]

45 Recent Data: "Time Cells" Observed in Rat Hippocampal Recordings During a Delayed-Comparison Task
MacDonald et al., Neuron, 2011: a feedforward progression of activity (cf. Goldman, Neuron, 2009). (See also similar data during spatial-navigation memory tasks: Pastalkova et al., 2008; Harvey et al., 2012.)

46 Generalization to Coupled Networks: Feedforward Transitions Between Patterns of Activity
Map each neuron of a feedforward network onto a combination of neurons in a recurrent (coupled) network by applying a coordinate rotation matrix R (Schur decomposition). [Figure: connectivity matrix Wij and geometric picture for the feedforward vs. recurrent networks.] (Math of the Schur decomposition: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008.)
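A small numerical sketch of this equivalence (illustrative 5-neuron chain; the rotation R is an arbitrary orthogonal matrix):

```python
import numpy as np
from scipy.linalg import schur

# A recurrent-looking network that is feedforward in disguise: take a chain
# (strictly lower-triangular W_ff), rotate coordinates with an orthogonal R,
# and obtain a dense connectivity W = R W_ff R^T with identical dynamics.
N = 5
W_ff = np.diag(np.ones(N - 1), k=-1)             # stage k drives stage k+1

rng = np.random.default_rng(1)
R, _ = np.linalg.qr(rng.standard_normal((N, N))) # random orthogonal matrix
W = R @ W_ff @ R.T                               # dense, "recurrent" matrix

T, Z = schur(W)   # the Schur form recovers an (essentially) triangular,
                  # functionally feedforward structure hidden inside W

# Simulate both networks; states match under the rotation at every step.
dt, tau = 0.001, 0.1
x_ff = np.zeros(N); x_ff[0] = 1.0                # activity starts in stage 1
x = R @ x_ff                                     # same state, rotated basis
for _ in range(2000):
    x_ff = x_ff + dt * (-x_ff + W_ff @ x_ff) / tau
    x = x + dt * (-x + W @ x) / tau
print(np.allclose(x, R @ x_ff))                  # -> True
```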

47 Responses of Functionally Feedforward Networks
[Figure: feedforward transitions through activity patterns; activity patterns & neuronal firing rates; effect of stimulating pattern 1.]

48 Math Puzzle: Eigenvalue Analysis Does Not Predict the Long Time Scale of the Response!
Line-attractor networks: the eigenvalue spectrum (Im(λ) vs. Re(λ)) contains an eigenvalue at 1, a persistent mode, and the neuronal responses persist accordingly. Feedforward networks: no eigenvalue at 1, hence no persistent mode—yet the responses are long-lived??? (Goldman, Neuron, 2009; see also Murphy & Miller, Neuron, 2009; Ganguli & Sompolinsky, PNAS, 2008.)
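A sketch of the puzzle: the two matrices below have identical eigenvalues (all zero), yet very different response durations (illustrative sizes and weights):

```python
import numpy as np

# Dynamics: tau * dx/dt = -x + W x. Both W's have every eigenvalue at 0,
# so eigenvalue analysis predicts pure decay at rate 1/tau for both.
tau, dt = 0.1, 0.001
W_normal = np.zeros((5, 5))                   # normal matrix
W_chain = 2.0 * np.diag(np.ones(4), k=-1)     # nonnormal feedforward chain

for name, W in [("normal", W_normal), ("chain", W_chain)]:
    x = np.zeros(5); x[0] = 1.0
    norms = []
    for _ in range(3000):                     # 3 s of simulated time
        x = x + dt * (-x + W @ x) / tau
        norms.append(np.linalg.norm(x))
    print(name, max(norms))
# The normal network simply decays (peak norm 1), but the chain's activity
# is transiently amplified (peak norm ~3) and long-lived: the long time
# scale comes from the nonnormal structure, which eigenvalues do not reveal.
```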

49 Math Puzzle: Schur vs. Eigenvector Decompositions

50 Answer to Math Puzzle: Pseudospectral Analysis
(Trefethen & Embree, Spectra & Pseudospectra, 2005)
Eigenvalues λ: satisfy the equation (W − λ1)v = 0; govern the long-time asymptotic behavior.
Pseudoeigenvalues λ_ε: the set of all values λ_ε that satisfy the inequality ‖(W − λ_ε 1)v‖ < ε; govern transient responses; can differ greatly from the eigenvalues when the eigenvectors are highly non-orthogonal (nonnormal matrices).
[Figure: black dots, eigenvalues; surrounding contours, color-coded boundaries of the sets of pseudoeigenvalues for different values of ε (from the Supplement to Goldman, Neuron, 2009).]

51 Answer to Math Puzzle: Pseudo-Eigenvalues
Normal networks: the eigenvalues predict the neuronal responses; an eigenvalue at 1 corresponds to a persistent mode. Feedforward networks: no eigenvalue at 1, hence seemingly no persistent mode??? [Figure: eigenvalue spectra (Im(λ) vs. Re(λ)) and neuronal responses.]

52 Answer to Math Puzzle: Pseudo-Eigenvalues (continued)
Feedforward networks: the pseudoeigenvalues extend out to 1, so the network transiently acts as if it had a persistent mode. [Figure: pseudoeigenvalue contours spanning the region from −1 to 1 in the complex plane.] (Goldman, Neuron, 2009)

53 Summary
Short-term memory (~10s of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus. Possible mechanisms:
1) Tuned positive feedback (attractor model)
2) Negative-derivative feedback. Features: balance of excitation and inhibition, as observed; robust to many natural perturbations; produces the observed irregular firing statistics.
3) Feedforward network (possibly in disguise). Disadvantage: finite memory lifetime, ~ the number of feedforward stages. Advantage: a higher-dimensional representation that can produce many different temporal response patterns. Math: not well characterized by eigenvalue decomposition; the Schur decomposition or pseudospectral analysis is better.

54 Summary (continued)
Modeling issue: degeneracy of model-fitting solutions.
- Key question: does this model degeneracy reflect a lack of experimental constraints, or patterns of connectivity that may genuinely differ from animal to animal?
- Hypothesis: (some) model degeneracy is real & provides redundancy that allows the system to robustly re-tune itself.

55 Acknowledgments
Theory (Goldman lab, UCD): Itsaso Olasagasti (U. Geneva), Dimitri Fisher (Brain Corp.), Sukbin Lim (U. Chicago)
Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical)

56 Extra Slide(s)

57 Two Possible Threshold Mechanisms Revealed by the Model
Synaptic nonlinearity s(r) & anatomical connectivity Wij for 2 model networks:
- Mechanism 1: synaptic thresholds (in the curve of synaptic activation vs. firing rate).
- Mechanism 2: high-threshold cells dominate the inhibitory connectivity (rather than low-threshold inhibitory neurons).
[Figure: the weight matrix gives the synaptic weight from each presynaptic neuron onto each postsynaptic neuron. Left-side neurons are 1-50, right-side neurons 51-100; the first 25 of each group are excitatory, the second 25 inhibitory. So, e.g., the upper-left block is left-side excitatory neurons onto left-side excitatory neurons, etc.]

