
1 Mechanisms and Models of Persistent Neural Activity: Linear Network Theory
Mark Goldman, Center for Neuroscience, UC Davis

2 Outline
- Neural mechanisms of integration: linear network theory
- Critique of traditional models of memory-related activity & integration, and possible remedies

3 Issue: How do neurons accumulate & store signals in working memory?
In many memory & decision-making circuits, neurons accumulate and/or maintain signals for ~1-10 seconds.
[Figure: stimulus vs. neuronal activity (firing rates), showing an accumulation phase followed by storage (working memory) over time]
Puzzle: most neurons intrinsically have brief memories. A synaptic input drives a firing-rate response r that decays away with the neuronal time constant tau_neuron, on the order of milliseconds.
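As a minimal sketch of the puzzle (my own illustrative code, not from the talk; the 20 ms time constant is an assumption), a single leaky rate unit tau dr/dt = -r + I(t) forgets a brief input within a few time constants:

import numpy as np

tau = 0.02          # assumed intrinsic time constant (s), ~20 ms
dt = 0.001          # integration step (s)
t = np.arange(0, 1.0, dt)
I = np.where(t < 0.1, 1.0, 0.0)   # brief input pulse

r = np.zeros_like(t)
for k in range(1, len(t)):
    r[k] = r[k-1] + dt / tau * (-r[k-1] + I[k-1])  # Euler step of tau dr/dt = -r + I

# The rate collapses within ~100 ms of stimulus offset --
# far short of the ~1-10 s needed for working memory.
print(f"rate 50 ms after offset: {r[t.searchsorted(0.15)]:.3f}")   # ~0.08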

4 Neural Integrator of the Goldfish Eye Movement System
Sebastian Seung, Bob Baker, David Tank, Emre Aksay

5 The Oculomotor Neural Integrator
Eye-velocity-coding command neurons (excitatory and inhibitory) drive integrator neurons, whose persistent activity stores the running total of the input commands.
[Figure: command-neuron firing, integrator-neuron firing rate, and eye position vs. time; data from Aksay et al., Nature Neuroscience, 2001]

6 Network Architecture (Aksay et al., 2000)
Four neuron populations: excitatory and inhibitory on each side of the midline. Connections: recurrent excitation within each side, recurrent (dis)inhibition across the midline. Inputs: background inputs & eye movement commands; outputs: firing rates.
[Figure: tuning curves of left-side and right-side neurons (firing rate, up to ~100 Hz, vs. eye position R-L), with connection weights W_ipsi and W_contra]
[Speaker note: Pay attention to low-threshold vs. high-threshold neurons; define low-threshold as active for all eye positions. Don't comment on the model yet.]

7 Standard Model: Network Positive Feedback
The typical story: two sources of network positive feedback have been suggested to explain how firing can be maintained in the absence of input:
1) Recurrent excitation
2) Recurrent (dis)inhibition (Machens et al., Science, 2005)
[Figure: command input to the network; a typical isolated single neuron's firing rate decays with time constant tau_neuron, whereas a neuron receiving network positive feedback maintains its rate]

8 Many-neuron Patterns of Activity Represent Eye Position
[Figure: joint activity of 2 neurons; each saccade steps the activity to a new point along a curve]
Eye position is represented by location along a low-dimensional manifold (a "line attractor") (H.S. Seung, D. Lee).

9 Line Attractor Picture of the Neural Integrator
Geometrical picture of the eigenvectors in the (r1, r2) plane:
- No decay along the direction of the eigenvector with eigenvalue = 1
- Decay along the directions of eigenvectors with eigenvalue < 1
Hence a "line attractor," or "line of fixed points."
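A minimal numerical sketch of this geometry (my own toy example; the weight matrix is hypothetical, chosen so that one eigenvalue is exactly 1):

import numpy as np

# Linear 2-neuron network, tau dr/dt = (W - I) r.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])        # symmetric; eigenvalues 1 and 0

vals, vecs = np.linalg.eig(W)
print("eigenvalues:", vals)        # -> 1.0 (no decay) and 0.0 (decays)

# Along the eigenvalue-1 eigenvector (r1 = r2) there is no decay;
# along the eigenvalue-0 direction activity decays at rate 1/tau.
tau, dt = 0.1, 0.001
r = np.array([1.0, 0.2])           # initial condition off the line
for _ in range(int(2.0 / dt)):
    r = r + dt / tau * ((W - np.eye(2)) @ r)
print("final state:", r)           # converges onto the line r1 = r2 (~[0.6, 0.6])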

10 Outline
1) A nonlinear network model of the oculomotor integrator, and a brief discussion of Hessians and sensitivity analysis
2) The problem of robustness of persistent activity
3) Some "non-traditional" (non-positive-feedback) models of integration:
   a) Functionally feedforward models
   b) Negative-derivative feedback models
[4) Project: finely discretized vs. continuous attractors, and noise: phenomenology & connections to Bartlett's & Bard's talks]

11 The Oculomotor Neural Integrator (repeated from slide 5)
Eye-velocity-coding command neurons (excitatory and inhibitory) drive integrator neurons, whose persistent activity stores the running total of the input commands.
[Figure: command-neuron firing, integrator-neuron firing rate, and eye position vs. time (secs); data from Aksay et al., Nature Neuroscience, 2001]

12 Network Architecture (Aksay et al., 2000) (repeated from slide 6)
Four neuron populations: excitatory and inhibitory on each side of the midline. Connections: recurrent excitation within each side, recurrent (dis)inhibition across the midline. Inputs: background inputs & eye movement commands; outputs: firing rates.
[Figure: tuning curves of left-side and right-side neurons (firing rate, up to ~100 Hz, vs. eye position R-L), with connection weights W_ipsi and W_contra]

13 Network Model
Firing rate dynamics of each neuron: the firing-rate change is the sum of an intrinsic leak, same-side excitation, opposite-side inhibition, background input, and burst-command input:

\tau \frac{dr_i}{dt} = -r_i + \sum_j W^{\mathrm{ipsi}}_{ij}\, s(r_j) - \sum_j W^{\mathrm{contra}}_{ij}\, s(r_j) + T_i + B_i(t)

where W_ij is the weight of the connection from neuron j to neuron i, T_i is the tonic background input, and B_i(t) is the burst-command input. Coupled nonlinear equations: mathematically intractable?

14 Network Model: Integrator!
Same equations; here we focus on steady-state behavior (not the dynamics of the approach to these steady-state values). For persistent activity during fixations, the right-hand-side terms must sum to 0:

0 = -r_i + \sum_j W^{\mathrm{ipsi}}_{ij}\, s(r_j) - \sum_j W^{\mathrm{contra}}_{ij}\, s(r_j) + T_i

Every rate pattern satisfying this balance is a fixed point, so between commands the network holds its value: an integrator, built from coupled nonlinear equations that at first look mathematically intractable.
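A minimal sketch of these equations with one rate unit per side (all parameters are illustrative placeholders, with a linear s(r) for simplicity; this is not the fitted conductance-based model). Tuning w_ipsi + w_contra = 1 makes the left-right difference mode integrate burst commands while the sum mode stays stable:

import numpy as np

tau, dt = 0.1, 0.001
w_ipsi, w_contra = 0.6, 0.4        # assumed weights; note w_ipsi + w_contra = 1
T_bkgd = 1.0                       # tonic background input
t = np.arange(0, 6, dt)
burst = np.zeros_like(t)
burst[(t > 1) & (t < 1.1)] = 2.0   # two rightward saccadic commands
burst[(t > 3) & (t < 3.1)] = 2.0

rL = rR = T_bkgd / 0.8             # start at the symmetric fixed point
pos = []
for k in range(len(t)):
    # tau dr/dt = -r + w_ipsi*s(r_same) - w_contra*s(r_opp) + T + B(t), with s(r) = r
    drL = -rL + w_ipsi * rL - w_contra * rR + T_bkgd + burst[k]
    drR = -rR + w_ipsi * rR - w_contra * rL + T_bkgd - burst[k]
    rL, rR = rL + dt / tau * drL, rR + dt / tau * drR
    pos.append(rL - rR)            # "eye position" ~ left-right difference mode
# pos steps up at each burst and then holds: integration plus storage.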

15 Fitting the Model
A conductance-based model was fit by constructing a cost function that simultaneously enforced:
- intracellular current-injection experiments
- a database of single-neuron tuning curves
- firing-rate drift patterns following focal lesions
During fixations (B = 0, dr/dt = 0), the equation reduces to a steady-state balance of the intrinsic leak, same-side excitation, opposite-side inhibition, and background input.

16 Model Integrates its Inputs and Reproduces the Tuning Curves of Every Neuron
The network integrates its inputs, and all neurons precisely match the tuning-curve data.
[Figure: firing rate (Hz) vs. time (sec); gray: raw firing rate; black: smoothed rate; green: perfect integral. Tuning curves: solid lines: experimental; boxes: model rates (& variability)]
[Speaker note: We next wanted to know the microcircuit architecture of this network. Before showing our results, here is the key experiment constraining that architecture.]

17 Inactivation Experiments Suggest the Presence of a Threshold Process
Experiment: record from one side while inactivating the other, thereby removing inhibition. Both experiment and model show activity that is stable at high firing rates but drifts at low rates.
Interpretation:
- Persistence is maintained at high firing rates.
- These high rates occur when the inactivated side would have been at low rates.
- This suggests that such low rates lie below a threshold for contributing.

18 Two Possible Threshold Mechanisms Revealed by the Model
Synaptic nonlinearity s(r) & anatomical connectivity W_ij for two model networks:
- Mechanism 1: synaptic thresholds (the synaptic-activation vs. firing-rate curve has a threshold)
- Mechanism 2: high-threshold cells dominate the inhibitory connectivity, so low-threshold inhibitory neurons contribute little
[Figure: weight matrices giving the synaptic weight from each presynaptic neuron onto each postsynaptic neuron. Left-side neurons are 1-50, right-side 51-100; the first 25 of each group are excitatory, the second 25 inhibitory. E.g., the upper-left block is left-side excitatory neurons onto left-side excitatory neurons.]

19 Mechanism for Generating Persistent Activity
Network activity when the eyes are directed rightward. Implications:
- The only positive feedback LOOP is due to recurrent excitation: mutual excitatory positive feedback, or possibly intrinsic cellular processes that kick in only at high rates.
- Due to thresholds, there is no mutual inhibitory feedback loop.
Excitation, not inhibition, maintains persistent activity! Inhibition is anatomically recurrent, but functionally feedforward.

20 Sensitivity Analysis: Which Features of the Connectivity Are Most Critical?
The curvature of the cost-function surface is described by the "Hessian" matrix of second derivatives, H_ij = d^2 C / dW_i dW_j:
- diagonal elements: sensitivity to varying a single parameter
- off-diagonal elements: interactions between parameters
[Figure: cost surface C over two weights (W1, W2), with an insensitive direction (low curvature) along the trough and a sensitive direction (high curvature) across it]
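A sketch of the procedure on a made-up two-parameter cost function (the quadratic form is hypothetical; in the actual work the cost came from the data fit). The Hessian's eigendecomposition separates sensitive from insensitive directions:

import numpy as np

def cost(w):
    # Toy cost: steep along w1 + w2, nearly flat along w1 - w2.
    w1, w2 = w
    return 5.0 * (w1 + w2 - 1.0) ** 2 + 0.01 * (w1 - w2) ** 2

def hessian(f, w, eps=1e-4):
    # Central finite-difference estimate of the matrix of 2nd derivatives.
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                       - f(w - ei + ej) + f(w - ei - ej)) / (4 * eps ** 2)
    return H

H = hessian(cost, np.array([0.5, 0.5]))
vals, vecs = np.linalg.eigh(H)
print("curvatures:", vals)    # small -> insensitive, large -> sensitive
print("directions:\n", vecs)  # (w1 - w2) insensitive, (w1 + w2) sensitive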

21 Oculomotor Integrator: Which Features of the Connectivity Are Most Critical?
From the Hessian of the fit's cost function for the weights onto neuron i:
1. Diagonal elements give the sensitivity to mistuning individual weights.
2. The largest principal components give the most sensitive patterns of weight changes; the 3 most important components are shown.

22 Sensitive & Insensitive Directions in the Connectivity Matrix
Sensitive directions (of the model-fitting cost function):
- Eigenvector 1: make all connections more excitatory
- Eigenvector 2: strengthen both excitation & inhibition
- Eigenvector 3: vary high- vs. low-threshold neurons
Insensitive direction:
- Eigenvector 10: offsetting changes in weights
[Figure: perturbations of excitatory (exc) and inhibitory (inh) weights along each eigenvector, shown for one example network]
Having computed the shape of the cost function, we can thus find the most sensitive directions/patterns of synaptic connectivity. (Fisher et al., Neuron, in press)

23 Diversity of Solutions: Example Circuits Differing Only in Insensitive Components
Two circuits with different connectivity can differ only in their insensitive eigenvectors, yet show near-identical performance.

24

25 Issue: Robustness of the Integrator
Integrator equation: \tau \frac{dr}{dt} = -r + w\,r + I(t).
Experimental values: a single isolated neuron has a time constant \tau on the order of milliseconds to tens of milliseconds, whereas the integrator circuit holds activity for seconds.
Consequence: the synaptic feedback w must be tuned to an accuracy of order \tau / \tau_{\mathrm{eff}}.
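Worked out explicitly (standard linear-feedback algebra, consistent with the integrator equation above):

\tau \frac{dr}{dt} = -r + w\,r + I(t)
\;\Longrightarrow\;
\tau \frac{dr}{dt} = -(1-w)\,r + I(t)
\;\Longrightarrow\;
\tau_{\mathrm{eff}} = \frac{\tau}{1-w}.

So, as an illustrative example, stretching a neuronal time constant of tens of milliseconds into a multi-second \tau_{\mathrm{eff}} requires 1 - w \approx \tau / \tau_{\mathrm{eff}}, i.e., tuning w to roughly 1% accuracy or better.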

26 Need for Fine-Tuning in Linear Feedback Models
Fine-tuned model: \tau \frac{dr}{dt} = -r + w\,r + \text{external input}, with the decay term (-r) balanced against the feedback term (+w r).
- Feedback slightly too weak (w < 1): decay exceeds feedback; leaky behavior, and the rate relaxes back toward baseline.
- Feedback slightly too strong (w > 1): feedback exceeds decay; unstable behavior, and the rate grows.
[Figure: dr/dt vs. r, comparing the decay line (r) with the feedback line (w r); rate vs. time (sec) traces for each case]
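The sensitivity can be seen directly in a sketch like the following (my own code; the values of tau and w are illustrative):

import numpy as np

tau, dt = 0.1, 0.001
for w in (0.98, 1.00, 1.02):
    r = 1.0                        # rate loaded by an earlier command
    for _ in np.arange(0, 10, dt):
        r += dt / tau * (-(1 - w) * r)   # tau dr/dt = -(1-w) r
    print(f"w={w:.2f}: rate after 10 s = {r:8.3f}")
# w<1: decays toward 0 (leaky);  w=1: holds exactly (integrator);
# w>1: grows without bound (unstable).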

27 Geometry of Robustness & Hypotheses for Robustness on Faster Time Scales
1) Plasticity on slow time scales: reshape the trough of the energy landscape to make it flat.
2) To control drift on faster time scales, either:
- add ridges to the surface to give a "friction"-like slowing of drift, or
- fill the attractor with a viscous fluid to slow the drift.
(Course project!)

28 Questions
1) Are positive feedback loops the only way to perform integration? (the dogma)
2) Could alternative mechanisms describe persistent-activity data?

29 A Working-Memory Task Not Easily Explained by Traditional Feedback Models
[Figure: 5 neurons recorded during a PFC delay task (Batuev et al., 1979, 1994), each with a different temporal firing profile]

30 Response of Individual Neurons in Line Attractor Networks
All neurons exhibit a similar slow decay, due to the strong coupling that mediates the positive feedback.
[Figure: neuronal firing rates and summed output vs. time (sec)]
Problem 1: this does not reproduce the differences between neurons seen experimentally.
Problem 2: to generate stable activity for 2 seconds (+/- 5%) requires a 10-second-long exponential decay.

31 Feedforward Networks Can Integrate! (Goldman, Neuron, 2009)
Simplest example: a chain of neuron clusters that successively filter an input.

32 Feedforward Networks Can Integrate! (Goldman, Neuron, 2009)
Simplest example: a chain of neuron clusters that successively filter an input. The summed output is the integral of the input, up to a duration ~Nτ (one can prove this works analytically).
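A minimal simulation of the chain (my own sketch in the spirit of the model; N, tau, and the readout scaling are illustrative choices):

import numpy as np

N, tau, dt = 20, 0.1, 0.001
t = np.arange(0, 3, dt)
I = np.where((t > 0.2) & (t < 0.7), 1.0, 0.0)   # input pulse

r = np.zeros(N)                                  # one rate per cluster
summed, acc, integral = [], 0.0, []
for k in range(len(t)):
    drive = np.concatenate(([I[k]], r[:-1]))     # stage i driven by stage i-1
    r = r + dt / tau * (-r + drive)              # each stage low-pass filters
    summed.append(tau * r.sum())                 # scaled summed readout
    acc += I[k] * dt
    integral.append(acc)                         # true integral, for comparison

mid = int(1.5 / dt)
print(summed[mid], integral[mid])   # nearly equal (~0.5) while t < N*tau
# The readout tracks the true integral until the wave of activity
# reaches the end of the chain (~N*tau = 2 s), then decays away.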

33 Same Network Integrates Any Input for ~Nτ

34 Improvement in Required Precision of Tuning
- Feedback-based line attractor: needs a 10-sec decay to hold 2 sec of activity.
- Feedforward integrator: needs only a 2-sec decay to hold 2 sec of activity.
[Figure: neuronal firing rates and summed output vs. time (sec) for each model]

35 Feedforward Models Can Fit PFC Recordings
[Figure: fits of the line attractor model vs. the feedforward network model to the recordings]

36 Recent Data: "Time Cells" Observed in Rat Hippocampal Recordings During a Delayed-Comparison Task
Data courtesy of H. Eichenbaum. [Similar to the data of Pastalkova et al., Science, 2008, and Harvey et al., Nature, 2012]
The sequential activation resembles a feedforward progression of activity (Goldman, Neuron, 2009).

37 Generalization to Coupled Networks: Feedforward Transitions Between Patterns of Activity
A feedforward network can be disguised as a recurrent (coupled) network: map each neuron to a combination of neurons by applying a coordinate rotation matrix R to the connectivity matrix W_ij (Schur decomposition). Geometric picture: the rotated network is feedforward between orthogonal patterns of activity rather than between individual neurons.
(Math of the Schur decomposition: see Goldman, Neuron, 2009; Murphy & Miller, Neuron, 2009; Ganguli et al., PNAS, 2008)
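A sketch of the idea using SciPy's Schur decomposition (the matrices here are hypothetical toys, not fit to any data): build a feedforward chain matrix F, hide it with a random orthogonal rotation, then recover the triangular (feedforward) structure.

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
N = 5
F = 0.5 * np.eye(N) + np.diag(np.ones(N - 1), k=-1)  # leak + feedforward chain

Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthogonal rotation R
W = Q @ F @ Q.T                                   # looks fully recurrent

T, Z = schur(W)                 # W = Z T Z^T, with T (quasi-)upper triangular
print(np.round(T, 2))           # triangular: feedforward between Schur patterns
print(np.allclose(Z @ T @ Z.T, W))   # True: same network, rotated coordinates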

38 Responses of Functionally Feedforward Networks
[Figure: effect of stimulating pattern 1 — activity passes sequentially through the feedforward chain of activity patterns, producing distinct transient time courses in the individual neuronal firing rates]

39 Math Puzzle: Eigenvalue Analysis Does Not Predict the Long Time Scale of the Response!
- Line attractor networks: the eigenvalue spectrum (Imag(λ) vs. Real(λ)) contains one persistent mode at λ = 1, matching the slowly decaying neuronal responses.
- Feedforward networks: the spectrum contains no persistent mode at λ = 1, yet the neuronal responses still show a long time scale???
(Goldman, Neuron, 2009; see also Murphy & Miller, Neuron, 2009; Ganguli & Sompolinsky, PNAS, 2008)

40 Math Puzzle: Schur vs. Eigenvector Decompositions

41 Answer to the Math Puzzle: Pseudospectral Analysis
(Trefethen & Embree, Spectra & Pseudospectra, 2005)
- Eigenvalues λ: satisfy (W - λI)v = 0; they govern the long-time asymptotic behavior.
- Pseudoeigenvalues λ_ε: the set of all values λ_ε that satisfy the inequality ||(W - λ_ε I)v|| < ε for some unit vector v; they govern transient responses, and can differ greatly from the eigenvalues when the eigenvectors are highly non-orthogonal (nonnormal matrices).
[Figure: black dots: eigenvalues; surrounding contours: boundaries of the pseudoeigenvalue sets for different values of ε (from the Supplement to Goldman, Neuron, 2009)]
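A sketch of how such a pseudospectrum can be computed: z is an ε-pseudoeigenvalue of W exactly when the smallest singular value of (W - zI) falls below ε (Trefethen & Embree). The feedforward chain matrix below is an illustrative stand-in:

import numpy as np

N = 50
W = np.diag(np.ones(N - 1), k=-1)        # feedforward chain: eigenvalues all 0

xs = np.linspace(-1.5, 1.5, 61)
ys = np.linspace(-1.5, 1.5, 61)
smin = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        z = x + 1j * y
        # smallest singular value of (W - zI) = 1 / ||(W - zI)^{-1}||
        smin[i, j] = np.linalg.svd(W - z * np.eye(N), compute_uv=False)[-1]

# Contours of smin at levels eps trace the pseudospectra boundaries.
# For a normal matrix, smin at z = 0.9 would equal the distance to the
# spectrum (0.9). For this nonnormal W it is < 1e-2: the network
# transiently behaves as if it had modes out near the unit circle.
print(smin[30, 48])   # sigma_min near z = 0.9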

42 Answer to the Math Puzzle: Pseudo-eigenvalues
- Normal networks: the eigenvalues predict the neuronal responses — one persistent mode at λ = 1.
- Feedforward networks: the eigenvalues alone suggest no persistent mode??? (continued on the next slide)

43 Answer to the Math Puzzle: Pseudo-eigenvalues
- Normal networks: the eigenvalues predict the neuronal responses — one persistent mode at λ = 1.
- Feedforward networks: the pseudoeigenvalues extend out to λ = 1 (contours spanning -1 to 1), so the network transiently acts as if it had a persistent mode.
(Goldman, Neuron, 2009)

44 Challenging the Positive Feedback Picture, Part 2: Corrective Feedback Model
Fundamental control theory result: strong negative feedback of a signal produces an output equal to the inverse of the negative-feedback transformation.
Claim: with input x, gain g, and feedback f(y), the loop output satisfies y = g(x - f(y)); equivalently f(y) = x - y/g, so for large g, f(y) → x and y → f⁻¹(x).

45 Integration from Negative-Derivative Feedback
Apply the same result with the derivative as the feedback transformation: strong negative feedback of dy/dt produces an output equal to the inverse of differentiation — an integrator!
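Worked out (my notation; τ_f is the assumed time constant of the derivative-feedback element):

y = g\left(x - \tau_f \frac{dy}{dt}\right)
\;\Longrightarrow\;
\tau_f \frac{dy}{dt} = x - \frac{y}{g}
\;\xrightarrow{\;g \to \infty\;}\;
\frac{dy}{dt} = \frac{x}{\tau_f}
\;\Longrightarrow\;
y(t) \approx \frac{1}{\tau_f}\int^{t} x(t')\,dt'.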

46 Positive vs. Negative-Derivative Feedback
- Positive-feedback mechanism: the energy landscape is flat only when tuned (W_pos = 1); when mistuned, activity drifts freely along it.
- Derivative-feedback mechanism: any drift in the firing rate generates its own corrective signal — a (+) corrective signal when the rate falls, a (-) corrective signal when it rises.
[Figure: energy landscapes and firing rate vs. time for the two mechanisms]

47 Negative Derivative Feedback Arises Naturally in Balanced Cortical Networks
Derivative feedback arises when:
1) positive feedback is slower than negative feedback (slow excitation, fast inhibition), and
2) excitation & inhibition are balanced.
(Lim & Goldman, Nature Neuroscience, in press)
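A minimal rate-model sketch of this mechanism (in the spirit of the slide; all parameter values are illustrative assumptions). With balanced weights and slow excitation, W(sE - sI) ≈ -W(τE - τI) dr/dt, a negative-derivative feedback that stretches the memory time to roughly τ + W(τE - τI):

import numpy as np

tau, tauE, tauI, W = 0.01, 0.10, 0.01, 100.0   # assumed; balanced E/I weight W
dt = 1e-4
t = np.arange(0, 8, dt)
I = np.where((t > 1.0) & (t < 1.2), 0.5, 0.0)  # input pulse to integrate

r = sE = sI = 0.0
trace = []
for k in range(len(t)):
    dr = (-r + W * sE - W * sI + I[k]) / tau   # balanced excitation/inhibition
    dsE = (-sE + r) / tauE                     # slow excitatory synapse
    dsI = (-sI + r) / tauI                     # fast inhibitory synapse
    r, sE, sI = r + dt * dr, sE + dt * dsE, sI + dt * dsI
    trace.append(r)

i1, i2 = int(1.3 / dt), int(7.0 / dt)
print(trace[i1], trace[i2])
# The rate steps up during the pulse, then decays only on the slow
# timescale ~ tau + W*(tauE - tauI) ~ 9 s, despite tau = 10 ms.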

49 Networks Maintain Analog Memory and Integrate their Inputs

50 Robustness to Loss of Cells or to Changes in Intrinsic or Synaptic Gains
Perturbations tested:
- intrinsic gains
- synaptic gains
- excitatory cell death
- inhibitory cell death

51 Balanced Inputs Lead to Irregular Spiking Across a Graded Range of Persistent Firing Rates
[Figure: spiking model structure; model output (purely derivative feedback); experimental distribution of CVs of interspike intervals (Compte et al., 2003)]

52 Summary
Short-term memory (~10's of seconds) is maintained by persistent neural activity following the offset of a remembered stimulus. Possible mechanisms:
1) Tuned positive feedback (attractor model)
2) Feedforward network (possibly in disguise)
   - Disadvantage: finite memory lifetime, ~ (# of feedforward stages) × τ
   - Advantage: a higher-dimensional representation can produce many different temporal response patterns
   - Math: not well characterized by eigenvalue decomposition; Schur decomposition or pseudospectral analysis is better
3) Negative-derivative feedback
   - Balance of excitation and inhibition, as observed
   - Robust to many natural perturbations
   - Produces the observed irregular firing statistics

53 Acknowledgments
Theory (Goldman lab, UCD): Itsaso Olasagasti (USZ), Dimitri Fisher
Experiments: David Tank (Princeton Univ.), Emre Aksay (Cornell Med.), Guy Major (Cardiff Univ.), Robert Baker (NYU Medical)

