Messy nice stuff & Nice messy stuff
Omri Barak
Collaborators: Larry Abbott, David Sussillo, Misha Tsodyks
Sloan-Swartz, July 12, 2011
Neural representation
Representation of task parameters by a neural population. We know that large populations of neurons are involved, yet we look for, and are inspired by, impressive single neurons.
Case study: delayed vibrotactile discrimination (from Ranulfo Romo's lab).
[Figure: the two vibration frequencies f1 and f2 delivered over time (sec). Romo & Salinas, Nat Rev Neurosci, 2003.]
[Figure: task structure over time (sec): f1, delay, f2, then the decision f1 > f2? (Y/N). Romo & Salinas, Nat Rev Neurosci, 2003.]
Romo task
- Encoding of an analog variable
- Memory of an analog variable
- Arithmetic operation: "f1 - f2"
Romo, Brody, Hernandez, Lemus. Nature 1999.
Machens, Romo, Brody. Science 2005.
Striking tuning properties lead to "simple / low-dimensional" models: "typical" neurons are used to define the model populations.
Existing models
- Machens et al. 2005
- Miller et al. 2006
- Barak et al. 2010
- Miller et al. 2003
Not shown: Verguts; Deco; Singh and Eliasmith 2006.
But… are all cells that good? (Barak et al. 2010; Brody et al. 2003; Jun et al. 2010)
[Figure: single-neuron activity over time (sec), from the pre-stimulus period through the trial, for 10 Hz, 22 Hz, and 34 Hz stimuli.]
Echo state network
Jaeger 2001; Maass et al. 2002; Buonomano and Merzenich 1995.
Echo state network
[Figure: reservoir schematic; unit activation x, rate r, and noise added to the input x.]
N = 1000 / 2000, K = 100 (sparseness), g = 1.5
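As a sketch of what such a reservoir might look like in code, assuming the standard rate-model form dx/dt = -x + J tanh(x) + input + noise (the slide lists only N, K, and g; the time constant, noise level, and the J ~ g/sqrt(K) scaling are assumptions):

```python
import numpy as np

# Sketch of a sparse random reservoir, assuming standard rate dynamics
# dx/dt = -x + J @ tanh(x) + u(t) + noise (only N, K, g come from the slide).
N, K, g = 1000, 100, 1.5          # units, connections per unit, recurrent gain
dt, tau = 0.001, 0.01             # integration step and time constant (assumed)

rng = np.random.default_rng(0)
J = np.zeros((N, N))
for i in range(N):                # each unit receives K random connections
    idx = rng.choice(N, size=K, replace=False)
    J[i, idx] = rng.normal(0.0, g / np.sqrt(K), size=K)

def step(x, u, noise_std=0.05):
    """One Euler step of the reservoir given an external input vector u."""
    r = np.tanh(x)
    dx = (-x + J @ r + u + noise_std * rng.normal(size=N)) * (dt / tau)
    return x + dx
```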
Implementing the Romo task
[Figure: network schematic; inputs f1 and f2, reservoir rates r, trained readout output f.]
Sussillo and Abbott 2009; Jaeger and Haas 2004.
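Readout training in Sussillo and Abbott (2009) uses FORCE learning, a recursive-least-squares rule on the output weights with the output fed back into the network. The sketch below shows the RLS idea only; the run_step callback, the variable names, and the single-output setup are illustrative assumptions, not the exact implementation behind these slides.

```python
import numpy as np

def force_train_readout(run_step, targets, N, alpha=1.0):
    """Train readout weights w so that z = w @ r tracks `targets`.

    run_step(t, z) must advance the (already built) reservoir one step,
    using the fed-back output z, and return the rate vector r. This is a
    recursive-least-squares (FORCE-style) sketch, not the exact code from
    Sussillo & Abbott 2009.
    """
    w = np.zeros(N)
    P = np.eye(N) / alpha            # running estimate of the inverse correlation
    z = 0.0
    for t, f_target in enumerate(targets):
        r = run_step(t, z)           # reservoir rates at this step
        z = w @ r                    # current readout
        e = z - f_target             # error before the weight update
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)      # RLS gain
        P -= np.outer(k, Pr)
        w -= e * k                   # push the readout toward the target
    return w
```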
[Figure: input (f1, f2) and the trained output for example trials.]
[Figure: input (f1, f2), output, and single-unit activity for example trials.]
It works, but…
How does it work? After training, we have a network that is almost a black box.
What is the relation to experimental data?
Hypothesis
Consider the state of the network in 1000-D as the trial evolves.
[Figure: f1 and f2 over time (sec).]
Hypothesis
Focus only on the end of the 2nd stimulus. For each (f1, f2) pair, there is a point in 1000-D space.
Hypothesis
Focus only on the end of the 2nd stimulus. For each (f1, f2) pair, there is a point in 1000-D space, so the set of (f1, f2) pairs traces out a 2-D manifold in the 1000-D space. Can the dynamics (after learning) draw a line through this manifold, separating the f1 > f2 trials from the f1 < f2 trials?
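One way to look at this manifold, as a sketch rather than the analysis actually shown on the slides: collect the network state at the end of f2 for each (f1, f2) pair and project onto a few principal components. The end_states array below is a hypothetical input, one 1000-D state per condition.

```python
import numpy as np

def project_end_states(end_states, n_components=3):
    """PCA-project end-of-f2 network states (conditions x N) to a few dims.

    `end_states` is assumed to hold one state vector per (f1, f2) pair,
    e.g. collected with the reservoir sketch above. Returns the projected
    coordinates, in which the putative 2-D manifold can be inspected.
    """
    centered = end_states - end_states.mean(axis=0)
    # SVD-based PCA: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```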
Dynamics or just fancy readout?
[Figure: distance in state space between the two responses.]
The two responses differ in the network activity itself, not just through the particular readout we chose.
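A minimal sketch of the distance measure, assuming two recorded state trajectories (time by N), one from an f1 > f2 trial and one from an f1 < f2 trial; the trajectory arrays are hypothetical.

```python
import numpy as np

def state_space_distance(traj_a, traj_b):
    """Euclidean distance between two state trajectories at each time point.

    traj_a, traj_b: arrays of shape (T, N), e.g. the reservoir state x(t)
    on an f1 > f2 trial and on an f1 < f2 trial. A distance that stays
    large after f2 indicates that the decision is held in the network
    state itself, not only in the readout.
    """
    return np.linalg.norm(traj_a - traj_b, axis=1)
```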
Saddle point
Searching for a saddle in 1000-D
Vector function: the network dynamics, dx/dt = F(x).
Scalar function: q(x) = ½ |F(x)|², which is zero exactly at fixed points; minimizing q locates candidate fixed points, including saddles.
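A sketch of this search, assuming the reservoir dynamics F(x) = -x + J tanh(x) from the earlier snippet and a generic quasi-Newton minimizer on q(x); the slides do not specify the optimizer, so scipy's L-BFGS-B is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def find_fixed_point(J, x0):
    """Minimize q(x) = 0.5 * ||F(x)||^2 for F(x) = -x + J @ tanh(x).

    Starting from a state x0 sampled along a trajectory, a (near-)zero of q
    is a candidate fixed point: a stable point, a saddle, etc.
    """
    def F(x):
        return -x + J @ np.tanh(x)

    def q(x):
        Fx = F(x)
        return 0.5 * Fx @ Fx

    def grad_q(x):
        # dF/dx = -I + J diag(1 - tanh(x)^2); grad q = (dF/dx)^T F(x)
        Fx = F(x)
        jac = -np.eye(len(x)) + J * (1.0 - np.tanh(x) ** 2)
        return jac.T @ Fx

    res = minimize(q, x0, jac=grad_q, method="L-BFGS-B")
    return res.x, res.fun
```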
Searching for a saddle in 1000D
[Figure: number of unstable eigenvalues vs. distance along the trajectory, and number of unstable eigenvalues vs. norm of the fixed point.]
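Counting unstable directions at a candidate fixed point means linearizing F there and counting eigenvalues with positive real part; the sketch below uses the same assumed dynamics as the earlier snippets.

```python
import numpy as np

def count_unstable_eigenvalues(J, x_fp):
    """Number of unstable directions of F(x) = -x + J @ tanh(x) at x_fp.

    The linearization is dF/dx = -I + J diag(1 - tanh(x_fp)^2); eigenvalues
    with positive real part are unstable. A single unstable direction at a
    saddle gives the line that can separate the two decisions.
    """
    jac = -np.eye(len(x_fp)) + J * (1.0 - np.tanh(x_fp) ** 2)
    eigvals = np.linalg.eigvals(jac)
    return int(np.sum(eigvals.real > 0))
```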
Saddle point
Slightly more realistic
- Positive firing rates
- Avoid a fixed point between trials: introduce a reset signal
- Chaotic activity in the delay period (output = 0)
It works
Nice persistent neurons
[Figure: single-unit activity vs. time, showing persistent responses.]
[Figure: the a1-a2 plane, f1 tuning vs. f2 tuning. Romo and Salinas 2003.]
Problems / predictions
- Reset signal
- Generalization
Reset
- There is a reset (Barak et al. 2010; Churchland et al.)
- There is no reset, and performance shows it (Buonomano et al. 2007)
[Figure: correlation between trials with different frequencies vs. time (sec).]
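The reset question can be probed by correlating the population state across trials that received different frequencies, as a function of time within the trial. The sketch below assumes a hypothetical trials array of recorded trajectories (trials by time by N).

```python
import numpy as np

def across_trial_correlation(trials):
    """Mean correlation of the population state across trial pairs, per time.

    trials: array (n_trials, T, N) of network states from trials with
    different stimulus frequencies. A correlation that returns to ~1 before
    the next trial suggests a reset; one that stays low suggests history
    dependence, as in Buonomano et al. 2007.
    """
    n_trials, T, _ = trials.shape
    corr = np.zeros(T)
    for t in range(T):
        c = np.corrcoef(trials[:, t, :])     # (n_trials, n_trials) at time t
        iu = np.triu_indices(n_trials, k=1)  # distinct trial pairs
        corr[t] = c[iu].mean()
    return corr
```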
Generalization
Interpolation vs. extrapolation
[Figure: the (f1, f2) stimulus plane.]
Extrapolation (DeLosh et al. 1997)
Conclusions
- Response properties of individual neurons can be misleading.
- An echo state network can solve decision-making tasks.
- Dynamical systems analysis can reveal the function of echo state networks.
- We need to find a middle ground.