1
Learning with spikes, and the Unresolved Question in Neuroscience/Complex Systems
Tony Bell
Helen Wills Neuroscience Institute, University of California at Berkeley
2
Learning in real neurons: long-term potentiation and depression (LTP/LTD)
- Bliss & Lomo (1973) discovered associative, input-specific (Hebbian) changes in the sizes of EPSCs: a potential memory mechanism (the memory trace). These were found first in the hippocampus, already known to be implicated in learning and memory.
- LTP follows high-frequency presynaptic stimulation, or low-frequency presynaptic stimulation paired with postsynaptic depolarisation. LTD follows prolonged low-frequency stimulation.
- Levy & Steward (1983) played with the timing of weak and strong inputs from entorhinal cortex to hippocampus, finding LTD when the weak input came after the strong one, and LTP when the strong input came up to 20 ms after the weak one, or simultaneously with it.
- Spike Timing-Dependent Plasticity (STDP): Markram et al. (1997) found a ~10 ms window for the time-dependence of plasticity by manipulating pre- and postsynaptic spike timings (a minimal code sketch of the standard STDP window follows below).
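To make the timing window concrete, here is a minimal sketch of the standard exponential pair-based STDP model. This textbook form is not taken from these slides, and the amplitudes and time constants are illustrative only:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.

    delta_t = t_post - t_pre in ms. Pre-before-post (delta_t > 0) potentiates;
    post-before-pre (delta_t < 0) depresses, with exponentially decaying
    windows on the ~10-20 ms scale reported experimentally.
    """
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# Pre spike 5 ms before the post spike -> LTP; 5 ms after -> LTD
print(stdp_dw(np.array([5.0, -5.0])))
```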
3
Spike Timing-Dependent Plasticity: experimenting with pre- and postsynaptic spike timings at a synapse between a retinal ganglion cell and a tectal cell (Zhang et al., 1998).
4
STDP is different in different neurons. Diverse mechanisms - common objective?? [Figure from Abbott & Nelson.]
5
STDP is different in different neurons. Diverse mechanisms - common objective?? This may be true, but first we had better understand the mechanism, or we will most likely think up a bad theory based on our current prejudices, and it won't have any relevance to biology (which, like the rest of the world, is stranger than we can suppose…).
13
Equation for membrane voltage (the cable equation):
$$ C_m \frac{\partial V}{\partial t} = g_a \frac{\partial^2 V}{\partial x^2} - \sum_k \bar{g}_k\, f_k(t)\,(V - E_k) $$
where C_m is the membrane capacitance, g_a the conductance along the dendrite, ḡ_k the maximum conductance for channel species k, f_k(t) the time-varying fraction of those channels open, and E_k the reversal potential for channel species k (a simulation sketch follows below).
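To show how an equation of this form is integrated in practice, here is a minimal forward-Euler sketch of a single-compartment (space-clamped) version, so the axial term drops out. The channel species, conductances, and pulsed open fraction are illustrative assumptions, not values from the talk:

```python
import numpy as np

# Forward-Euler sketch of a space-clamped membrane:
#   C_m dV/dt = -sum_k gbar_k * f_k(t) * (V - E_k)
C_m = 1.0                                  # membrane capacitance, uF/cm^2
channels = {"leak": (0.3, -65.0),          # (gbar in mS/cm^2, E_k in mV)
            "exc":  (1.0,   0.0)}          # AMPA-like excitatory conductance

def f_open(name, t):
    """Open fraction f_k(t): leak always open; excitation pulsed 10-20 ms."""
    return 1.0 if name == "leak" else (1.0 if 10.0 <= t < 20.0 else 0.0)

dt, T, V = 0.01, 50.0, -65.0               # time step (ms), duration (ms), V (mV)
trace = []
for step in range(int(T / dt)):
    t = step * dt
    I = sum(g * f_open(k, t) * (V - E) for k, (g, E) in channels.items())
    V += dt * (-I) / C_m
    trace.append(V)
print(f"peak V = {max(trace):.1f} mV, final V = {trace[-1]:.1f} mV")
```

While the excitatory channels are open, V is pulled toward a conductance-weighted mixture of the reversal potentials; afterwards it relaxes back to rest.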
14
Equation for ion channel kinetics (a non-linear Markov model):
$$ \frac{d\mathbf{s}}{dt} = A\!\left(V, [\text{ligand}], [\mathrm{Ca}^{2+}]\right)\mathbf{s} $$
where s holds the occupancies of the channel states and the transition-rate matrix A depends on:
- voltage: information from within the cell
- extracellular ligand: information from other cells
- intracellular calcium: information from other molecules, etc.
(A two-state sketch follows below.)
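A minimal sketch of the simplest such kinetic scheme: a two-state (closed ↔ open) channel whose rates depend on voltage. The rate functions are illustrative assumptions; a full model would also let the rates depend on ligand and calcium, as the slide lists:

```python
import numpy as np

def rates(V):
    """Illustrative voltage-dependent transition rates (1/ms)."""
    alpha = 0.1 * np.exp(V / 20.0)      # closed -> open, grows with depolarisation
    beta = 0.1 * np.exp(-V / 20.0)      # open -> closed
    return alpha, beta

def step(p_open, V, dt):
    """Forward-Euler update of dp/dt = alpha(V) * (1 - p) - beta(V) * p."""
    alpha, beta = rates(V)
    return p_open + dt * (alpha * (1.0 - p_open) - beta * p_open)

p, dt = 0.0, 0.01                        # start fully closed; 0.01 ms steps
for _ in range(2000):                    # hold at V = 0 mV for 20 ms
    p = step(p, 0.0, dt)
print(f"open fraction after 20 ms: {p:.3f} (steady state is 0.5 at V = 0)")
```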
15
Can we connect the information-theoretic learning principles we studied yesterday to the biophysical and molecular reality of these processes? Let's give it a go in a simplified model: the Spike Response Model (a sophisticated variant of the ‘integrate-and-fire’ model).
16
Gerstner's SPIKE RESPONSE MODEL:
$$ u_k(t) = \sum_l W_{kl}\, R(t - t_l) $$
HOW DOES ONE SPIKE TIMING AFFECT ANOTHER? Implicit differentiation of the threshold-crossing condition gives
$$ T_{kl} = \frac{\partial t_k}{\partial t_l} = \frac{W_{kl}\, \dot{R}(t_k - t_l)}{\dot{u}_k} $$
(a numerical check of this formula appears below).
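The Jacobian formula can be checked numerically. Below is a sketch assuming an alpha-function response kernel and a simple threshold-crossing firing rule; the kernel, weights, and threshold are illustrative choices, not from the talk:

```python
import numpy as np

# Check T_kl = W_kl * Rdot(t_k - t_l) / udot(t_k) for one SRM neuron.
tau, theta = 5.0, 0.8
W = np.array([0.6, 0.5])                       # weights of two input spikes
R = lambda s: np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)
Rdot = lambda s: np.where(s > 0,
                          (1.0 / tau) * (1.0 - s / tau) * np.exp(1.0 - s / tau),
                          0.0)

def spike_time(t_in, dt=1e-4):
    """First threshold crossing of u(t) = sum_l W_l R(t - t_l)."""
    t = np.arange(0.0, 40.0, dt)
    u = R(t[:, None] - t_in) @ W
    idx = int(np.argmax(u >= theta))
    assert u[idx] >= theta, "no spike fired"
    return t[idx]

t_in = np.array([0.0, 1.0])                    # input spike times (ms)
t_k = spike_time(t_in)
udot = W @ Rdot(t_k - t_in)                    # du/dt at the crossing

l, eps = 0, 1e-2                               # nudge input spike l by eps
t_pert = t_in.copy(); t_pert[l] += eps
numeric = (spike_time(t_pert) - t_k) / eps
analytic = W[l] * Rdot(t_k - t_in[l]) / udot
print(f"numeric dt_k/dt_l = {numeric:.3f}, analytic = {analytic:.3f}")
```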
17
The idea: make output spikes as sensitive as possible to inputs. Assuming a deterministic, feedforward, invertible network:

Maximum Likelihood (x → y through W): try to map the inputs uniformly into the unit hypercube, i.e. drive p(y) → 1, where
$$ p(\mathbf{y}) = \frac{p(\mathbf{x})}{\left|\partial \mathbf{y}/\partial \mathbf{x}\right|} $$

Maximum Spikelihood (t_i → t'_{i'} through W): try to map the input spike trains into independent Poisson processes, where
$$ p(t'_{i'}) = \frac{p(t_i)}{\left|\partial t'/\partial t\right|} $$
18
OBJECTIVE FUNCTIONS FOR RATE AND SPIKING MODELS:

Likelihood (x → y through W):
$$ L(\mathbf{x}) = \log|W| + \sum_i \log q(u_i) $$
The log|W| term says BE NON-LOSSY and USE THE BANDWIDTH; the Σ log q(u_i) term says use all firing rates equally.

Spikelihood (t_i → t'_{i'} through W):
$$ L(t_i) \gtrsim \log|T| + \sum_i \log q(n'_i) $$
Again be non-lossy, and make the spike count Poisson.

(A gradient-ascent sketch for the rate-model objective appears below.)
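For the rate-model objective, ascending the (natural) gradient of L(x) gives the familiar infomax ICA update of Bell & Sejnowski (1995). A minimal sketch, assuming a tanh score function (a super-Gaussian prior q) and illustrative synthetic data:

```python
import numpy as np

# Natural-gradient ascent on L(x) = log|det W| + sum_i log q(u_i), u = W x.
rng = np.random.default_rng(0)
n, T = 3, 20000
S = rng.laplace(size=(n, T))            # independent super-Gaussian sources
A = rng.normal(size=(n, n))             # unknown mixing matrix
X = A @ S                               # observed mixtures

W, lr = np.eye(n), 0.01
for _ in range(300):
    U = W @ X
    Y = np.tanh(U)                      # -d/du log q(u) for q proportional to 1/cosh(u)
    W += lr * (np.eye(n) - (Y @ U.T) / T) @ W   # natural-gradient step
print(np.round(W @ A, 2))               # ~ scaled permutation if unmixing worked
```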
19
THE LEARNING RULE for the objective
$$ L(t_i) \gtrsim \log|T| + \sum_i \log q(n'_i) $$
[The learning-rule equation itself did not survive transcription. Its legend referred to the mean rate, the rate at an input synapse, a sum over spikes from neuron j, and the case ‘when T is a single …’ (truncated in the original).]
20
Simulation results: coincidence detection (demultiplexing). A 9×9 network extracts independent point processes from correlated ones. [Figure: original spike trains, multiplexed spike trains, and demultiplexed spike trains plotted against time (ms), alongside the mixing matrix and the learned unmixing matrix.]
21
[Figure: original vs. demultiplexed spike trains, and the matrix product Unmixing × Mixing = Identity.]
22
Compare with STDP (Froemke & Dan, Nature 2002; Bell & Parra, NIPS 17). The Spike Response Model is causal: it only takes into account how output spikes talk about past input spikes. But real STDP also has a predictive component: spikes also talk about future spikes. Postsynaptic calcium integrates this information (Zucker 1998), both causal (NMDA channels → CaM kinase) and predictive (L-channels → calcineurin)(?). [Diagram: causal and predictive timing relations between IN and OUT spikes.]
23
Problems with this spikelihood model:
- it requires a non-lossy map [t, i]_in → [t, i]_out (which we enforced…)
- learning is (horrendously) non-local
- the model does not match STDP curves
- the model ignores predictive information
- information only flows from synapse to soma, and not back down
24
By infomaxing from input spikes to output spikes, we are ignoring the information that flows from output spikes (and elsewhere in the dendrites) back down to where the input information came from - the site of learning: the protein/calcium machinery at postsynaptic densities, where the plasticity calculation actually takes place. What happens if you include this in your Jacobian? Then the Jacobian between all spike timings becomes the sum total of all intradendritic causalities. And spikes are talking to synapses, not to other spikes. This is a massively overcomplete inter-level information flow (there are roughly 1000 times as many synaptic events as neural events). What kind of density estimation scheme do we then have?
25
The Within models and creates the Between: i.e., what is inside the cells (timings, voltage, calcium) models and creates what is between the cells (spikes).
26
Post-synaptic machinery (the site of learning) integrates incoming spike information with the global cell state. Ca²⁺ converts timing and voltage information into molecular change. [Diagram of a synapse on a dendrite, labelled: neurotransmitter (glutamate), AMPA channel, NMDA channel, voltage-dependent L-channel, Ca²⁺, endoplasmic reticulum, protein machinery, and a vesicle with glutamate receptors being trafficked to the plasma membrane.]
27
Networks within networks. [Diagram labels: network of 2 agents; network of neurons; network of macromolecules; network of protein complexes (e.g. synapses); 1 brain; 1 cell.]
28
A Multi-Level View of Learning. LEARNING at a LEVEL is CHANGE IN INTERACTIONS between its UNITS, implemented by INTERACTIONS at the LEVEL beneath, and by extension resulting in CHANGE IN LEARNING at the LEVEL above. Separation of timescales (interactions are fast, learning is slow, and timescale increases going up the table) allows INTERACTIONS at one LEVEL to be LEARNING at the LEVEL above.

LEVEL | UNIT | INTERACTIONS | LEARNING
ecology | society | predation, symbiosis | natural selection
society | organism | behaviour | sensory-motor learning
organism | cell | spikes | synaptic plasticity (= STDP)
cell | synapse | voltage, Ca | bulk molecular changes
synapse | protein | direct, V, Ca | molecular changes
protein | amino acid | molecular forces | gene expression, protein recycling
29
Advantages:
- A closed system can model itself (sleep, thought…).
- World modeling is not done directly. Rather, it occurs as a side-effect of self-modeling. The world is a ‘boundary condition’ on this modeling, imposed by the level above - by the social level.
- The variables which form the probability model are explicitly located at the level beneath the level being modeled.
- Generalising to molecular and social networks suggests that gene expression and reward-based social agency may just be other forms of inter-level density estimation.
30
Does the ‘standard model’ really suffice? [Diagram of the standard model: Retina → Thalamus → V1 → ‘V whatever’ → Decision → Action, with Reinforcement arriving from ‘Eh.. somewhere else’.]
31
Does the ‘standard model’ really suffice? [Same diagram as the previous slide.] Or is it ‘levels-chauvinism’?
32
The standard (or rather the slightly-more-emerged) neurostatistical model, as articulated by Emo Todorov:

The emerging computational theory of perception is Bayesian inference. It postulates that the sensory system combines a prior probability over possible states of the world with a likelihood that the observed sensory data was caused by each possible state, and computes a posterior probability over the states of the world given the sensory data.

The emerging computational theory of movement is stochastic optimal control. It postulates that the motor system combines a utility function quantifying the goodness of each possible outcome with a dynamics model of how outcomes are caused by control sequences, and computes a control law (a state-control mapping) which optimizes expected utility.

But we haven't yet seen what unsupervised models may do when they are involved in sensory-motor loops. They may sidestep common criticisms of feedforward unsupervised theories…
33
Two ways of applying infomax:

1. Infomax between Layers (e.g. V1 density-estimates Retina): retina → V1 through the synaptic weights; x → y; the model is a pdf over all spike times.
- within-level and feedforward
- the molecular sublevel is ‘implementation’; the social superlevel is ‘reward’
- predicts independent activity
- only models outside input

2. Infomax between Levels (e.g. synapses density-estimate spikes): all neural spikes → all synaptic readouts through synapses and dendrites; t → y; the model is a pdf over all synaptic ‘readouts’.
- between-level and includes all feedback
- the molecular net models/creates; the social net is the boundary condition
- permits arbitrary activity dependencies
- models input and intrinsic activity together

If we can make this pdf uniform, then we have a model constructed from all synaptic and dendritic causality. This SHIFT in looking at the problem alters the question so that, if it is answered, we have an unsupervised theory of ‘whole-brain learning’.
34
What about the mathematics? Is it tractable? Not yet. A new, in many ways satisfactory, objective is defined, but the gradient calculation seems very difficult. But this is still progress.
35
Density estimation when the input is affected. Make the model like the reality by minimising the Kullback-Leibler divergence
$$ D\!\left(p \,\middle\|\, \hat{p}_w\right) = \int p(\mathbf{x}) \log \frac{p(\mathbf{x})}{\hat{p}_w(\mathbf{x})}\, d\mathbf{x} $$
by gradient descent in a parameter w of the model. It is easier to live in a world where one can change the world to fit the model, as well as changing one's model to fit the world (a minimal sketch of the model-fitting direction follows below).
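A minimal sketch of the first direction, changing one's model to fit the world: with the data fixed, descending the KL divergence in a model parameter is the same as ascending the average log-likelihood. The unit-variance Gaussian model, its parameter mu, and the learning rate are illustrative assumptions:

```python
import numpy as np

# Fit a model density to samples from "the world" by gradient descent on
# the KL divergence D(p || p_mu); the entropy of p is constant, so only the
# average log-likelihood term depends on the parameter.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5000)   # samples from the world p

mu, lr = 0.0, 0.1                    # model p_mu = N(mu, 1)
for _ in range(100):
    grad = mu - data.mean()          # d/dmu of mean negative log-likelihood
    mu -= lr * grad                  # gradient descent on the divergence
print(f"fitted mu = {mu:.3f}; the world's mean is 2.0")
```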
36
Conclusion: This should be easier, but it isn’t yet. I’m open to suggestions… What have we learned from other complex self-organising systems? Is there a simpler model which captures the essence of the problem?