1
Synapses
The signal is carried chemically across the synaptic cleft
2
Post-synaptic conductances
Requires both pre- and post-synaptic depolarization: coincidence detection (Hebbian)
3
Synaptic plasticity
LTP, LTD
Spike-timing dependent plasticity (STDP)
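A standard way to capture spike-timing dependent plasticity is the pair-based exponential STDP window: pre-before-post pairings potentiate, post-before-pre pairings depress. The sketch below is a minimal illustration and not taken from the slides; the function name stdp_dw and the parameter values (A_plus, A_minus, tau_plus, tau_minus) are assumptions for the example.

```python
import numpy as np

def stdp_dw(dt, A_plus=0.005, A_minus=0.00525, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) gives LTP,
    post-before-pre (dt < 0) gives LTD. Parameter values are illustrative.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# Example: potentiation for dt = +10 ms, depression for dt = -10 ms
print(stdp_dw(10.0), stdp_dw(-10.0))
```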
4
Short-term synaptic plasticity
Depression, facilitation
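A common phenomenological description of short-term depression and facilitation is the Tsodyks-Markram model, in which each presynaptic spike consumes a fraction u of the available resources x (depression) and transiently increases u (facilitation). The sketch below is a minimal version under those assumptions; the parameters U, tau_rec, tau_facil and the spike train are illustrative, not from the slides, and the exact update order varies between conventions.

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.2, tau_rec=500.0, tau_facil=1000.0):
    """Relative synaptic efficacy u*x at each presynaptic spike (times in ms)."""
    x, u = 1.0, U          # x: available resources, u: utilisation fraction
    t_prev = None
    efficacies = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover toward 1
            u = U + (u - U) * np.exp(-dt / tau_facil)     # facilitation decays toward U
        efficacies.append(u * x)   # release is proportional to u*x
        x -= u * x                 # depression: the spike consumes resources
        u += U * (1.0 - u)         # facilitation: the spike increases utilisation
        t_prev = t
    return efficacies

# A regular 20 Hz train: the efficacy profile shows facilitation and/or depression
# depending on the chosen time constants.
print(tsodyks_markram(np.arange(0.0, 500.0, 50.0)))
```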
5
A simple model neuron: FitzHugh-Nagumo
I = 0 phase portrait
6
Phase portrait of the FitzHugh-Nagumo neuron model
Axes: recovery variable W versus membrane potential V
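The phase portrait can be reproduced numerically. The sketch below uses one common parameterisation of the FitzHugh-Nagumo equations, dV/dt = V - V^3/3 - W + I and dW/dt = eps*(V + a - b*W); the slides do not give parameter values, so a, b, eps and the initial condition are illustrative assumptions.

```python
import numpy as np

# One common FitzHugh-Nagumo parameterisation (values are illustrative):
#   dV/dt = V - V**3/3 - W + I
#   dW/dt = eps * (V + a - b*W)
a, b, eps, I = 0.7, 0.8, 0.08, 0.0

def fhn_step(V, W, dt=0.01):
    dV = V - V**3 / 3.0 - W + I
    dW = eps * (V + a - b * W)
    return V + dt * dV, W + dt * dW

# Nullclines (curves where one derivative vanishes), as functions of V:
V_grid = np.linspace(-2.5, 2.5, 11)
v_nullcline = V_grid - V_grid**3 / 3.0 + I    # dV/dt = 0  ->  W = V - V^3/3 + I
w_nullcline = (V_grid + a) / b                # dW/dt = 0  ->  W = (V + a)/b

# A short trajectory in the (V, W) plane; with I = 0 it settles at the fixed point
V, W = -1.0, -0.5
traj = []
for _ in range(5000):
    V, W = fhn_step(V, W)
    traj.append((V, W))
print("final state:", traj[-1])
```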
7
Reduced dynamical model for neurons
8
Population coding
Population code formulation
Methods for decoding:
  population vector
  Bayesian inference
  maximum a posteriori (MAP)
  maximum likelihood (ML)
Fisher information
9
Cricket cercal cells coding wind velocity
10
Population vector: RMS error in the estimate (Theunissen & Miller, 1991)
11
Population coding in M1
Cosine tuning: f_a(s) = r_0 + (r_max − r_0) cos(s − s_a)
Population vector: v_pop = Σ_a ((r_a − r_0)/r_max) c_a, where c_a is the unit vector in the preferred direction of cell a
For sufficiently large N, v_pop is parallel to the direction of arm movement
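The claim that the population vector aligns with the movement direction for large N can be checked numerically. This is a minimal sketch assuming cosine tuning with uniformly spread preferred directions; the number of cells and the rate values are illustrative assumptions.

```python
import numpy as np

N = 100                                                    # number of cosine-tuned cells
theta_pref = np.linspace(0, 2*np.pi, N, endpoint=False)    # preferred directions
r0, rmax = 20.0, 60.0                                      # background and peak rates (Hz)

def rates(theta):
    """Cosine tuning: f_a(theta) = r0 + (rmax - r0) * cos(theta - theta_a)."""
    return r0 + (rmax - r0) * np.cos(theta - theta_pref)

def population_vector(r):
    """Sum of preferred-direction unit vectors weighted by baseline-subtracted rates."""
    w = (r - r0) / rmax
    return np.array([np.sum(w * np.cos(theta_pref)),
                     np.sum(w * np.sin(theta_pref))])

theta_arm = 1.1                                            # true movement direction (rad)
v = population_vector(rates(theta_arm))
print("decoded direction:", np.arctan2(v[1], v[0]), "true:", theta_arm)
```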
12
The population vector is neither general nor optimal.
“Optimal”: Bayesian inference and MAP
13
Bayesian inference
By Bayes' law, p[s|r] = p[r|s] p[s] / p[r].
Introduce a cost function L(s, s_Bayes) and minimise the mean cost.
For least squares, L(s, s_Bayes) = (s − s_Bayes)²; the solution is the conditional mean, s_Bayes = ∫ ds p[s|r] s.
14
MAP and ML
MAP: s* which maximizes p[s|r]
ML: s* which maximizes p[r|s]
The difference is the role of the prior: the two differ by a factor p[s]/p[r]
For the cercal data (figure):
15
Decoding an arbitrary continuous stimulus
E.g. Gaussian tuning curves
16
Need to know full P[r|s]
Assume Poisson spike counts: P[r_a|s] = (f_a(s)T)^{r_a} e^{−f_a(s)T} / r_a!
Assume independent neurons: P[r|s] = Π_a P[r_a|s]
Figure: population response of 11 cells with Gaussian tuning curves
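To make the independence and Poisson assumptions concrete, the sketch below draws a single-trial response of 11 cells with Gaussian tuning curves. The peak rate, tuning width and counting window are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

s_pref = np.linspace(-5.0, 5.0, 11)      # preferred stimuli of the 11 cells
sigma, rmax, T = 1.0, 50.0, 0.1          # tuning width, peak rate (Hz), window (s)

def f(s):
    """Gaussian tuning curves f_a(s)."""
    return rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)

def sample_response(s):
    """Independent Poisson spike counts, one per cell: r_a ~ Poisson(f_a(s) * T)."""
    return rng.poisson(f(s) * T)

print(sample_response(0.0))              # single-trial population response to s = 0
```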
17
Apply ML: maximise P[r|s] with respect to s
Set the derivative of ln P[r|s] to zero and use Σ_a f_a(s) ≈ constant:
Σ_a r_a f_a'(s) / f_a(s) = 0
From the Gaussianity of the tuning curves, if all σ_a are the same,
s_ML = Σ_a r_a s_a / Σ_a r_a
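Under these assumptions the ML estimate is just the response-weighted mean of the preferred stimuli. A minimal self-contained check, reusing the same illustrative tuning parameters as above:

```python
import numpy as np

rng = np.random.default_rng(1)
s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 50.0, 0.1

f = lambda s: rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)
r = rng.poisson(f(0.5) * T)              # single-trial response to true s = 0.5

# ML estimate for identical Gaussian tuning widths: s_ML = sum(r_a * s_a) / sum(r_a)
s_ml = np.sum(r * s_pref) / np.sum(r)
print("s_ML =", s_ml)
```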
18
Apply MAP: maximise p[s|r] with respect to s
Set the derivative of ln p[s|r] to zero and use Σ_a f_a(s) ≈ constant:
Σ_a r_a f_a'(s) / f_a(s) + d ln p[s]/ds = 0
From the Gaussianity of the tuning curves, with a Gaussian prior of mean s_prior and variance σ_prior²,
s_MAP = (T σ_prior² Σ_a r_a s_a + σ_r² s_prior) / (T σ_prior² Σ_a r_a + σ_r²)
19
Given this data (figure): MAP estimates with a constant prior and with a Gaussian prior of mean −2 and variance 1
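The effect of the prior is easy to check on a grid, without committing to the closed-form expression. A minimal sketch with the same illustrative tuning parameters, comparing a flat prior (where MAP coincides with ML) against a Gaussian prior with mean −2 and variance 1 as on the slide:

```python
import numpy as np

rng = np.random.default_rng(2)
s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 50.0, 0.1

f = lambda s: rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)
r = rng.poisson(f(0.0) * T)                       # response to true s = 0

s_grid = np.linspace(-6.0, 6.0, 2001)
# log P[r|s] up to s-independent terms: sum_a ( r_a log f_a(s) - f_a(s) T )
rates = np.array([f(s) for s in s_grid])          # shape (len(s_grid), 11)
log_lik = np.sum(r * np.log(rates + 1e-12) - rates * T, axis=1)

log_prior = -0.5 * (s_grid - (-2.0)) ** 2 / 1.0   # Gaussian prior: mean -2, variance 1

s_ml  = s_grid[np.argmax(log_lik)]                # flat prior: MAP coincides with ML
s_map = s_grid[np.argmax(log_lik + log_prior)]    # Gaussian prior pulls the estimate toward -2
print("ML:", s_ml, "MAP:", s_map)
```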
20
How good is our estimate?
For stimulus s, we have an estimate s_est.
Bias: b_est(s) = ⟨s_est⟩ − s
Variance: σ_est²(s) = ⟨(s_est − ⟨s_est⟩)²⟩
Mean square error: ⟨(s_est − s)²⟩ = σ_est²(s) + b_est(s)²
Cramér-Rao bound: σ_est²(s) ≥ (1 + b_est'(s))² / I_F(s), where I_F(s) is the Fisher information
21
Fisher information: I_F(s) = ⟨−∂² ln p[r|s] / ∂s²⟩
Alternatively: I_F(s) = ⟨(∂ ln p[r|s] / ∂s)²⟩
For the Gaussian tuning curves with Poisson statistics: I_F(s) = T Σ_a f_a'(s)² / f_a(s)
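The Poisson-population expression I_F(s) = T Σ_a f_a'(s)² / f_a(s) can be evaluated directly. A short sketch with the same illustrative Gaussian tuning curves; the finite-difference step is an assumption of the example:

```python
import numpy as np

s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 50.0, 0.1

def f(s):
    return rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)

def fisher_info(s, h=1e-4):
    """I_F(s) = T * sum_a f_a'(s)**2 / f_a(s) for independent Poisson neurons."""
    fprime = (f(s + h) - f(s - h)) / (2 * h)       # numerical derivative of each tuning curve
    return T * np.sum(fprime ** 2 / (f(s) + 1e-12))

print(fisher_info(0.0), fisher_info(2.5))          # roughly constant inside the covered range
```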
22
Fisher information for Gaussian tuning curves
Quantifies local stimulus discriminability
23
Do narrow or broad tuning curves produce better encodings?
Approximate the sum over neurons by an integral over preferred stimuli; for D-dimensional stimuli the total Fisher information then scales as σ_r^{D−2}.
Thus, narrow tuning curves are better in one dimension, but not in higher dimensions: for D > 2, broad tuning curves give more Fisher information.
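The σ_r^{D−2} scaling can be checked numerically by placing radially symmetric Gaussian tuning curves on a grid in D dimensions and evaluating the Fisher information for one stimulus component. The grid spacing, extent and rate values below are illustrative assumptions.

```python
import numpy as np
from itertools import product

rmax, T = 50.0, 0.1            # peak rate (Hz) and counting window (s); illustrative

def fisher_info_origin(sigma, D, spacing=0.5, extent=5.0):
    """I_F for stimulus component s_1 at s = 0, for a D-dimensional grid of
    radially symmetric Gaussian tuning curves (independent Poisson neurons)."""
    axis = np.arange(-extent, extent + 1e-9, spacing)
    I = 0.0
    for s_a in product(axis, repeat=D):
        s_a = np.asarray(s_a)
        f_a = rmax * np.exp(-0.5 * np.dot(s_a, s_a) / sigma**2)   # f_a(0)
        dfa = f_a * s_a[0] / sigma**2                             # d f_a / d s_1 at s = 0
        I += T * dfa**2 / (f_a + 1e-12)
    return I

for D in (1, 2, 3):
    narrow, broad = fisher_info_origin(0.5, D), fisher_info_origin(1.0, D)
    print(f"D={D}: I_F(sigma=0.5)={narrow:.1f}  I_F(sigma=1.0)={broad:.1f}")
# Expected pattern (ratio ~ sigma**(D-2)): narrow tuning wins for D = 1,
# the two are comparable for D = 2, and broad tuning wins for D = 3.
```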
24
Fisher information and discrimination
Recall d' = mean difference / standard deviation.
We can also decode and then discriminate using the decoded values. When discriminating s from s + Δs:
the difference in the estimates is Δs (unbiased), and the variance of each estimate is 1/I_F(s), so d' = Δs √I_F(s).
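A tiny worked check of d' = Δs √I_F(s), with illustrative numbers for the Fisher information and the stimulus difference:

```python
import numpy as np

I_F = 50.0          # Fisher information at the reference stimulus (illustrative)
ds = 0.3            # stimulus difference to discriminate

# Unbiased estimates differ by ds on average, with variance 1/I_F (Cramer-Rao),
# so d' = mean difference / standard deviation = ds * sqrt(I_F).
d_prime = ds * np.sqrt(I_F)
print("d' =", d_prime)    # about 2.12 for these numbers
```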
25
Comparison of Fisher information and human discrimination thresholds for orientation tuning
Figure: minimum standard deviation of the estimate of orientation angle from the Cramér-Rao bound, compared with data from discrimination thresholds for the orientation of objects as a function of size and eccentricity