Presentation on theme: "Neural Codes. Neuronal codes Spiking models: Hodgkin Huxley Model (brief repetition) Reduction of the HH-Model to two dimensions (general) FitzHugh-Nagumo."— Presentation transcript:

1 Neural Codes

2 Neuronal codes. Spiking models: Hodgkin-Huxley Model (brief repetition), reduction of the HH model to two dimensions (general), FitzHugh-Nagumo Model, Integrate-and-Fire Model

3 Neuronal codes. Spiking models: Hodgkin-Huxley Model (brief repetition), reduction of the HH model to two dimensions (general), FitzHugh-Nagumo Model, Integrate-and-Fire Model

4 Neuronal Codes – Action potentials as the elementary units: voltage-clamp recording from a brain cell of a fly

5 Neuronal Codes – Action potentials as the elementary units: voltage-clamp recording from a brain cell of a fly, after band-pass filtering

6 Neuronal Codes – Action potentials as the elementary units: voltage-clamp recording from a brain cell of a fly, after band-pass filtering; the spike pulses are generated electronically by a threshold discriminator circuit

7 Neuronal Codes – Probabilistic response and Bayes' rule: a stimulus s(t) evokes spike trains {t_i}; the response is described by the conditional probability P[{t_i} | s(t)]

8 Neuronal Codes – Probabilistic response and Bayes' rule: the conditional probability P[{t_i} | s(t)] is defined over ensembles of signals. Natural situation: the joint probability P[{t_i}, s(t)]. Experimental situation: we choose s(t) from a prior distribution P[s(t)], so the joint probability factorizes as P[{t_i}, s(t)] = P[{t_i} | s(t)] P[s(t)]

9 Conditional probability is the probability of some event A, given the occurrence of some other event B. It is written P(A|B) and read "the probability of A, given B". Joint probability is the probability of two events in conjunction, that is, the probability of both events occurring together; the joint probability of A and B is written P(A,B). Consider the simple scenario of rolling two fair six-sided dice, labelled die 1 and die 2. Define the following three events: A: die 1 lands on 3. B: die 2 lands on 1. C: the dice sum to 8. The prior probability of each event describes how likely the outcome is before the dice are rolled, without any knowledge of the roll's outcome. For example, die 1 is equally likely to fall on each of its 6 sides, so P(A) = 1/6. Similarly, P(B) = 1/6. Likewise, of the 6 × 6 = 36 possible ways that two dice can land, only 5 result in a sum of 8 (namely 2 and 6, 3 and 5, 4 and 4, 5 and 3, and 6 and 2), so P(C) = 5/36.
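The prior probabilities on this slide can be checked by simply enumerating all 36 equally likely outcomes; the sketch below (not part of the slides) does exactly that with exact fractions.

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Prior probability of an event, given as a predicate on (die1, die2)."""
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

A = lambda o: o[0] == 3    # die 1 lands on 3
B = lambda o: o[1] == 1    # die 2 lands on 1
C = lambda o: sum(o) == 8  # the dice sum to 8

print(prob(A))  # 1/6
print(prob(B))  # 1/6
print(prob(C))  # 5/36
```

Using `Fraction` keeps the results exact, so they match the slide's values without floating-point rounding.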

10 A: die 1 lands on 3. B: die 2 lands on 1. C: the dice sum to 8. Some of these events can occur at the same time; for example, events A and C can happen together, in the case where die 1 lands on 3 and die 2 lands on 5. This is the only one of the 36 outcomes where both A and C occur, so its probability is 1/36. The probability of both A and C occurring is called the joint probability of A and C and is written P(A,C) = 1/36. On the other hand, if die 2 lands on 1, the dice cannot sum to 8, so P(B,C) = 0. Now suppose we roll the dice and cover up die 2, so we can only see die 1, and observe that die 1 landed on 3. Given this partial information, the probability that the dice sum to 8 is no longer 5/36; instead it is 1/6, since die 2 must land on 5 to achieve this result. This is called the conditional probability, because it is the probability of C under the condition that A is observed, and is written P(C|A), which is read "the probability of C given A". On the other hand, if we roll the dice, cover up die 2, and observe die 1, this has no impact on the probability of event B, which only depends on die 2. We say events A and B are statistically independent, or just independent, and in this case P(B|A) = P(B).
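The joint and conditional probabilities from this slide can be computed the same way, by counting outcomes; this self-contained sketch (again not part of the slides) implements P(e1,e2) and P(e1|e2) = P(e1,e2)/P(e2) directly from those definitions.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 3    # die 1 lands on 3
B = lambda o: o[1] == 1    # die 2 lands on 1
C = lambda o: sum(o) == 8  # the dice sum to 8

def joint(e1, e2):
    """P(e1, e2): probability that both events occur."""
    return prob(lambda o: e1(o) and e2(o))

def cond(e1, e2):
    """P(e1 | e2) = P(e1, e2) / P(e2)."""
    return joint(e1, e2) / prob(e2)

print(joint(A, C))  # 1/36: only (3, 5) satisfies both
print(joint(B, C))  # 0: if die 2 shows 1, the sum cannot be 8
print(cond(C, A))   # 1/6: die 2 must land on 5
print(cond(B, A))   # 1/6, equal to P(B): A and B are independent
```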

11 Neuronal Codes – Probabilistic response and Bayes' rule. But: the brain "sees" only {t_i} and must "say" something about s(t). But: there is no unique stimulus in correspondence with a particular spike train; thus, some stimuli are more likely than others given a particular spike train. Experimental situation: the response-conditional ensemble P[s(t) | {t_i}]

12 Neuronal Codes – Probabilistic response and Bayes' rule. Bayes' rule: P[s(t) | {t_i}] = P[{t_i} | s(t)] P[s(t)] / P[{t_i}]. What we see: P[{t_i} | s(t)]; what our brain "sees": P[s(t) | {t_i}]
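Bayes' rule is easiest to see with a small discrete example. The numbers below are purely illustrative (they are not from the slides): two stimuli with equal prior probability, and a hypothetical encoding model P[r|s] over spike counts r = 0..3.

```python
import numpy as np

# Hypothetical example: two stimuli s with prior P[s], and an assumed
# encoding model P[r|s] over spike counts r = 0, 1, 2, 3.
prior = np.array([0.5, 0.5])                        # P[s]
likelihood = np.array([[0.6, 0.3, 0.08, 0.02],      # P[r | s=0]
                       [0.1, 0.2, 0.40, 0.30]])     # P[r | s=1]

evidence = prior @ likelihood                       # P[r] = sum_s P[r|s] P[s]
posterior = likelihood * prior[:, None] / evidence  # P[s|r], Bayes' rule

# Each column of `posterior` is a distribution over stimuli given a count r.
print(posterior[:, 3])  # observing r = 3 spikes strongly favours s = 1
```

This is the switch between the two points of view the slides describe: the rows of `likelihood` translate stimuli into spikes, while the columns of `posterior` translate spikes back into stimuli.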

13 Neuronal Codes – Probabilistic response and Bayes' rule. Motion-sensitive neuron H1 in the fly's brain: the stimulus is the average angular velocity of motion across the visual field in a 200 ms window (determined by the experimenter); the response is the spike count (a property of the neuron). The two show correlation: they are not (!) independent

14 Neuronal Codes – Probabilistic response and Bayes' rule. Spike trains and stimuli: determine the probability of a stimulus from a given spike train

15 Neuronal Codes – Probabilistic response and Bayes' rule: determine the probability of a stimulus from a given spike train

16 Neuronal Codes – Probabilistic response and Bayes' rule: determine the probability of a spike train from a given stimulus

17 Neuronal Codes – Probabilistic response and Bayes' rule: determine the probability of a spike train from a given stimulus

18 Neuronal Codes – Probabilistic response and Bayes' rule: how do we measure this time-dependent firing rate?
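A standard way to measure a time-dependent firing rate is the peri-stimulus time histogram (PSTH): repeat the stimulus over many trials, bin the spikes in time, and average across trials. The slides do not specify a procedure, so this is a generic sketch with an assumed rate profile and a Bernoulli-per-bin approximation of Poisson spiking.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.001                     # 1 ms bins
t = np.arange(0.0, 1.0, dt)    # one second of stimulus time
# Assumed "true" time-dependent rate in Hz (purely illustrative).
rate = 20.0 + 15.0 * np.sin(2 * np.pi * 2 * t)

# Simulate many repeated trials: in each bin, a spike occurs with
# probability rate * dt (valid when rate * dt << 1).
n_trials = 500
spikes = rng.random((n_trials, t.size)) < rate * dt

# PSTH: average spike count per bin across trials, divided by the bin
# width, gives an estimate of the firing rate in Hz.
psth = spikes.mean(axis=0) / dt

print(psth.mean())  # close to the true mean rate of 20 Hz
```

With more trials (or wider bins, or a smoothing kernel) the estimate gets less noisy; the trade-off between temporal resolution and variance is the practical core of the question on this slide.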

19 Neuronal Codes – Probabilistic response and Bayes’ rule Nice probabilistic stuff, but SO, WHAT?

20 Neuronal Codes – Probabilistic response and Bayes' rule. SO, WHAT? We can characterize the neuronal code in two ways: translating stimuli into spikes (the traditional approach) and translating spikes into stimuli (how the brain "sees" it). Bayes' rule connects them: if we can give a complete listing of either set of rules, then we can solve any translation problem; thus, we can switch between these two points of view

21 Neuronal Codes – Probabilistic response and Bayes’ rule We can switch between these two points of view. And why is that important? These two points of view may differ in their complexity!

22 Neuronal Codes – Probabilistic response and Bayes’ rule

23 Average number of spikes as a function of stimulus amplitude; average stimulus as a function of spike count

24 Neuronal Codes – Probabilistic response and Bayes' rule. The average number of spikes as a function of stimulus amplitude shows a non-linear relation; the average stimulus as a function of spike count shows an almost perfectly linear relation. That's interesting, isn't it?

25 Neuronal Codes – Probabilistic response and Bayes' rule. For a deeper discussion read, for instance, this nice (but difficult) book: Rieke, F. et al. (1996). Spikes: Exploring the Neural Code. MIT Press.

