Bayesian Brain: Probabilistic Approaches to Neural Coding
1.1 A Probability Primer
Jinsan Yang
Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University
I. A Probability Primer
1.1 What is Probability
1.2 Bayes' Theorem
1.3 Measuring Information
1.4 Making an Inference
1.5 Learning from Data
1.6 Graphical Models and Other Bayesian Algorithms
1.1 What is Probability
Two views of interpreting probability
- Frequentist view
- Bayesian view
Probability distributions and densities
- Random variables and random vectors defined on a sample space
- Probability mass function (pmf), probability density function (pdf)
- Cumulative distribution function (cdf), also called the distribution function
Expectations and statistics
- Mean, variance, covariance, correlation
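To make these definitions concrete, here is a minimal sketch (assuming NumPy and SciPy; the numbers are illustrative) that evaluates a Gaussian pdf/cdf and estimates mean, variance, covariance, and correlation from samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A continuous random variable: X ~ N(0, 1)
x = 0.5
print("pdf at 0.5:", stats.norm.pdf(x))   # density value, not a probability
print("cdf at 0.5:", stats.norm.cdf(x))   # P(X <= 0.5)

# Sample-based statistics for two correlated variables
samples_x = rng.normal(0.0, 1.0, size=10_000)
samples_y = 2.0 * samples_x + rng.normal(0.0, 1.0, size=10_000)

print("mean:", samples_x.mean(), "variance:", samples_x.var())
print("covariance matrix:\n", np.cov(samples_x, samples_y))
print("correlation matrix:\n", np.corrcoef(samples_x, samples_y))
```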
1.1 What is Probability (continued)
Joint, conditional, and marginal probability
Independence and correlation
- Independent rv's: P(X, Y) = P(X) P(Y)
- Uncorrelated rv's: E[XY] = E[X] E[Y], equivalently Cov(X, Y) = 0
- If two variables are independent they are uncorrelated, but the converse does not always hold
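A quick numerical illustration of the last point, sketched with NumPy (assumed): Y = X^2 is completely determined by X, yet its sample correlation with X is close to zero when X is symmetric around 0.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=100_000)
y = x ** 2                      # fully dependent on x

# Sample correlation is near 0: uncorrelated, but clearly not independent,
# because knowing x pins down y exactly.
print("corr(X, Y) ~", np.corrcoef(x, y)[0, 1])
```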
1.2 Bayes' Theorem
How to update the belief in a hypothesis X according to how well the acquired data Y were predicted by that hypothesis:
P(X | Y) = P(Y | X) P(X) / P(Y)
- Prior P(X) and posterior P(X | Y)
- Generative model (likelihood) P(Y | X)
- Marginal data likelihood P(Y)
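A minimal discrete sketch of this update rule (the probabilities are made up for illustration): a neuron's spike Y is used to update belief about whether a stimulus X is present.

```python
# Posterior ∝ likelihood × prior, normalized by the marginal data likelihood.
prior = {"stimulus": 0.2, "no_stimulus": 0.8}        # P(X)
likelihood = {"stimulus": 0.9, "no_stimulus": 0.3}   # P(Y = spike | X)

# Marginal data likelihood P(Y = spike) = sum over X of P(Y | X) P(X)
evidence = sum(likelihood[h] * prior[h] for h in prior)

posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}
print(posterior)   # {'stimulus': ~0.43, 'no_stimulus': ~0.57}
```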
1.3 Measuring Information
Self-information: the smaller P(x), the more informative the observation x
Entropy
- Average information: H(X) = -Σ_x P(x) log P(x)
- A measure of randomness
Mutual information
- The decrease in uncertainty about the world state X gained by observing Y: I(X; Y) = H(X) - H(X | Y)
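A minimal sketch (NumPy assumed, the joint distribution is made up) computing entropy and mutual information for a discrete joint distribution P(X, Y):

```python
import numpy as np

# Joint distribution P(X, Y) over two binary variables (rows: x, cols: y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))      # entropy in bits

h_x = entropy(p_x)                      # H(X)
h_xy = entropy(p_xy.ravel())            # H(X, Y)
h_x_given_y = h_xy - entropy(p_y)       # H(X | Y) = H(X, Y) - H(Y)
mi = h_x - h_x_given_y                  # I(X; Y) = H(X) - H(X | Y)
print("H(X) =", h_x, " I(X;Y) =", mi)
```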
1.3 Measuring Information (continued)
Kullback-Leibler divergence
- Measures the difference between two probability distributions in terms of information: D_KL(P || Q) = Σ_x P(x) log [P(x) / Q(x)]
- The difference in information between P(X) and Q(X) when X follows the distribution P(X)
- Does not satisfy the symmetry condition: in general D_KL(P || Q) ≠ D_KL(Q || P)
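A short sketch (NumPy assumed, distributions made up) that computes the divergence in both directions and shows the asymmetry:

```python
import numpy as np

def kl(p, q):
    """D_KL(P || Q) = sum_x P(x) log[P(x)/Q(x)], in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]
print(kl(p, q), kl(q, p))   # the two directions generally differ
```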
1.4 Making an Inference
Maximum likelihood (ML) estimate
- Find the world state X that maximizes the likelihood P(Y | X) of the sensory input Y received by the brain
- A point estimate may not be enough
Maximum a posteriori (MAP) estimate
- Combine sensory information with the prior probability of world states
Bayesian estimate
- Keep the full posterior distribution instead of a point estimate
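A grid-based sketch contrasting the three estimates for a single noisy Gaussian observation y of a world state x (all numbers are assumptions for illustration):

```python
import numpy as np

xs = np.linspace(-5, 5, 1001)              # candidate world states X
y = 1.5                                    # noisy sensory observation

likelihood = np.exp(-0.5 * (y - xs) ** 2)          # P(y | x), noise sd = 1
prior = np.exp(-0.5 * (xs / 0.5) ** 2)             # P(x), prior centered at 0, sd = 0.5
posterior = likelihood * prior
posterior /= posterior.sum()                       # normalize over the grid

x_ml = xs[np.argmax(likelihood)]           # maximum likelihood estimate
x_map = xs[np.argmax(posterior)]           # maximum a posteriori estimate
x_mean = np.sum(xs * posterior)            # posterior mean, one summary of the full posterior
print(x_ml, x_map, x_mean)                 # ML ~ 1.5; MAP and mean pulled toward the prior at 0
```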
1.4 Making an Inference (continued)
Bayes filtering
- Use the posterior probability from the current step as the prior probability for the next step
- Applies when the state evolves according to a state transition probability
- Examples: Kalman filter, particle filter
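A minimal one-dimensional Kalman filter sketch (all noise parameters are assumptions), showing the posterior of one step being reused as the prior for the next:

```python
import numpy as np

rng = np.random.default_rng(2)

a, q, r = 1.0, 0.1, 0.5        # state transition, process noise var, observation noise var
mu, var = 0.0, 1.0             # prior belief over the hidden state

true_x = 0.0
for t in range(20):
    # World: the state drifts and the brain receives a noisy observation
    true_x = a * true_x + rng.normal(0, np.sqrt(q))
    y = true_x + rng.normal(0, np.sqrt(r))

    # Predict: push the previous posterior through the state transition
    mu_pred, var_pred = a * mu, a * a * var + q

    # Update: combine the prediction (the new prior) with the observation
    k = var_pred / (var_pred + r)              # Kalman gain
    mu = mu_pred + k * (y - mu_pred)
    var = (1 - k) * var_pred

print("final estimate:", mu, "true state:", true_x)
```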
1.5 Learning from Data
Learn the sensory transformation or the state transition from experience by parameter estimation
Fisher information: a measure of the steepness (curvature) of the likelihood, i.e., how good the estimate can be
Bayesian learning: a principled way of adding regularization terms
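A small sketch (NumPy assumed, data simulated) illustrating Fisher information as the curvature of the log-likelihood: for n i.i.d. Gaussian observations with known variance sigma^2, the Fisher information of the mean is n / sigma^2, and a steeper likelihood means a more precise estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 2.0, 50
data = rng.normal(1.0, sigma, size=n)

def log_lik(mu):
    # Gaussian log-likelihood up to a constant that does not depend on mu
    return -0.5 * np.sum((data - mu) ** 2) / sigma ** 2

# Numerical curvature (negative second derivative) at the ML estimate
mu_hat, h = data.mean(), 1e-3
curvature = -(log_lik(mu_hat + h) - 2 * log_lik(mu_hat) + log_lik(mu_hat - h)) / h ** 2

print("numerical Fisher information:", curvature)   # ~ n / sigma^2 = 12.5
print("theoretical:", n / sigma ** 2)
```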
1.5 Learning from Data (continued)
Bayesian learning: a principled way of adding regularization terms
- With a Gaussian likelihood and a Gaussian prior on the weights w, maximizing the posterior with respect to w is the same as minimizing the mean squared error plus a penalty term on w
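A sketch of that equivalence (Gaussian noise and a Gaussian prior on w assumed; the data are simulated): the MAP weights coincide with the ridge-regression solution, where the prior supplies the penalty strength lambda = sigma^2 / tau^2.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
sigma, tau = 0.5, 1.0                       # observation noise std, prior std on each weight
y = X @ w_true + rng.normal(0, sigma, size=100)

# MAP under y ~ N(Xw, sigma^2 I) and w ~ N(0, tau^2 I)  <=>  ridge regression
lam = sigma ** 2 / tau ** 2
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w_map)                                # close to w_true, shrunk slightly toward 0
```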
1.5 Learning from Data (continued)
- The marginal likelihood (called the evidence) is a good criterion for checking whether the prior is consistent with the observed data

1.6 Graphical Models and Other Bayesian Algorithms
- Represent dependency relations among variables with graphical models
- Bayesian network: a directed acyclic graph (DAG)
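A toy sketch of a Bayesian network as a DAG, with made-up conditional probability tables for Rain -> WetGrass: the joint distribution factorizes along the graph as P(Rain) P(WetGrass | Rain), and marginalizing or conditioning on it answers queries.

```python
# Tiny two-node Bayesian network: Rain -> WetGrass (all numbers illustrative).
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

# Joint distribution factorizes along the DAG: P(R, W) = P(R) P(W | R)
joint = {(r, w): p_rain[r] * p_wet_given_rain[r][w]
         for r in (True, False) for w in (True, False)}

# Marginal P(WetGrass = True) and posterior P(Rain = True | WetGrass = True)
p_wet = joint[(True, True)] + joint[(False, True)]
p_rain_given_wet = joint[(True, True)] / p_wet
print(p_wet, p_rain_given_wet)   # 0.34 and ~0.53
```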