Bayesian inference
J. Daunizeau, Institute of Empirical Research in Economics, Zurich, Switzerland; Brain and Spine Institute, Paris, France
Overview of the talk
1 Probabilistic modelling and representation of uncertainty
  1.1 Bayesian paradigm
  1.2 Hierarchical models
  1.3 Frequentist versus Bayesian inference
2 Numerical Bayesian inference methods
  2.1 Sampling methods
  2.2 Variational methods (ReML, EM, VB)
3 SPM applications
  3.1 aMRI segmentation
  3.2 Decoding of brain images
  3.3 Model-based fMRI analysis (with spatial priors)
  3.4 Dynamic causal modelling
Bayesian paradigm: probability theory basics

Desiderata for a degree of plausibility: it should be represented using real numbers (D1), conform with intuition (D2), and be consistent (D3). These desiderata lead to the rules of probability theory:
- normalization: \(\sum_a p(a) = 1\)
- marginalization: \(p(a) = \sum_b p(a, b)\)
- conditioning: \(p(a, b) = p(a \mid b)\, p(b) = p(b \mid a)\, p(a)\) (Bayes rule)
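To make these rules concrete, here is a minimal Python sketch (not from the slides; the 2x3 joint table is made up) showing normalization, marginalization and conditioning on a discrete joint distribution:

```python
import numpy as np

p_ab = np.array([[0.10, 0.20, 0.05],
                 [0.30, 0.25, 0.10]])   # joint p(a, b); rows index a

assert np.isclose(p_ab.sum(), 1.0)      # normalization: probabilities sum to one

p_a = p_ab.sum(axis=1)                  # marginalization: p(a) = sum_b p(a, b)
p_b = p_ab.sum(axis=0)                  # marginalization: p(b) = sum_a p(a, b)

p_a_given_b = p_ab / p_b                # conditioning: p(a|b) = p(a, b) / p(b)

# Bayes rule recovers p(b|a) from p(a|b) and the marginals:
p_b_given_a = (p_a_given_b * p_b) / p_a[:, None]
assert np.allclose(p_b_given_a, p_ab / p_a[:, None])
```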
Bayesian paradigm: deriving the likelihood function

- Model of data with unknown parameters, e.g. the GLM: \(f(\theta) = X\theta\)
- But the data are noisy: \(y = f(\theta) + \varepsilon\)
- Assume the noise/residuals are small, e.g. Gaussian: \(\varepsilon \sim N(0, \sigma^2 I)\)
- This gives the distribution of the data, given fixed parameters (the likelihood): \(p(y \mid \theta) = N(f(\theta), \sigma^2 I)\)
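As a hedged illustration (the design matrix, coefficients and noise level below are all made up), this Python sketch evaluates the Gaussian log-likelihood of a GLM:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # hypothetical design matrix
theta_true = np.array([1.0, -0.5])
sigma2 = 0.25                                              # assumed noise variance
y = X @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

def log_likelihood(theta):
    """log p(y | theta) under i.i.d. Gaussian noise around X @ theta."""
    return multivariate_normal.logpdf(y, mean=X @ theta, cov=sigma2 * np.eye(n))

print(log_likelihood(theta_true))   # the likelihood peaks near theta_true
```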
Bayesian paradigm: likelihood, priors and the model evidence

The generative model m combines the likelihood \(p(y \mid \theta, m)\) with the prior \(p(\theta \mid m)\). Bayes rule then yields the posterior:
\(p(\theta \mid y, m) = \frac{p(y \mid \theta, m)\, p(\theta \mid m)}{p(y \mid m)}\)
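A minimal conjugate sketch, assuming a Gaussian prior on the GLM coefficients (all settings are hypothetical): the prior and likelihood combine via Bayes rule into a closed-form Gaussian posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])     # hypothetical design
y = X @ np.array([1.0, -0.5]) + 0.5 * rng.standard_normal(n)
sigma2, tau2 = 0.25, 1.0        # noise variance, prior variance (assumed known)

# posterior p(theta | y, m) is Gaussian with:
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2)  # covariance
mu = Sigma @ X.T @ y / sigma2                                        # mean
```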
Bayesian paradigm: forward and inverse problems

- forward problem: the likelihood \(p(y \mid \theta, m)\)
- inverse problem: the posterior distribution \(p(\theta \mid y, m)\)
Bayesian paradigm: model comparison

Principle of parsimony: "plurality should not be assumed without necessity". Occam's razor is built into the model evidence
\(p(y \mid m) = \int p(y \mid \theta, m)\, p(\theta \mid m)\, d\theta\):
a complex model must spread its predictive probability mass over the space of all data sets, so it assigns lower evidence to any one typical data set than a simpler model that explains it equally well.
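The following sketch (all numbers made up) illustrates the razor numerically: for data generated by a simple linear law, the marginal likelihood of a flexible 5th-order polynomial GLM is typically lower than that of the linear GLM, because the flexible model spreads its evidence over many more data sets.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n = 40
x = np.linspace(-1, 1, n)
y = 1.0 + 0.5 * x + 0.1 * rng.standard_normal(n)   # data from a linear law

def log_evidence(X, sigma2=0.01, tau2=1.0):
    """log p(y|m) for a GLM with Gaussian prior N(0, tau2*I) on the weights."""
    cov_y = sigma2 * np.eye(n) + tau2 * X @ X.T     # y is Gaussian with theta integrated out
    return multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov_y)

X1 = np.column_stack([np.ones(n), x])               # m1: linear model
X2 = np.column_stack([x**k for k in range(6)])      # m2: 5th-order polynomial
print(log_evidence(X1), log_evidence(X2))           # m1 usually wins on these data
```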
Hierarchical models: principle

The hierarchy of model levels mirrors the assumed causality of the process generating the data: each level provides priors on the parameters of the level below.
Hierarchical models: directed acyclic graphs (DAGs)
Hierarchical models: univariate linear hierarchical model

(Figure: prior and posterior densities at each level of the hierarchy.)
Frequentist versus Bayesian inference: a (quick) note on hypothesis testing

classical SPM:
- define the null, e.g. H0: \(\theta = 0\)
- estimate the parameters (obtain a test statistic)
- apply the decision rule: if the p-value of the statistic under H0 falls below the test size, then reject H0

Bayesian PPM:
- define the null, e.g. H0: \(\theta > 0\)
- invert the model (obtain the posterior pdf)
- apply the decision rule: if \(P(H_0 \mid y)\) exceeds a set threshold, then accept H0
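A toy sketch of the two decision rules side by side on a single effect (conjugate Gaussian model; all numbers, thresholds and variable names are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = 0.3 + 0.5 * rng.standard_normal(30)      # noisy measurements of one effect

# classical: test H0: theta = 0 with a one-sample t-test
t, p = stats.ttest_1samp(y, popmean=0.0)
reject_H0 = p < 0.05

# Bayesian PPM: prior theta ~ N(0, tau2); the posterior is Gaussian, and we
# report confidence in the effect when its posterior mass above zero is high
tau2, sigma2, n = 1.0, 0.25, len(y)
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
post_mean = post_var * y.sum() / sigma2
p_effect = 1.0 - stats.norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
confident = p_effect > 0.95                  # e.g., P(theta > 0 | y) > 95%
```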
Frequentist versus Bayesian inference: what about bilateral tests?

- define the null and the alternative hypothesis in terms of priors, e.g. a point mass at \(\theta = 0\) under H0 versus a continuous prior under H1
- apply the decision rule: if the Bayes factor sufficiently favours H1, then reject H0
- for nested models with i.i.d. priors, the Bayes factor reduces to a Savage-Dickey ratio: \(BF_{01} = \frac{p(\theta = 0 \mid y, m_1)}{p(\theta = 0 \mid m_1)}\)
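A minimal Savage-Dickey sketch for a Gaussian point null (the posterior moments below are hypothetical placeholders):

```python
import numpy as np
from scipy.stats import norm

tau2 = 1.0                         # prior variance under the alternative
post_mean, post_var = 0.35, 0.02   # hypothetical posterior moments of theta

# Bayes factor in favour of the null = posterior / prior density at theta = 0
bf_01 = norm.pdf(0, post_mean, np.sqrt(post_var)) / norm.pdf(0, 0, np.sqrt(tau2))
# bf_01 < 1 favours the alternative (theta != 0); bf_01 > 1 favours the null
```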
Overview of the talk (section 2): Numerical Bayesian inference methods: 2.1 Sampling methods; 2.2 Variational methods (ReML, EM, VB)
Sampling methods: MCMC; example: Gibbs sampling, which draws each unknown in turn from its full conditional distribution given the current values of all the others.
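A minimal Gibbs-sampling sketch for a bivariate Gaussian with correlation rho, exploiting the fact that each full conditional is itself Gaussian (the target distribution and all settings are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n_samples = 0.8, 5000
x1, x2 = 0.0, 0.0
samples = np.empty((n_samples, 2))
for t in range(n_samples):
    x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))   # draw x1 ~ p(x1 | x2)
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))   # draw x2 ~ p(x2 | x1)
    samples[t] = x1, x2

print(np.corrcoef(samples[500:].T))   # ~rho after discarding burn-in
```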
Variational methods: VB / EM / ReML

VB: maximize the free energy \(F(q)\) with respect to the variational posterior \(q(\theta)\), under some approximation (e.g., mean field, Laplace); in this framework EM and ReML arise as special cases.
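For reference (a standard identity about variational inference, not copied from the slide), the free energy decomposes as

```latex
F(q) = \int q(\theta)\,\ln\frac{p(y,\theta \mid m)}{q(\theta)}\,d\theta
     = \ln p(y \mid m) - D_{\mathrm{KL}}\!\big[\,q(\theta)\,\big\|\,p(\theta \mid y, m)\,\big]
```

Since the KL term is non-negative, \(F(q) \le \ln p(y \mid m)\): maximizing \(F\) both tightens the bound on the log-evidence and drives \(q(\theta)\) toward the true posterior.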
Overview of the talk (section 3): SPM applications: 3.1 aMRI segmentation; 3.2 Decoding of brain images; 3.3 Model-based fMRI analysis (with spatial priors); 3.4 Dynamic causal modelling
(Figure: the SPM pipeline. Preprocessing: realignment, smoothing, normalisation to a template, segmentation and normalisation. Analysis: general linear model, statistical inference via Gaussian field theory at p < 0.05. Bayesian components: posterior probability maps (PPMs), multivariate decoding, dynamic causal modelling.)
aMRI segmentation: mixture of Gaussians (MoG) model

Each voxel value \(y_i\) is generated from one of the tissue classes (grey matter, white matter, CSF), with a latent label for the i-th voxel and unknown class frequencies, class means and class variances.
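A toy EM sketch for a 1D mixture of Gaussians standing in for the tissue classes (intensities, initial values and class count are simulated; this is not SPM's actual segmentation code):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
# simulated voxel intensities from three "tissue" classes
y = np.concatenate([rng.normal(m, 0.05, 300) for m in (0.3, 0.5, 0.8)])

K = 3
pi = np.full(K, 1 / K)            # class frequencies
mu = np.array([0.2, 0.5, 0.9])    # class means (initial guesses)
var = np.full(K, 0.01)            # class variances

for _ in range(50):
    # E-step: posterior label probabilities p(label = k | y_i)
    resp = pi * norm.pdf(y[:, None], mu, np.sqrt(var))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update class frequencies, means and variances
    Nk = resp.sum(axis=0)
    pi = Nk / len(y)
    mu = (resp * y[:, None]).sum(axis=0) / Nk
    var = (resp * (y[:, None] - mu) ** 2).sum(axis=0) / Nk

labels = resp.argmax(axis=1)      # hard segmentation per "voxel"
```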
Decoding of brain images: recognizing brain states from fMRI

(Paradigm: paced motor responses versus a fixation cross.) Model comparison addresses two questions: the log-evidence of sparse X-Y mappings probes the effect of lateralization, and the log-evidence of bilateral X-Y mappings probes the effect of spatial deployment.
fMRI time series analysis: spatial priors and model comparison

The hierarchical GLM for the fMRI time series includes the GLM coefficients, the prior variance of the GLM coefficients, the prior variance of the data noise, and AR coefficients (to capture correlated noise). Comparing a short-term memory design matrix (X) with a long-term memory design matrix (X) yields PPMs of the regions best explained by the short-term memory model and by the long-term memory model, respectively.
Dynamic Causal Modelling: network structure identification

(Figure: four candidate models m1-m4, each a different network over V1, V5 and PPC driven by the stimulus and modulated by attention; a bar plot of the models' marginal likelihoods selects the best model, m4, together with its estimated effective synaptic strengths: 1.25, 0.13, 0.46, 0.39, 0.26, 0.10.)
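Given log-evidences \(\ln p(y \mid m_i)\) for the candidate models, posterior model probabilities under a flat prior over models follow from a softmax; a minimal sketch (the numbers are placeholders, not those of the figure):

```python
import numpy as np

log_ev = np.array([2.0, 5.0, 9.0, 15.0])   # hypothetical ln p(y | m_i)
p_m = np.exp(log_ev - log_ev.max())         # subtract max for numerical stability
p_m /= p_m.sum()                            # posterior p(m_i | y), flat model prior
print(p_m.round(3))                         # the best model dominates
```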
DCMs and DAGs: a note on causality

(Figure: a three-node network driven by input u, shown unrolled over successive time points.)
Dynamic Causal Modelling: model comparison for group studies

Given the differences in log-model evidences (m1 versus m2) across subjects:
- fixed effect: assume all subjects correspond to the same model
- random effect: assume different subjects might correspond to different models
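Under the fixed-effect assumption, the group log Bayes factor is simply the sum of the subject-wise log Bayes factors; a minimal sketch with a hypothetical log-evidence table (the random-effect approach instead infers the distribution of models across subjects and is not reproduced here):

```python
import numpy as np

# hypothetical log-evidences: rows = subjects, columns = models (m1, m2)
log_ev = np.array([[-120.3, -118.9],
                   [ -98.7,  -97.2],
                   [-143.1, -140.0]])

group_log_bf = (log_ev[:, 1] - log_ev[:, 0]).sum()   # fixed effect: summed ln BF_21
```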
I thank you for your attention.
A note on statistical significance: lessons from the Neyman-Pearson lemma

Neyman-Pearson lemma: the likelihood-ratio (or Bayes factor) test is the most powerful test of a given size for testing the null. ROC analysis (type I error rate versus 1 minus the type II error rate) asks: what is the threshold u on the Bayes factor above which the test yields a type I error rate of 5%? In the example shown, the MVB (Bayes factor) test with u = 1.09 achieves 56% power, whereas CCA (F-statistic) with F = 2.20 achieves 20% power.
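A Monte-Carlo sketch of the thresholding question (a simulated t-like statistic; the effect size, sample size and test are made up, so the numbers will not match the slide's 56% and 20%):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 20, 20000
stat = lambda y: np.sqrt(n) * y.mean() / y.std(ddof=1)   # t-like test statistic

# sampling distributions of the statistic under H0 and a chosen alternative
null = np.array([stat(rng.standard_normal(n)) for _ in range(reps)])
alt = np.array([stat(0.5 + rng.standard_normal(n)) for _ in range(reps)])

u = np.quantile(null, 0.95)   # threshold giving a 5% type I error rate
power = (alt > u).mean()      # 1 - type II error rate at that threshold
```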