Methods for Dummies 2009 Bayes for Beginners Georgina Torbet & Raphael Kaplan.


Bayesian Probability. "Probability" is often used to refer to a frequency, but Bayesian probability is a measure of a state of knowledge: it quantifies uncertainty and allows us to reason with uncertain statements. A Bayesian model is continually updated as more data are acquired.

How did this come about? The billiard table: a white billiard ball is rolled along a line and we look at where it stops, supposing it has a uniform probability of stopping anywhere along the line. It stops at a point p. A red billiard ball is then rolled n times under the same uniform assumption. How many times does the red ball stop further along than the white one?

Bayes' Theorem. Bayes' theorem shows the relationship between a conditional probability and its inverse: it allows us to move between the probability of a hypothesis given the evidence and the probability of that evidence given the hypothesis.

Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B). P(A) is the PRIOR PROBABILITY; it represents your knowledge about A before you have gathered data. e.g. if 0.01 of a population has schizophrenia, then the probability that a person drawn at random has schizophrenia is 0.01.

Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B). P(B|A) is the CONDITIONAL PROBABILITY: the probability of B, given A. e.g. you are trying to roll a total of 8 on two dice. What is the probability that you achieve this, given that the first die rolled a 6?

Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B). So the theorem says: the probability of A given B is equal to the probability of B given A, times the prior probability of A, divided by the prior probability of B.

A Simple Example
Mode of transport | Probability he is late
Car   | 50%
Bus   | 20%
Train | 1%
Suppose that Bob is late one day. His boss wishes to estimate the probability that he travelled to work that day by car. He does not know which mode of transport Bob usually uses, so he gives a prior probability of 1 in 3 to each of the three possibilities.

A Simple Example
P(A|B) = P(B|A) P(A) / P(B)
P(car|late) = P(late|car) x P(car) / P(late)
P(late|car) = 0.5 (he will be late half the time he drives)
P(car) = 0.33 (this is the boss' assumption)
P(late) = 0.5 x 0.33 + 0.2 x 0.33 + 0.01 x 0.33 = 0.71 x 0.33 (all the ways he could be late, added together)
P(car|late) = (0.5 x 0.33) / (0.71 x 0.33) = 0.5 / 0.71 ≈ 0.70
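The same arithmetic is easy to check in code. A minimal Python sketch of the boss's calculation (the numbers come from the slide; the dictionary layout is just for illustration):

```python
# Likelihoods and priors from the slide
likelihood = {"car": 0.50, "bus": 0.20, "train": 0.01}   # P(late | mode)
prior = {mode: 1 / 3 for mode in likelihood}             # P(mode), the boss's guess

# P(late) = sum over modes of P(late | mode) * P(mode)
p_late = sum(likelihood[m] * prior[m] for m in likelihood)

# Bayes: P(mode | late) = P(late | mode) * P(mode) / P(late)
posterior = {m: likelihood[m] * prior[m] / p_late for m in likelihood}
print(posterior["car"])   # ~0.70
```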

A More Complex Example. A disease is present in 0.5% of the population (i.e. 0.005). The blood test is 99% accurate (i.e. 0.99), with a false positive rate of 5% (i.e. 0.05). If someone tests positive, what is the probability that they have the disease?
P(A|B) = P(B|A) P(A) / P(B)
P(disease|pos) = P(pos|disease) x P(disease) / P(pos)
= 0.99 x 0.005 / ((0.99 x 0.005) + (0.05 x 0.995))
= 0.00495 / (0.00495 + 0.04975)
= 0.00495 / 0.0547
≈ 0.09
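Again, a few lines of Python reproduce the calculation (a sketch using the prevalence, sensitivity, and false-positive rate from the slide):

```python
# Values from the slide
p_disease = 0.005        # prevalence: P(disease)
p_pos_disease = 0.99     # sensitivity: P(pos | disease)
p_pos_healthy = 0.05     # false-positive rate: P(pos | no disease)

# Total probability of testing positive
p_pos = p_pos_disease * p_disease + p_pos_healthy * (1 - p_disease)

# Bayes: P(disease | pos)
print(p_pos_disease * p_disease / p_pos)   # ~0.09
```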

What does this mean? If someone tests positive for the disease, they have a 0.09 probability of actually having it, i.e. there is just a 9% chance. Even though the test is very accurate, because the condition is so rare the test may not be useful.

So why is Bayesian probability useful? It allows us to put probability values on unknowns, and to make logical inferences even about uncertain statements. It can reveal counterintuitive results, e.g. that a very accurate test for a rare disease may not be useful.

Bayes in Brain Imaging. [Figure: the SPM analysis pipeline: realignment, smoothing, normalisation (to a template), general linear model, Gaussian field theory, statistical inference at p < 0.05]

Bayes in SPM:
- Realignment & spatial normalisation
- Spatial priors (for the extent of an activation)
- Posterior Probability Maps (PPMs)
- Connectivity (DCM)

The GLM (again): y = Xβ + ε. Observed signal/data (y) = experimental (design) matrix (X) x parameter estimates (β) + error (ε: artifacts, random noise).
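To make the notation concrete, here is a minimal numpy sketch of the GLM with simulated data (the design, true β's, and noise level are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (hypothetical numbers): 100 scans, 2 regressors + constant
n = 100
X = np.column_stack([rng.standard_normal(n),   # regressor 1 (e.g. task)
                     rng.standard_normal(n),   # regressor 2 (e.g. confound)
                     np.ones(n)])              # constant term
beta_true = np.array([2.0, 0.5, 10.0])
y = X @ beta_true + rng.standard_normal(n)     # y = X beta + epsilon

# Classical estimate: least squares, beta_hat = (X'X)^-1 X'y
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # close to beta_true
```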

Bayes and β. Use priors on the variance of the regression parameters (the β's) in our GLM. This allows us to compare the strength of different β's and how much each contributes to the linear model. Furthermore, it lets us ask how plausible a particular β value (parameter estimate) is, given our data.
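As a sketch of what a prior on β buys us, here is Bayesian linear regression with a Gaussian prior, assuming (for simplicity) known noise variance; the prior width tau2 and all the numbers are illustrative assumptions, not SPM's actual defaults:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Same simulated GLM as above (hypothetical numbers)
n, p = 100, 3
X = np.column_stack([rng.standard_normal(n),
                     rng.standard_normal(n),
                     np.ones(n)])
y = X @ np.array([2.0, 0.5, 10.0]) + rng.standard_normal(n)

sigma2 = 1.0   # noise variance (assumed known, for simplicity)
tau2 = 10.0    # prior variance: beta ~ N(0, tau2 * I)

# With Gaussian likelihood and prior, the posterior over beta is Gaussian:
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
post_mean = post_cov @ (X.T @ y) / sigma2
post_sd = np.sqrt(np.diag(post_cov))

# "How plausible is beta_1 > 0 given the data?" is a posterior probability:
print(1 - norm.cdf(0.0, loc=post_mean[0], scale=post_sd[0]))
```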

Why can’t we always use a T-Test to find out what we need?

Shortcomings of Classical Inference in fMRI
1. One can never reject the alternative hypothesis, because the chance of getting exactly a zero effect is zero. (e.g. showing that a brain region responds to viewing faces but does not respond at all to viewing trees.)
2. Along the same lines, given enough subjects or scans, almost any effect becomes significant at every voxel: the multiple comparisons problem (see the sketch after this list).
3. Correcting for multiple comparisons: the p value of an activation changes with the search volume, which is not how we would like inference to behave.
How do we rephrase our question to get the answers we want?
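To illustrate point 2, a small simulation (hypothetical effect size and sample sizes) shows how an essentially negligible effect becomes "significant" once n is large enough:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect = 0.01   # a real but negligible effect (hypothetical)

for n in [100, 10_000, 1_000_000]:
    sample = true_effect + rng.standard_normal(n)
    t, p = stats.ttest_1samp(sample, 0.0)
    print(f"n = {n:>9}: p = {p:.4f}")
# As n grows, even this negligible effect eventually reaches 'significance'.
```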

"All these problems would be eschewed by using the probability that a voxel had activated, or indeed that its activation was greater than some threshold. This sort of inference is precluded by classical approaches, which simply give the likelihood of getting the data, given no activation. What one would really like is the probability distribution of the activation given the data. This is the posterior probability used in Bayesian inference." (Human Brain Function, Chapter 17, p. 4; chapter authors Karl Friston and Will Penny; eds. Ashburner, Friston & Penny.) What, then, is the solution?

Comparing the Two. Classical inference: what is the likelihood that our data are not the result of random chance? (e.g. following a nested design: what is the likelihood of getting these data given there is no activation?) Bayesian inference: does our hypothesis fit our data, and does it work better than other models? (e.g. assess how well a model fits our data: what is the probability of this activation given the data?)

Why Use It? After all, it is a subjective model, isn't it? Our inference is only as good as our prior, right?
- We can rule out or accept the null hypothesis, by looking at the null given the data instead of the data given the null.
- This also means we can compare any models (including the null hypothesis), and even assess the validity of our priors.
- We can estimate the plausibility that one β has a stronger effect than another β in our GLM.
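As a toy illustration of weighing the null against an alternative, here is the Bayes factor for a simple Gaussian model with known noise; the prior width tau is an assumption of this sketch, not a recommended value:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, size=50)   # data actually generated under the null
sigma, tau = 1.0, 1.0               # known noise sd; prior sd on the effect
n, ybar = len(y), y.mean()

# Marginal density of the sample mean under each model:
#   H0: effect = 0            -> ybar ~ N(0, sigma^2 / n)
#   H1: effect ~ N(0, tau^2)  -> ybar ~ N(0, sigma^2 / n + tau^2)
m0 = norm.pdf(ybar, 0.0, np.sqrt(sigma**2 / n))
m1 = norm.pdf(ybar, 0.0, np.sqrt(sigma**2 / n + tau**2))

print("Bayes factor BF01 =", m0 / m1)   # > 1 is evidence FOR the null
```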

The Bayesian Paradigm: likelihood and priors. Given a generative model m:
Likelihood: p(y | θ, m)
Prior: p(θ | m)
Bayes' rule: p(θ | y, m) = p(y | θ, m) p(θ | m) / p(y | m)

Hierarchical Models: levels of analysis. Even though we cannot measure at every level, we can place priors on what we think might be going on at each level, using empirical Bayes assumptions. We can then compare models at each level to determine what best fits our data, from a single neurotransmitter all the way up to a cognitive network. (Churchland and Sejnowski, 1988, Science)
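A minimal sketch of the empirical Bayes idea at one level of such a hierarchy: estimate the group-level prior from the data itself, then use it to shrink each subject's estimate (all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-level model (hypothetical numbers):
#   group level:   theta_i ~ N(mu, tau^2)   (subject effects)
#   subject level: y_i ~ N(theta_i, s^2)    (measured effects, known noise s)
n_sub, s = 20, 2.0
theta = rng.normal(5.0, 1.0, n_sub)          # true subject effects
y = theta + rng.normal(0.0, s, n_sub)        # noisy measurements

# Empirical Bayes: estimate the group-level prior from the data itself
mu_hat = y.mean()
tau2_hat = max(y.var(ddof=1) - s**2, 0.0)    # moment estimate of tau^2

# Posterior mean for each subject shrinks y_i towards the group mean
w = tau2_hat / (tau2_hat + s**2)
theta_post = mu_hat + w * (y - mu_hat)

print("raw MSE:   ", np.mean((y - theta) ** 2))
print("shrunk MSE:", np.mean((theta_post - theta) ** 2))  # typically smaller
```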

Hierarchical models. [Figure: a hierarchical model, showing the hierarchy of levels and the direction of causality]

Applying Bayesian Model Comparison to Neuroimaging What are some ways we can use Bayesian Inference in SPM8?

Example 1: Pharmacological neuroimaging experiments and clinical applications (Parkinson's, Alzheimer's, etc.). Use priors to compare activation in a particular brain region that a drug targets (basal ganglia, hippocampus, etc.) against the rest of the brain. Using model comparison, we can assess the relative strength of activation in that region, to decide whether the targeted brain region was influenced by the pharmacological intervention more than the rest of the brain or other specific regions.

Example 2: EEG/MEG source reconstruction (Mattout et al., 2006, NeuroImage)

Other Examples/Uses: anatomical segmentation (grey matter, white matter, CSF) [Ashburner et al., Human Brain Function, 2003]; Dynamic Causal Modelling (DCM).

Take-Home Messages
- Bayesian inference allows you to ask different questions than you can with classical approaches (e.g. it lets you accept the null hypothesis as the most likely hypothesis/model, instead of merely failing to reject it).
- It is an extremely useful tool for model comparison: you can compare models that are not nested, instead of only comparing against random chance.
- It allows the incorporation of prior evidence and helps constrain inferences, so we can see how plausible they are given the data.

Conclusion: Bayesian inference is applicable to rather more than a billiards game.

Acknowledgements and Recommended Resources: Jean Daunizeau and his SPM course slides; past MfD slides; Human Brain Function (eds. Ashburner, Friston, and Penny).