Presentation transcript:

**please note** Many slides in part 1 are corrupt and have lost images and/or text. Part 2 is fine. Unfortunately, the original is not available, so please refer to previous years’ slides for part 1. Thanks, PS

Random Field Theory Laurel Morris & Tim Howe Methods for Dummies February 2012

Overview. Part 1: multiple comparisons; family-wise error; Bonferroni correction; spatial correlation. Part 2: the solution, Random Field Theory; an example in SPM.

Raw data are collected as a group of voxels, and we calculate a test statistic for each voxel. Many, many, many voxels…

Rejecting the null hypothesis: determine whether the value of a single specified voxel is significant. Create a null hypothesis, H0 (activation is zero): the data are randomly distributed, with a Gaussian distribution of noise. Compare our voxel's value to the null distribution.

Bonferroni correction. P_FWE = acceptable Type 1 (family-wise) error rate; α = corrected per-test p-value; n = number of tests. Since P_FWE ≤ nα, we set α = P_FWE / n.
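As a minimal sketch (not from the original slides), the Bonferroni-corrected per-voxel threshold can be computed as below; the voxel count is an arbitrary figure chosen for illustration.

```python
from scipy.stats import norm

p_fwe = 0.05        # desired family-wise error rate
n_voxels = 100000   # assumed number of tests (voxels) -- illustrative only

alpha_per_voxel = p_fwe / n_voxels       # Bonferroni-corrected per-voxel alpha
z_threshold = norm.isf(alpha_per_voxel)  # one-tailed Z threshold for that alpha

print(f"per-voxel alpha = {alpha_per_voxel:.1e}, Z threshold = {z_threshold:.2f}")
# per-voxel alpha = 5.0e-07, Z threshold ~ 4.89
```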

Spatial correlation. Averaging over one voxel and its neighbours reduces the number of independent observations; this is usually a weighted average using a (Gaussian) smoothing kernel. Dependence between voxels arises from the physiological signal, data acquisition, and spatial preprocessing.

FWHM

Overview. A large volume of data requiring a large number of statistical tests creates a multiple comparisons problem. Bonferroni correction (α = P_FWE / n) gives a corrected p-value, but it is unfeasibly conservative here and produces too many false negatives; it is not an acceptable method, because it is based on the assumption that all the voxels are independent. Random field theory (RFT) instead uses α = P_FWE ≈ E[EC] to obtain a corrected p-value.

Random Field Theory, Part II. Tim Howe

Definitions. Random field theory (RFT) is a body of mathematics which defines theoretical results for continuously varying (i.e. smooth) topologies [1]. These results can be applied approximately to statistical maps. A random field is a continuously varying topology in n dimensions, or in our case an array of random numbers whose values are mapped onto such a space. This mapping implies that the values exhibit some spatial correlation, i.e. the value of a given element depends on its neighbours in the field [2]. [1] Brett M., Penny W. and Kiebel S. (2003) Human Brain Mapping, Chapter 14: An Introduction to Random Field Theory. [2]

The random field resembles our data under the null hypothesis (all activations are driven merely by chance; each voxel value is a random number). Our data under the null hypothesis resemble the random field in that they are (a) random, but (b) spatially correlated: firstly because of the smoothing we have applied, but also because neighbouring voxels may share activation due to underlying anatomical connectivity.

Why we need RFT. PROBLEM: as described earlier, the Bonferroni correction is too conservative for the spatially correlated data we're interested in. SOLUTION: under the null hypothesis our data approximate a random field, so any deductions we can make about the random field will also hold for our data under the null hypothesis.

The Euler characteristic (EC). The Euler characteristic is an invariant topological property of a space. For our purposes, the EC can be thought of as (number of blobs − number of holes) in the random field after we apply a threshold to it (illustrated on the slide at thresholds z = 0 and z = 1). We can calculate this for a random field of a given size and smoothness, and in turn it can help us solve the multiple comparisons problem…

Euler characteristic and FWE. The Euler characteristic is a topological measure, EC ≈ number of blobs − number of holes. At the high thresholds of a random field we care about there are no holes and rarely more than one blob, so it just counts blobs. Therefore

FWER = P(max voxel ≥ u | H0)
     = P(one or more blobs | H0)
     ≈ P(EC ≥ 1 | H0)
     ≈ E[EC | H0]

So at high thresholds the expected EC approximates the chance of a blob appearing at random, and so approximates α!
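As a hedged illustration of this identity (not part of the original slides), the short simulation below generates smooth 2D Gaussian noise fields, thresholds them at a high value, and compares the proportion of fields containing at least one blob (the empirical FWER) with the average blob count (an empirical stand-in for E[EC]). The field size, smoothing width and threshold are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(0)
shape, sigma, u, n_sims = (256, 256), 3.0, 4.0, 500   # illustrative settings

n_fields_with_blob, total_blobs = 0, 0
for _ in range(n_sims):
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()              # re-standardise so values are roughly N(0, 1)
    n_blobs = label(field > u)[1]     # count connected supra-threshold clusters
    n_fields_with_blob += int(n_blobs > 0)
    total_blobs += n_blobs

print("empirical FWER (P of at least one blob):", n_fields_with_blob / n_sims)
print("empirical mean blob count:", total_blobs / n_sims)
# At this high threshold the two numbers come out similar, because fields
# containing more than one blob are rare.
```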

Process of RFT application: 1. Estimate the smoothness. 2. Establish RFT parameters and generate the Euler characteristic (EC). 3. Obtain P_FWE.

1. Smoothness estimation: creating a random field that resembles our data. Our data approximate a random field of a given smoothness. This property can be thought of as the degree of spatial correlation, or intuitively as the variance of the gradient in each spatial dimension. We do not know it a priori: although we may know the FWHM of our smoothing kernel, we are ignorant of the underlying anatomical correlation between voxels. For the random field to approximate our data, we need to give it an equivalent smoothness.

The smoothness is calculated a posteriori by SPM from the observed degree of spatial correlation. It does this by estimating the number of RESOLUTION ELEMENTS (RESELs), which is approximately equal to the number of independent observations.

The number of RESELs depends on the FWHM and on the number of voxels/pixels/elements. A RESEL is a block of values (e.g. pixels) the same size as the FWHM, and it is one of the factors that determines the p-value in RFT. Example: if we have a field of white-noise pixels smoothed with a FWHM of 10 by 10 pixels, then a RESEL is a block of 100 pixels; as there are 10,000 pixels in our image, there are 100 RESELs (see the sketch below).
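The RESEL arithmetic in that example is simple enough to spell out; a minimal sketch using the slide's numbers (a 10,000-pixel image is assumed here to be 100 × 100):

```python
# RESEL count for the slide's example: a white-noise image of 10,000 pixels
# (assumed 100 x 100) smoothed with a FWHM of 10 x 10 pixels.
n_pixels = 100 * 100
fwhm_x, fwhm_y = 10, 10

resel_size = fwhm_x * fwhm_y       # one RESEL = 10 x 10 = 100 pixels
n_resels = n_pixels / resel_size   # 10,000 / 100 = 100 RESELs
print(n_resels)                    # 100.0
```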

Process of RFT application: 1. Estimate the smoothness. 2. Establish RFT parameters and generate the Euler characteristic. 3. Obtain P_FWE.

2. Estimating RFT parameters: the Euler characteristic (EC). The Euler characteristic is an invariant topological property of a space. For our purposes, the EC can be thought of as the number of blobs in an image after thresholding (illustrated on the slide at thresholds z = 0 and z = 1).

In a 2D field, the expected EC is

E[EC] = R (4 ln 2) (2π)^(−3/2) Zt exp(−Zt²/2)

where R = number of RESELs and Zt = our threshold value of Z. This might look a bit complicated, but it is basically E[EC] = R · K · Zt exp(−Zt²/2), where K collects the constants and the final factor just describes the curve plotted on the slide.

At the high values of Zt we're interested in, E[EC] < 1, and so it gives us the probability that a blob will exist by chance.
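A small sketch of that 2D formula (not the presenters' code), evaluated for an assumed RESEL count, shows how E[EC] falls below 1 at the high thresholds we care about:

```python
import numpy as np

def expected_ec_2d(resels, z):
    """Expected Euler characteristic of a 2D Gaussian random field
    with `resels` resolution elements, thresholded at Z = z."""
    return resels * (4 * np.log(2)) * (2 * np.pi) ** (-1.5) * z * np.exp(-z ** 2 / 2)

R = 100  # assumed RESEL count (matches the earlier 100 x 100 pixel example)
for z in (2.0, 3.0, 4.0, 5.0):
    print(f"Zt = {z}: E[EC] = {expected_ec_2d(R, z):.4f}")
# E[EC] is well above 1 at low thresholds and far below 1 at high ones.
```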

Process of RFT application: 1. Estimate the smoothness. 2. Establish RFT parameters and generate the Euler characteristic. 3. Obtain P_FWE.

The average, or expected, EC, E[EC], corresponds (approximately) to the probability of finding an above-threshold blob in our statistic image. At high Zt, E[EC] ≈ the probability of getting a z-score above the threshold by chance, so

α ≈ E[EC] = R (4 ln 2) (2π)^(−3/2) Zt exp(−Zt²/2)

E[EC] ≈ α. Given that the expected EC is approximately equal to alpha, for a given number of RESELs we can set E[EC] to our desired alpha and use the equation to find the appropriate value of Zt. We then apply that Zt as the threshold for our image, giving a threshold that corresponds to our FWE-corrected p-value.
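Continuing that sketch (illustrative only), the threshold Zt for a desired α can be found by solving E[EC](Zt) = α numerically with a root finder; the RESEL count is the same assumed value as before.

```python
import numpy as np
from scipy.optimize import brentq

def expected_ec_2d(resels, z):
    # Same illustrative 2D formula as in the sketch above.
    return resels * (4 * np.log(2)) * (2 * np.pi) ** (-1.5) * z * np.exp(-z ** 2 / 2)

alpha, R = 0.05, 100   # desired FWE rate and assumed RESEL count

# E[EC](z) is decreasing for z >= 1, so there is a single root in this bracket.
z_fwe = brentq(lambda z: expected_ec_2d(R, z) - alpha, 1.0, 10.0)
print(f"FWE-corrected Z threshold: {z_fwe:.2f}")   # about 3.8 for these numbers
```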

Finding the EC value. Our data are of course in three dimensions, which makes the equation for the EC a little more complicated, but the principle remains the same.

SPM8 and RFT

Summary of FWE correction by RFT. RFT stages in SPM:
1. First, SPM estimates the smoothness (spatial correlation) of our statistical map; R is calculated and saved in the RPV.img file.
2. Then it uses the smoothness values in the appropriate RFT equation to give the expected EC at different thresholds.
3. This allows us to calculate the threshold at which we would expect a proportion α of equivalent statistical maps arising under the null hypothesis to contain at least one area above threshold.

Example


SPM8 and RFT. We can use FWE correction in different ways in SPM8 [1]:
1. Using FWE correction in SPM calculates the threshold over the whole brain image. We can specify an area of interest by masking the rest of the brain when we do the second-level statistical analysis.
2. Using an uncorrected threshold ('none', usually p = 0.001) and then correcting for the area we specify (Small Volume Correction, SVC).
[1] SPM manual

Acknowledgements. The topic expert: Guillaume Flandin. The organisers: Rumana Chowdhury, Peter Smittenaar, Suz Prejawa. Methods for Dummies 2011/12. (Note to self: go to B08A.)