Methods for Dummies Random Field Theory

Methods for Dummies: Random Field Theory. Dominic Oliver & Stephanie Alley

Structure
- What is the multiple comparisons problem?
- Why can't we use the classical Bonferroni approach?
- What is Random Field Theory?
- How do we implement it in SPM?

Processing so far

Hypothesis testing and the type I error. We test our t-statistic against the null hypothesis (H0), asking how probable the result is by chance, and call it significant at p < 0.05. The chosen threshold (u) determines the false positive rate (α). Because the t-statistic is drawn from a continuous distribution, you always run some risk of a false positive, or type I error.

Significance of type I errors in fMRI. Since α remains constant, the more tests you perform, the more type I errors you make. Given the number of tests performed in fMRI, this threshold produces an impractical number of type I errors: with 60,000 voxels per brain we run 60,000 t-tests, and at α = 0.05 we expect 60,000 × 0.05 = 3,000 type I errors.

Why is this a problem? 3,000 is a large number, but it is still only 5% of our total voxels. The danger is vividly illustrated by "Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon": without correction, a dead salmon showed "activation" at t(131) > 3.15, p(uncorrected) < 0.001 (Bennett, Miller & Wolford, 2009).

So what can we do about it? Adjust the threshold (u), taking into account the number of tests being performed; any values above the new threshold are then highly unlikely to be false positives. (Figure: the same statistical map thresholded at t > 0.5, 1.5, 2.5, 3.5, 4.5, 5.5 and 6.5.)

The Bonferroni correction: why can't we use it?

Bonferroni correction example. The single-voxel probability threshold α is the family-wise error rate divided by the number of voxels: α = P_FWE / n. E.g. for 100,000 t values, each with 40 d.f., and P_FWE = 0.05: α = 0.05 / 100,000 = 0.0000005, which corresponds to t = 5.77. A voxel where t > 5.77 has only a 5% chance of appearing anywhere in a volume of 100,000 t statistics drawn from the null distribution.
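The slide's numbers can be reproduced in a few lines (a sketch using SciPy; the figures of 100,000 tests and 40 d.f. come from the slide):

```python
# Reproduce the Bonferroni example: 100,000 t-values, 40 d.f.,
# desired family-wise error rate P_FWE = 0.05.
from scipy import stats

n_tests = 100_000
p_fwe = 0.05
alpha = p_fwe / n_tests          # per-voxel threshold: 5e-07

# t value whose upper-tail probability is alpha (inverse survival function)
t_thresh = stats.t.isf(alpha, df=40)
print(f"alpha = {alpha:.1e}, t threshold = {t_thresh:.2f}")  # ~5.77
```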

So what's the issue? The Bonferroni correction is too conservative for fMRI data. Bonferroni relies on independent values, but fMRI data are not independent, due to:
- how the scanner collects data and reconstructs the images
- anatomy and physiology
- preprocessing (e.g. normalisation, smoothing)

The problem: what Bonferroni needs is a series of independent statistical tests; what we have is a spatially dependent, continuous statistical image.

Spatial dependence. Imagine a slice made of 100 × 100 voxels filled with random values from a normal distribution: 10,000 independent values, for which Bonferroni is accurate.

Spatial dependence. Now add spatial dependence by averaging across 10 × 10 squares. There are still 10,000 numbers in the image, but only 100 of them are independent, so the Bonferroni correction is 100 times too conservative.
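A minimal NumPy sketch of this construction (sizes as in the slide; `np.kron` is just used to paint each block mean back over its 10 × 10 square):

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 x 100 slice of independent standard-normal values: 10,000 tests.
independent = rng.standard_normal((100, 100))

# Introduce spatial dependence: average within each 10 x 10 square,
# then paint the average back over the square.
blocks = independent.reshape(10, 10, 10, 10).mean(axis=(1, 3))  # 100 block means
dependent = np.kron(blocks, np.ones((10, 10)))                  # back to 100 x 100

# Still 10,000 numbers, but only 100 distinct (independent) values:
print(dependent.shape, np.unique(dependent).size)  # (100, 100) 100
```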

Smoothing: convolving the fMRI signal with a Gaussian kernel. This improves the signal-to-noise ratio (SNR) and hence sensitivity, accommodates anatomical and functional variation between subjects, and improves statistical validity by making the distribution of errors more normal.

Spatial dependence is increased by smoothing. When the map is smoothed, each value in the image is replaced by a weighted average of itself and its neighbours: the image is blurred and the number of independent values is reduced.
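This effect is easy to demonstrate with `scipy.ndimage.gaussian_filter` (a sketch; the grid size and kernel width are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
noise = rng.standard_normal((200, 200))

# Smooth with a Gaussian kernel; FWHM = sigma * sqrt(8 ln 2) ~ 2.355 * sigma
smoothed = gaussian_filter(noise, sigma=3, mode='wrap')

def neighbour_corr(img):
    """Correlation between horizontally adjacent values."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

print(f"before smoothing: {neighbour_corr(noise):+.2f}")     # near zero
print(f"after smoothing:  {neighbour_corr(smoothed):+.2f}")  # strongly positive
```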

If not Bonferroni, then what? The idea behind the Bonferroni correction is sound in this context: we need to adjust our threshold in a similar way, but also take into account the spatial dependence in our data. What does this? Random Field Theory.

Random Field Theory

Random field theory provides a means of working with smooth statistical maps. It allows us to determine the height threshold that will give the desired family-wise error rate. Random fields have specified topological characteristics, and we apply topological inference to detect activations in SPMs.

Three stages of applying RFT: 1. estimate smoothness; 2. calculate the number of resels; 3. get an estimate of the Euler characteristic at different thresholds. Resels (resolution elements) depend on the smoothness and the number of voxels (i.e. the size of the search region).

1. Estimate smoothness. The image contains both applied and intrinsic smoothness. In practice, smoothness is estimated from the residuals (error) of the GLM, Y = Xβ + ε, where Y is the data, X the design matrix, β the parameter estimates and ε the residual error. I.i.d. noise means independently and identically distributed noise: all variables have the same probability distribution and are mutually independent.
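SPM's actual estimator works on the standardised residual images, but the core idea can be sketched on synthetic data: assuming a Gaussian autocorrelation, smoothness follows from the variance of the field's spatial derivatives (the grid size and kernel width below are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Stand-in "residual" field with a known applied smoothness:
sigma = 3.0                                   # kernel sd in voxels
true_fwhm = sigma * np.sqrt(8 * np.log(2))    # ~7.06 voxels

resid = gaussian_filter(rng.standard_normal((512, 512)), sigma, mode='wrap')

# Smoothness from the variance of spatial derivatives (Gaussian ACF assumed):
# FWHM = sqrt(4 ln 2 * var(e) / var(de/dx))
dx = np.diff(resid, axis=1)
fwhm_est = np.sqrt(4 * np.log(2) * resid.var() / dx.var())

print(f"applied FWHM = {true_fwhm:.2f}, estimated = {fwhm_est:.2f}")
```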

2. Calculate the number of resels. Start from the estimated smoothness, the FWHM (Full Width at Half Maximum). The smoothed image contains spatial correlation, which is typical of the output of functional imaging analyses. This is the problem: there is no simple way of calculating the number of independent observations in smoothed data, so we cannot use the Bonferroni correction. Random field theory addresses this.

Express your search volume in resels. A resel (resolution element) is simply a block of values (e.g. pixels) the same size as the FWHM, Ri = FWHMx × FWHMy × FWHMz. Counting in resels effectively "restores" the independence of the data.
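For example (all numbers below are hypothetical, in voxel units):

```python
import numpy as np

# Hypothetical search volume and estimated smoothness:
search_volume = 64 * 64 * 30          # number of voxels in the search region
fwhm = (8.0, 8.0, 6.0)                # estimated FWHM along x, y, z, in voxels

resel_size = np.prod(fwhm)            # voxels per resel: 8 * 8 * 6 = 384
n_resels = search_volume / resel_size
print(f"{n_resels:.0f} resels")       # 320
```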

3. Get an estimate of the Euler characteristic. The Euler characteristic (EC) is a topological property: threshold the SPM, and the EC is the number of blobs minus the number of holes. (The holes are not relevant here: we are only interested in the EC at high thresholds, where it approximates the probability of a FWE.) The EC is named after Leonhard Euler (1707-1783), the Swiss mathematician of "Seven Bridges of Königsberg" fame.

Steps 1 and 2 yield a "fitted random field" of the appropriate smoothness. Now: how likely are we to observe above-threshold ("significantly different") local maxima (or clusters, or sets) under H0? How do we find out? The EC.

Euler characteristic and FWE. In the example image, at a threshold of Zt = 2.5 we find EC = 3 above-threshold blobs. If we increase the Z-score threshold to 2.75, the two central blobs disappear, because their Z scores were less than 2.75, leaving EC = 1. The probability of a family-wise error is approximately equivalent to the expected Euler characteristic, i.e. the expected number of above-threshold blobs.
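The blob counting can be mimicked on a simulated smooth field with `scipy.ndimage.label` (a sketch; the field size, smoothness and random seed are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(3)

# Smooth Gaussian field, rescaled to unit variance so thresholds act as Z scores
field = gaussian_filter(rng.standard_normal((256, 256)), sigma=5, mode='wrap')
field /= field.std()

# At high thresholds there are (almost) no holes, so the Euler
# characteristic is just the number of above-threshold blobs:
for z_t in (2.5, 2.75, 3.0):
    n_blobs = label(field > z_t)[1]   # label() returns (labelled image, count)
    print(f"Zt = {z_t}: EC ~ {n_blobs} blobs")
```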

How to get E[EC] at different thresholds: for a 2D Gaussian field, E[EC] = R (4 ln 2) (2π)^(-3/2) Zt exp(-Zt²/2), where R is the number of resels and Zt is the Z (or t) score threshold. From this equation, it looks as though the threshold depends only on the number of resels in our image.
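In code, the 2D form of this expectation reproduces the figures quoted for the running example (1.9 at Z = 2.5 and 1.1 at Z = 2.75 for a 100-resel image):

```python
import numpy as np

def expected_ec_2d(resels, z):
    """Expected Euler characteristic of a 2D Gaussian random field
    thresholded at Z = z, for a search region of the given resel count."""
    return resels * (4 * np.log(2)) * (2 * np.pi) ** (-1.5) * z * np.exp(-z**2 / 2)

# 100-resel image, at the thresholds used in the running example:
print(f"{expected_ec_2d(100, 2.5):.1f}")   # 1.9
print(f"{expected_ec_2d(100, 2.75):.1f}")  # 1.1
```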

Given the number of resels and E[EC], we can find the appropriate Z/t threshold. Plotting E[EC] for an image of 100 resels over Z-score thresholds 0-5 does a reasonable job of predicting the EC in our image: at a Z threshold of 2.5 it predicts an EC of 1.9, where we observed a value of 3; at Z = 2.75 it predicts an EC of 1.1, for an observed EC of 1.


The shape of the search region matters too! Computing E[EC] from the resel count alone is not strictly accurate: it is a close approximation only if the ROI is large compared to the size of a resel. In general E[EC] depends on the number, shape and size of the resels, which matters for small or oddly shaped ROIs. Example (resel width = 10 pixels): a central 30 × 30 pixel box contains at most 16 "resels", whereas a 2.5-pixel-wide frame around the edge has the same volume but at most 32 "resels", so the multiple comparison correction for the frame must be more stringent.

Different types of topological inference. Topological inference can be about peak height (voxel level), regional extent (cluster level), or the number of clusters (set level). Voxel-level inference, based on a height threshold tα, asks: is the activation at a given voxel significantly non-zero? Cluster-level and set-level inference sit in a bigger framework and require height and spatial-extent thresholds (e.g. tclus) to be specified by the user; corrected p-values can then be derived that pertain to the chosen topological feature.

Assumptions made by RFT: the underlying fields are continuous with a twice-differentiable autocorrelation function, and the error fields are a reasonable lattice approximation to an underlying random field with a multivariate Gaussian distribution. The autocorrelation function need not be Gaussian. If the data have been sufficiently smoothed and the general linear model is correctly specified (so that the errors are indeed Gaussian), the RFT assumptions will be met. Check: the FWHM must be at least 3 voxels in each dimension. In short: smooth enough + correct GLM (Gaussian errors) → RFT applies.

The assumptions are not met if the data are not sufficiently smooth, e.g. with a small number of subjects (as in a random effects analysis, chp. 12) or with high-resolution data, because the resulting error fields will not be very smooth and may violate the "reasonable lattice approximation" assumption. Alternatives in that case: non-parametric methods to control the FWE (permutation tests), or controlling the FDR (false discovery rate) instead.
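As an illustration of the permutation alternative, here is a toy max-statistic sign-flipping test (everything here, including the design and effect size, is made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy one-sample design: 12 subjects x 500 voxels, true signal in voxels 0-9
data = rng.standard_normal((12, 500))
data[:, :10] += 1.5

def t_map(x):
    """One-sample t statistic at every voxel."""
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(len(x)))

observed = t_map(data)

# Under H0 each subject's map is symmetric about zero, so random sign
# flips generate the null distribution of the MAXIMUM t across voxels;
# its 95th percentile is an FWE-corrected threshold for the whole map.
max_null = np.array([
    t_map(data * rng.choice([-1.0, 1.0], size=(12, 1))).max()
    for _ in range(1000)
])
t_crit = np.quantile(max_null, 0.95)
print(f"FWE-corrected t threshold = {t_crit:.2f}")
print(f"voxels surviving: {(observed > t_crit).sum()}")
```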

Implementation in SPM


Activations significant at cluster level but not at voxel level.

Sources: Human Brain Function, Chapter 14, "An Introduction to Random Field Theory" (Brett, Penny, Kiebel); http://www.fil.ion.ucl.ac.uk/spm/course/video/#RFT; http://www.fil.ion.ucl.ac.uk/spm/course/video/#MEEG_MCP; former MfD presentations; expert: Guillaume Flandin.