Comparison of Parametric and Nonparametric Thresholding Methods for Small Group Analyses
Thomas Nichols & Satoru Hayasaka
Department of Biostatistics, U. of Michigan

Abstract
Powerful and valid thresholding methods are needed to make the most of functional neuroimaging data. Standard thresholding methods use random field theory (RFT) to obtain Familywise Error Rate (FWER) corrected thresholds [1]. However, these methods have had little validation with t images, the statistic relevant for small group random effects analyses [2]. In this work we use the nonparametric permutation test to validate RFT methods. We use real datasets and simulated null t images to assess when, in terms of degrees of freedom (DF) and smoothness, the parametric methods agree with the exact nonparametric methods. We find that for low DF there is dramatic conservativeness in the RFT results, even for typical smoothness (3 voxel FWHM). In such settings the nonparametric permutation test can overcome this conservativeness.

Motivation
How does Random Field Theory (RFT) perform on small group data?
Approach
– Use real data and permutation test to evaluate RFT in realistic setting
– Use simulated data to understand RFT performance when truth known

Introduction
Massively Univariate Modeling
– Standard approach to fMRI and PET data
– Fit a univariate model at each point in the brain
– Create images of the test statistic, assessing the effect of interest
Massive Multiple Comparisons Problem
– If one has 100,000 voxels, an α = 0.05 threshold will yield 5,000 false positives on average
– Must control some measure of false positives

Familywise Error Rate (FWER) Solution
– A familywise error is the existence of any false positives
– Standard approach is to control the chance of a familywise error, the FWER
– Standard methods (e.g. Bonferroni) control FWER
Random Field Theory FWER Solution [1]
– General, easy to apply
  Only requires smoothness and volume
  Results available for images of Z, t, F, χ², etc.

– Heavy on assumptions
  Gaussian data
  Stationary (or unwarpable) covariance
  Smooth images required
– Theory is for continuous random fields
  Amount of smoothness needed for t images not known
  Asymptotic, approximate
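
For comparison with the parametric corrections above, the Bonferroni threshold is easy to compute directly. Below is a minimal sketch; the voxel count, degrees of freedom, and α level are illustrative values, not taken from the poster's analyses.

# Minimal sketch: Bonferroni-corrected voxel-level threshold for a t image.
# The voxel count, degrees of freedom, and alpha below are illustrative.
from scipy import stats

n_voxels = 32 * 32 * 32   # number of tests in the search volume
df = 9                    # degrees of freedom of the t image
alpha = 0.05              # desired familywise error rate

# Split alpha across all voxels, then invert the t survival function.
t_bonf = stats.t.isf(alpha / n_voxels, df)
print(f"Bonferroni-corrected t threshold: {t_bonf:.2f}")

RFT instead derives the threshold from the expected Euler characteristic of the thresholded field, which depends on the image's smoothness and volume rather than the raw voxel count.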

Permutation FWER Solution [3]
– Less general, computationally intensive
  Repeatedly analyze your data, permuting each time
  Permute as allowed by the null hypothesis
– Very weak assumptions
  Only requires that the data are exchangeable under the null hypothesis
  For intersubject "second level" analyses (fMRI, PET, MEG) this is reasonable
  "Exact" – the false positive rate is controlled as specified
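
To make the relabeling scheme concrete, here is a minimal sketch of a one-sample second-level permutation test using sign flips of subject images, with the FWER-corrected threshold taken from the distribution of the maximum statistic. The data shapes, relabeling count, and random seed are illustrative assumptions; this is not the SnPM implementation.

# Minimal sketch: one-sample (second-level) permutation test with sign
# flipping and FWER control via the maximum-statistic distribution.
# Shapes, seed, and relabeling count are illustrative, not SnPM's code.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 10, 1000                          # hypothetical data size
data = rng.standard_normal((n_subj, n_vox))       # subject contrast images

def t_image(x):
    # Voxelwise one-sample t statistic across subjects.
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

t_obs = t_image(data)

# Under the null of symmetric errors about zero, each subject's image can
# be sign-flipped; record the maximum t over voxels for each relabeling.
n_perm = 1000
max_t = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    max_t[i] = t_image(signs * data).max()

# Corrected threshold: 95th percentile of the max-t permutation distribution.
thresh = np.percentile(max_t, 95)
print(f"corrected threshold = {thresh:.2f}, "
      f"voxels above threshold = {(t_obs > thresh).sum()}")

For exactness, the unpermuted labeling is included among the relabelings, so that the observed statistic is part of the null distribution.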

Methods: Real Data
– 9 fMRI and 2 PET group datasets
– Each analyzed with summary image approach [2]
– Corrected threshold and number of significant voxels recorded
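
The summary image (two-stage) approach referenced as [2] fits each subject separately and carries a single contrast image per subject to a group-level one-sample t test. A minimal sketch under assumed array names and sizes; the per-subject beta estimates and contrast vector here are purely illustrative.

# Minimal sketch of the summary-image (two-stage) group analysis:
# stage 1 reduces each subject to one contrast image; stage 2 runs a
# voxelwise one-sample t test across those images. Names/sizes illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_regressors, n_vox = 12, 4, 1000
betas = rng.standard_normal((n_subj, n_regressors, n_vox))  # per-subject GLM estimates
contrast = np.array([1.0, -1.0, 0.0, 0.0])                  # effect of interest

# Stage 1: one contrast image per subject.
con_images = np.einsum('r,srv->sv', contrast, betas)

# Stage 2: voxelwise one-sample t across subjects (df = n_subj - 1).
t_img = con_images.mean(0) / (con_images.std(0, ddof=1) / np.sqrt(n_subj))
p_img = stats.t.sf(t_img, df=n_subj - 1)   # uncorrected one-sided p-values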

Methods: Simulations
One-sample t images
– Size: 32×32×32 (32,768 voxels)
– Smoothness: 0, 1.5, 3, 6, 12 voxels FWHM
– Degrees of freedom: 9, 19, 29
  (10, 20 or 30 GRFs simulated for each t realization)
– Realizations: 3000
  Record the FWER as the proportion of the 3000 realizations in which any voxel is rejected

– Random field threshold
  Corrected α = 0.05
  Smoothness estimated (not assumed known)
– Permutation
  100 relabelings
  Threshold: 95th percentile of the permutation distribution of the maximum statistic
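
A scaled-down sketch of how an empirical FWER can be read off such null simulations: generate smoothed null Gaussian images, form a one-sample t image, threshold it, and count the realizations in which any voxel exceeds the threshold. The sizes below are reduced for speed, and a Bonferroni threshold stands in for the RFT and permutation thresholds used in the poster.

# Scaled-down sketch of estimating empirical FWER from null simulations.
# Image size, realization count, and the Bonferroni threshold are
# illustrative stand-ins for the poster's 32x32x32 / 3000-realization setup.
import numpy as np
from scipy import stats
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
shape = (16, 16, 16)                                   # reduced search volume
df = 9                                                 # t image from df + 1 null images
sigma = 3.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))       # 3 voxel FWHM -> Gaussian sigma
n_real = 200                                           # reduced from 3000 realizations
t_thresh = stats.t.isf(0.05 / np.prod(shape), df)      # Bonferroni, for illustration

def null_t_image():
    # Smooth (df + 1) null Gaussian images, then form a one-sample t image.
    imgs = np.stack([gaussian_filter(rng.standard_normal(shape), sigma)
                     for _ in range(df + 1)])
    return imgs.mean(0) / (imgs.std(0, ddof=1) / np.sqrt(df + 1))

# FWER estimate: proportion of realizations with any voxel above threshold.
rejections = sum(null_t_image().max() > t_thresh for _ in range(n_real))
print(f"empirical FWER: {rejections / n_real:.3f}")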

Results: Real Data
– Permutation threshold always lower
– Bonferroni usually lower than RFT!

Note on dataset selection: The only datasets analyzed but not included in this poster are other contrasts (usually non-orthogonal) from the above studies. All omitted datasets showed the pattern reported here: RFT thresholds were higher, and hence less sensitive, than permutation thresholds.

Results: Simulations

Familywise Error Thresholds
– RF & Perm adapt to smoothness
– Perm & Truth close
– Bonferroni close to truth for low smoothness
(Figure panels: 9 df, 19 df)

Familywise Error Thresholds
– RF only good for high DF, high smoothness
– Perm exact
– Smoothness estimation not sole problem
(Figure panels: 9 df, 19 df)

Minimum P-value CDFs
– Shows all thresholds simultaneously
– Lowering α doesn't help
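
The minimum p-value CDF works because FWER(α) = P(min p ≤ α): one curve reports the familywise error rate at every candidate threshold at once. Below is a minimal, self-contained sketch using independent uniform null p-values and a Bonferroni correction purely for illustration; an exact method's curve tracks the diagonal, while a conservative one (like RFT at low DF) stays below it at every α.

# Minimal sketch: the empirical CDF of the corrected minimum p-value
# gives the achieved FWER at each nominal alpha. Independent uniform
# p-values and Bonferroni correction are used purely for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_real, n_tests = 2000, 500                          # hypothetical simulation sizes
min_p = rng.uniform(size=(n_real, n_tests)).min(axis=1)
min_p_corrected = np.minimum(1.0, n_tests * min_p)   # Bonferroni-corrected minimum p

for alpha in (0.01, 0.05, 0.10):
    # Empirical CDF of the corrected minimum p-value = achieved FWER at alpha.
    print(f"alpha = {alpha:.2f}: achieved FWER = {np.mean(min_p_corrected <= alpha):.3f}")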

Conclusions
RFT is conservative on real and simulated small group data
– Bonferroni almost always more sensitive for the real data considered
Smoothness much greater than 3 voxel FWHM is needed
– Extreme smoothness required for RFT results to be close to exact (e.g. 12 voxel FWHM at 9 DF)

For simple one- and two-sample t test group analyses, always compare parametric RFT thresholds to nonparametric thresholds.
– Easy to do with SnPM, a nonparametric toolbox for SPM

References
[1] Worsley et al., HBM 4:58-73, 1996.
[2] Holmes & Friston, NI 7(4):S754, 1998.
[3] Nichols & Holmes, HBM 14:1-25.
[4]
[5] Wager et al., in preparation.
[6] Henson et al., CerebCortex 12.
[7] Marshuetz et al., JoCN 12/S2.
[8] Watson et al., CerebCortex 3:79-94, 1993.
[9] Phan et al., Biological Psychiatry 53, 2003.
[10] Wager et al., in preparation.
[11] Poster and detailed paper available at