Presentation transcript:

Beyond Blobs The ‘imager’s fallacy’ and how to avoid it

Neuroscientists – Fishers of Blobs?

A blob = a “result” that you can publish. You should be able to interpret the meaning of each blob (in terms of localized function). Blobs are the first thing you should look for, and the final goal of your analysis.

What is a “Blob”? An area activated by a task?

What is a “Blob”? An area activated by a task? An area where task-related activity fits a model?

What is a “Blob”? An area activated by a task? An area where task-related activity fits a model? An area where task-related activity fits a model well enough to pass an arbitrary threshold.

Blobs Are Important Blobs (thresholding) serve a very important purpose. Whole-brain corrected blobs (FDR- or FWE-corrected) are evidence that ‘something is going on’. Adopting a looser standard (e.g. an uncorrected voxelwise p threshold) was the sin of the Dead Salmon (Bennett et al.)
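
A minimal sketch of what the correction buys you, using simulated voxelwise p-values (Bonferroni as a simple stand-in for FWE control, Benjamini-Hochberg for FDR; SPM's actual FWE correction uses random field theory, which this does not attempt, and all numbers are purely illustrative):

```python
# Illustrative only: compare an uncorrected threshold with Bonferroni (FWE)
# and Benjamini-Hochberg (FDR) corrections on simulated voxelwise p-values.
# (SPM's real FWE correction uses random field theory, not Bonferroni.)
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
p_null = rng.uniform(size=49_500)             # null voxels
p_active = rng.uniform(high=1e-4, size=500)   # voxels with a real effect
p = np.concatenate([p_null, p_active])

uncorrected = p < 0.001
fwe = multipletests(p, alpha=0.05, method="bonferroni")[0]
fdr = multipletests(p, alpha=0.05, method="fdr_bh")[0]

print("uncorrected p < 0.001:   ", int(uncorrected.sum()), "voxels survive")
print("Bonferroni (FWE):        ", int(fwe.sum()))
print("Benjamini-Hochberg (FDR):", int(fdr.sum()))
```

The uncorrected map lets dozens of pure-noise voxels through; the corrected maps keep the false positives under control.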

Blobs Are Not Representative Significantly activated voxels are not representative of activated areas. This is the problem of voodoo correlations, a.k.a. double dipping, circularity, or non-independence. Vul et al. showed the error of treating significantly activated blobs (or, even worse, peaks within blobs) as representative of anything (they’re not), especially when power (sample size) is low.
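
A toy simulation of the non-independence problem (my own construction, not Vul et al.'s analysis): on pure-noise data, voxels selected for passing a t-threshold show a large apparent effect in the data used to select them, and essentially none in an independent dataset.

```python
# Pure-noise data: select "blob" voxels by their t-value, then summarise the
# effect in those same voxels (circular) versus in independent data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_voxels = 16, 20_000
discovery = rng.normal(size=(n_subj, n_voxels))    # no true effect anywhere
replication = rng.normal(size=(n_subj, n_voxels))  # independent, also pure noise

t = stats.ttest_1samp(discovery, 0, axis=0).statistic
selected = t > 3.0                                  # the "blob"

print("selected voxels:", int(selected.sum()))
print("mean effect in blob, same data (circular): %+.2f" % discovery[:, selected].mean())
print("mean effect in blob, independent data:     %+.2f" % replication[:, selected].mean())
```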

Imagine A Study... We visit various towns and cities around the Netherlands. We sample 100 people per site (50 men and 50 women). Each person completes a questionnaire: the “BLoB” (Belgian Liking of Beer) scale. We want to know: Do Dutch men like drinking beer more or less than women? And if so, where in the Netherlands is this difference seen?
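
A hypothetical simulation of this survey (all numbers invented, and with no true sex difference built in at any site) makes the point of the next slide: a p < 0.05 "blob map" of towns is a poor first summary compared with the full set of per-site estimates.

```python
# Hypothetical beer survey: 30 sites, 50 men and 50 women per site, and no
# true difference at any site. A few sites still cross p < 0.05 by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sites = 30
men = rng.normal(loc=5.0, scale=1.5, size=(n_sites, 50))
women = rng.normal(loc=5.0, scale=1.5, size=(n_sites, 50))

t, p = stats.ttest_ind(men, women, axis=1)
for i in range(n_sites):
    marker = "<-- would appear on the 'blob map'" if p[i] < 0.05 else ""
    print(f"site {i:2d}: mean difference {men[i].mean() - women[i].mean():+.2f}, "
          f"p = {p[i]:.3f} {marker}")
```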

Would this be the first way you’d inspect those results?

The Problem With Blobs They conceal the raw data – you only see (effectively) p-values.

The Problem With Blobs They impose the arbitrary p < 0.05 cutoff and censor all nonsignificant points (even if they are p = 0.051).

The Problem With Blobs They conceal the raw data – you only see (effectively) p-values. They impose the arbitrary p < 0.05 cutoff and censor all nonsignificant points (even if they are p = 0.051). We know that blobs are significantly different to some null hypothesis, but we don't know whether each blob is significantly more significant than any non-blob point.
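
That last point deserves a concrete illustration (a toy example, not from the slides): with modest sample sizes, one voxel can land just under p = 0.05 and another just over it, while a direct paired test of the difference between the two voxels is nowhere near significant.

```python
# Two voxels with similar, noisy effects in the same subjects: one may pass
# p < 0.05 and the other not, yet the direct paired test of A - B is
# typically nowhere near significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subj = 20
voxel_a = rng.normal(loc=0.55, scale=1.0, size=n_subj)
voxel_b = rng.normal(loc=0.40, scale=1.0, size=n_subj)

print("voxel A vs baseline: p = %.3f" % stats.ttest_1samp(voxel_a, 0).pvalue)
print("voxel B vs baseline: p = %.3f" % stats.ttest_1samp(voxel_b, 0).pvalue)
print("A vs B (paired):     p = %.3f" % stats.ttest_rel(voxel_a, voxel_b).pvalue)
```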

What About The Rest of the Brain?

With Enough Subjects, the Whole Brain Is a Blob Thyreau et al. (2012), NeuroImage: an N = 1326 fMRI study of a face-processing task (emotional faces vs. grey circles) in the multicenter IMAGEN consortium.

So What? t-scores and p-values depend on sample size, so by thresholding on t-scores or p-values we are applying a threshold based on our sample size. Sample size in fMRI studies is primarily limited by practical concerns ($$$). Should the practical limitation of sample size determine which areas we call "activated"?
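
The dependence on N is easy to make concrete. For a one-sample t-test, t ≈ d·√N for a standardized effect size d, so a fixed statistical threshold corresponds to a different effect size at every sample size (a back-of-envelope sketch, ignoring fMRI-specific details such as spatial correlation and mixed-effects modelling):

```python
# A fixed, tiny standardized effect (d = 0.1) crosses conventional
# thresholds once N is large enough, because t ~= d * sqrt(N).
import numpy as np
from scipy import stats

d = 0.1
for n in (20, 100, 1326):                 # 1326 = N in Thyreau et al. (2012)
    t = d * np.sqrt(n)
    p = 2 * stats.t.sf(t, df=n - 1)       # two-sided p-value
    print(f"N = {n:4d}: t = {t:5.2f}, two-sided p = {p:.5f}")
```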

The Imager’s Fallacy Richard Henson (2012), Q J Exp Psychology: What can functional neuroimaging tell the experimental psychologist? “It is not sufficient to report two statistical maps, one for each condition versus a common baseline, and observe that they look different. This is a common mistake (“imager's fallacy”)… one should not eyeball differences in statistics, but explicitly test statistics of differences.”
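
In code, that advice amounts to testing the A − B contrast directly rather than intersecting two thresholded maps. A schematic sketch with simulated per-subject contrast estimates (not any particular package's pipeline):

```python
# "Eyeballing" two thresholded maps vs. explicitly testing the difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subj, n_voxels = 18, 10_000
beta_a = rng.normal(loc=0.3, scale=1.0, size=(n_subj, n_voxels))  # condition A vs baseline
beta_b = rng.normal(loc=0.2, scale=1.0, size=(n_subj, n_voxels))  # condition B vs baseline

sig_a = stats.ttest_1samp(beta_a, 0, axis=0).pvalue < 0.001
sig_b = stats.ttest_1samp(beta_b, 0, axis=0).pvalue < 0.001
eyeballed = sig_a & ~sig_b                          # "in the A map but not the B map"

sig_diff = stats.ttest_rel(beta_a, beta_b, axis=0).pvalue < 0.001

print("voxels 'different' by comparing the two maps:", int(eyeballed.sum()))
print("of those, significant in the direct A-B test:", int((eyeballed & sig_diff).sum()))
```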

New Visualizations Can Help Blobs should not be the Alpha and the Omega of neuroimaging analysis. They should be one part of a comprehensive approach. Look at the unthresholded statistical parametric maps alongside the thresholded ones; e.g. in FSL you can find these in the stats/ directory of FEAT output for fMRI.
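
One quick way to do this outside FSL is with nilearn (a hedged sketch: the .feat path and contrast number are placeholders, and first-level z-maps may need to be in standard space before they line up with the default template background):

```python
# View the unthresholded z-map from a FEAT analysis next to a thresholded
# version. Path and contrast number are placeholders; adjust to your data.
from nilearn import plotting

zmap = "my_analysis.feat/stats/zstat1.nii.gz"   # hypothetical FEAT output path

plotting.plot_stat_map(zmap, title="unthresholded z-map")
plotting.plot_stat_map(zmap, threshold=3.1, title="thresholded at |z| > 3.1")
plotting.show()
```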

Post-Blob Visualization? Or not quite? Allen, Erhardt & Calhoun (2012), Neuron: “Data visualization in the neurosciences: overcoming the curse of dimensionality”.

Visualizing “In Limbo” Areas De Hollander et al. (2014), PLoS ONE: “An Antidote to the Imager's Fallacy, or How to Identify Brain Areas That Are in Limbo”.

Visualizing “In Limbo” Areas
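
The gist of the "in limbo" idea can be sketched as follows. This is a toy rendering of the general concept, not de Hollander et al.'s actual procedure or thresholds: a voxel is "in limbo" if it does not pass the activation threshold itself, but its effect is also not significantly weaker than the weakest effect inside the blob.

```python
# Toy "in limbo" classification on simulated data: blob voxels, voxels that
# are clearly weaker than the blob, and voxels that are neither ("in limbo").
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_subj, n_voxels = 20, 3_000
true_effect = np.linspace(0.0, 1.0, n_voxels)       # smoothly varying effect
betas = rng.normal(loc=true_effect, scale=1.0, size=(n_subj, n_voxels))

p_act = stats.ttest_1samp(betas, 0, axis=0).pvalue
blob = p_act < 0.001

# Reference: the blob voxel with the smallest mean effect.
blob_means = betas[:, blob].mean(axis=0)
ref = betas[:, blob][:, blob_means.argmin()]

in_limbo = np.zeros(n_voxels, dtype=bool)
for v in np.flatnonzero(~blob):
    t, p = stats.ttest_rel(betas[:, v], ref)        # paired across subjects
    in_limbo[v] = not (t < 0 and p < 0.05)          # not significantly weaker

print("blob voxels:          ", int(blob.sum()))
print("in-limbo voxels:      ", int(in_limbo.sum()))
print("clearly weaker voxels:", int((~blob & ~in_limbo).sum()))
```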

Thank you!