Methods for Dummies Random Field Theory


1 Methods for Dummies Random Field Theory
Dominic Oliver & Stephanie Alley

2 Structure
What is the multiple comparisons problem?
Why can't we use the classical Bonferroni approach?
What is Random Field Theory?
How do we implement it in SPM?

3 Processing so far

4 Hypothesis testing and the type I error
We test a t-statistic against the null hypothesis (H0): the probability of the result arising by chance, p < 0.05.
The chosen threshold (u) determines the false positive rate (α).
Because the t-statistic is drawn from a continuous distribution, you always run some risk of a false positive, or type I error.

5 Significance of type I errors in fMRI
Since α remains constant, the more tests you do, the more type I errors you make.
Given the number of tests performed in fMRI, the number of type I errors at this threshold is impractically high:
~60,000 voxels per brain → 60,000 t-tests
α = 0.05
60,000 × 0.05 = 3,000 type I errors

6 Why is this a problem?
3,000 is a large number, but it's still only 5% of our total voxels.
Yet uncorrected thresholds produce "activations" even where none can exist, e.g. in a dead fish:
Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon, t(131) > 3.15, p(uncorrected) < 0.001 (Bennett, Miller & Wolford, 2009)

7 So what can we do about it?
Adjust the threshold (u), taking into account the number of tests being performed.
Any values above the new threshold are highly unlikely to be false positives.
(Figure: raising the threshold step by step, from t > 0.5 up to t > 6.5, progressively eliminates the false positives.)

8 The Bonferroni Correction
Why can’t we use it?

9 Bonferroni correction example
α (single-voxel probability threshold) = P_FWE / n, where n = number of tests (voxels)
e.g. 100,000 t values, each with 40 d.f.
When P_FWE = 0.05: α = 0.05 / 100,000 = 0.0000005, giving a threshold of t = 5.77
A voxel where t > 5.77 has only a 5% chance of appearing anywhere in a volume of 100,000 t statistics drawn from the null distribution. This is the family-wise error rate.
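The slide's threshold can be reproduced with SciPy's t-distribution (a sketch; `isf` is the inverse survival function, i.e. the one-tailed critical value):

```python
from scipy.stats import t

n_tests = 100_000
p_fwe = 0.05
df = 40

# Bonferroni: per-test alpha = family-wise alpha / number of tests
alpha_bonf = p_fwe / n_tests            # 0.0000005
t_threshold = t.isf(alpha_bonf, df)     # one-tailed threshold, close to 5.77
print(round(t_threshold, 2))
```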

10 So what's the issue?
The Bonferroni correction is too conservative for fMRI data.
Bonferroni relies on independent values, but fMRI data are not independent, due to:
How the scanner collects data and reconstructs the images
Anatomy and physiology
Preprocessing (e.g. normalisation, smoothing)

11 The problem
What Bonferroni needs: a series of independent statistical tests
What we have: a spatially dependent, continuous statistical image

12 Spatial dependence
This slice is made of 100 × 100 voxels, filled with random values from a normal distribution: 10,000 independent values.
Here, Bonferroni is accurate.

13 Spatial dependence
Add spatial dependence by averaging across 10 × 10 squares.
There are still 10,000 numbers in the image, but only 100 of them are independent.
The Bonferroni correction is now 100 times too conservative.
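The slide's toy example takes a few lines of NumPy (a sketch): 10,000 values, but after block-averaging only 100 distinct, independent numbers remain.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((100, 100))        # 10,000 independent values

# Average over non-overlapping 10x10 blocks to induce spatial dependence
block_means = img.reshape(10, 10, 10, 10).mean(axis=(1, 3))   # 10x10 block means
smooth = np.kron(block_means, np.ones((10, 10)))              # back to 100x100

# Still 10,000 numbers in the image, but only 100 independent values
n_independent = np.unique(smooth).size
print(smooth.shape, n_independent)
```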

14 Smoothing
Convolve the fMRI signal with a Gaussian kernel. This:
Improves the signal-to-noise ratio (SNR), increasing sensitivity
Accommodates anatomical and functional variation between subjects
Improves statistical validity by making the distribution of errors more normal

15 Spatial dependence is increased by smoothing
When the map is smoothed, each value in the image is replaced by a weighted average of itself and its neighbours.
The image is blurred, and the number of independent values is reduced.
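The effect can be demonstrated with `scipy.ndimage.gaussian_filter` (a sketch; sigma = 3 voxels is an arbitrary choice, corresponding to FWHM ≈ 2.355 × 3 ≈ 7 voxels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
noise = rng.standard_normal((100, 100))

# Replace each value with a Gaussian-weighted average of itself and its neighbours
smoothed = gaussian_filter(noise, sigma=3)

# Neighbouring values are now strongly correlated, and the variance shrinks
neighbour_corr = np.corrcoef(smoothed[:, :-1].ravel(),
                             smoothed[:, 1:].ravel())[0, 1]
print(round(neighbour_corr, 2), smoothed.std() < noise.std())
```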

16 If not Bonferroni, then what?
The idea behind the Bonferroni correction is sound in this context.
We need to adjust our threshold in a similar way, but also take into account the influence of spatial dependence on our data.
What does this? Random Field Theory.

17 Random Field Theory

18 Random field theory provides a means of working with smooth statistical maps
It allows us to determine the height threshold that will give the desired FWE rate.
Random fields have specified topological characteristics, so we can apply topological inference to detect activations in SPMs.

19 Three stages of applying RFT:
1. Estimate smoothness 2. Calculate the number of resels 3. Get an estimate of the Euler Characteristic at different thresholds Resels = resolution elements; depend on smoothness and number of voxels (i.e. size of search region)!

20 1. Estimate smoothness
The image contains both applied and intrinsic smoothness.
In practice, smoothness is estimated from the residuals (error) of the GLM fit in the statistical analysis: Y = Xβ + e, where X = design matrix, β = parameter (scaling) values, e = residual error.
I.i.d. noise = independently and identically distributed noise: all variables have the same probability distribution and are mutually independent.

21 2. Calculate the number of resels
Look at your estimated smoothness: the FWHM (Full Width at Half Maximum).
The smoothed image contains spatial correlation, which is typical of the output from the analysis of functional imaging data.
This is a problem: there is no simple way of calculating the number of independent observations in the smoothed data, so we cannot use the Bonferroni correction. Random field theory addresses this.

22 Express your search volume in resels
A resel (resolution element) is simply a block of values (e.g. voxels) that is the same size as the FWHM: Ri = FWHMx × FWHMy × FWHMz.
Expressing the search volume in resels "restores" the independence of the data.
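Counting resels is then simple arithmetic. A sketch with hypothetical numbers (a 64 × 64 × 30 search volume and an estimated FWHM of 8 voxels in each dimension; neither figure is from the slides):

```python
# Hypothetical search volume and estimated smoothness, both in voxels
search_volume = 64 * 64 * 30                  # 122,880 voxels
fwhm = (8.0, 8.0, 8.0)                        # estimated FWHM per dimension

# One resel is a block the size of the FWHM: Ri = FWHMx * FWHMy * FWHMz
resel_size = fwhm[0] * fwhm[1] * fwhm[2]      # 512 voxels per resel
n_resels = search_volume / resel_size
print(n_resels)   # 240.0
```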

23 3. Get an estimate of the Euler Characteristic
The Euler Characteristic (EC) is a topological property: SPM → threshold → EC
EC = number of blobs (minus number of holes)*
Named for Leonhard Euler, the Swiss mathematician of "Seven Bridges of Königsberg" fame.
*The holes are not relevant here: we are only interested in the EC at high thresholds, where it approximates the probability of a FWE.

24 Steps 1 and 2 yield a "fitted" random field (of the appropriate smoothness)
Now: how likely are we to observe above-threshold ("significantly different") local maxima (or clusters, or sets) under H0? How do we find out? The EC!

25 Euler Characteristic and FWE
Zt = 2.5 EC = 3 Zt = 2.75 The probability of a family wise error is approximately equivalent to the expected Euler Characteristic Number of “above threshold blobs” If we increase the Z score threshold to 2.75, we find that the two central blobs disappear – because the Z scores were less than 2.75. EC = 1

26 How to get E[EC] at different thresholds
For a 2D Gaussian random field, the expected EC at a Z (or T)-score threshold Zt is E[EC] = R (4 ln 2) (2π)^(−3/2) Zt e^(−Zt²/2), where R = number of resels.
From this equation, it follows that the threshold depends only on the number of resels in our image.
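Slide 27's predicted values (E[EC] ≈ 1.9 at Z = 2.5 and ≈ 1.1 at Z = 2.75, for 100 resels) match the 2D expression given in Brett, Penny & Kiebel (cited in the Sources). A sketch that reproduces them and then searches for the Z threshold at which E[EC] = 0.05:

```python
import numpy as np
from scipy.optimize import brentq

def expected_ec_2d(resels, z):
    """E[EC] for a 2D Gaussian random field thresholded at Z = z
    (Brett, Penny & Kiebel): R * 4ln2 * (2*pi)**(-3/2) * z * exp(-z**2/2)."""
    return resels * 4 * np.log(2) * (2 * np.pi) ** -1.5 * z * np.exp(-z**2 / 2)

# Reproduce slide 27: for 100 resels, E[EC] ~ 1.9 at Z = 2.5 and ~ 1.1 at Z = 2.75
ec_25 = expected_ec_2d(100, 2.5)
ec_275 = expected_ec_2d(100, 2.75)

# FWE-corrected threshold: the Z at which the expected EC equals 0.05
z_fwe = brentq(lambda z: expected_ec_2d(100, z) - 0.05, 2.0, 6.0)
print(round(ec_25, 1), round(ec_275, 1), round(z_fwe, 1))
```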

27 Given the number of resels and E[EC], we can find the appropriate Z/t-threshold
E[EC] for an image of 100 resels, plotted for Z-score thresholds from 0 to 5.
The graph does a reasonable job of predicting the EC in our image: at a Z threshold of 2.5 it predicted an EC of 1.9, when we observed a value of 3; at Z = 2.75 it predicted an EC of 1.1, for an observed EC of 1.

28 How to get E[EC] at different thresholds
From this equation, it looks like the threshold depends only on the number of resels in our image…

29 Shape of search region matters, too!
#resels → E[EC] is not strictly accurate: it is a close approximation only if the ROI is large compared to the size of a resel.
Strictly, the number + shape + size of the resels determine E[EC]. This matters for small or oddly shaped ROIs.
Example (resel width = 10 pixels): a central 30 × 30 pixel box covers at most 16 "resels", while a 2.5-pixel frame around the edge has the same volume but covers up to 32 "resels".
The multiple comparison correction for the frame must therefore be more stringent.

30 Different types of topological inference
Topological inference can be about:
Peak height (voxel level)
Regional extent (cluster level)
Number of clusters (set level)
Voxel-level inference based on height thresholds asks: is the activation at a given voxel significantly non-zero?
Cluster-level and set-level inference belong to the same framework; they require height and spatial extent thresholds to be specified by the user, from which corrected p-values can then be derived for each level of inference.

31 Assumptions made by RFT
The underlying fields are continuous, with a twice-differentiable autocorrelation function.
The error fields are a reasonable lattice approximation to an underlying random field with a multivariate Gaussian distribution.
The autocorrelation function need not be Gaussian.
If the data have been sufficiently smoothed and the General Linear Model is correctly specified (so that the errors are indeed Gaussian), then the RFT assumptions will be met.
Check: the FWHM must be at least 3 voxels in each dimension.
In general: smooth enough + correct GLM (Gaussian errors) → RFT applies.

32 Assumptions are not met if...
The data are not sufficiently smooth, e.g. with a small number of subjects or high-resolution data.
For example, in a random effects analysis (chp. 12) with a small number of subjects, the resulting error fields will not be very smooth and so may violate the "reasonable lattice approximation" assumption.
Alternative ways of controlling for errors: non-parametric methods to control FWE (permutation tests), or the FDR (false discovery rate).

33 Implementation in SPM

34 Implementation in SPM

35 Activations significant at cluster level but not at voxel level

36 Sources
Human Brain Function, Chapter 14: An Introduction to Random Field Theory (Brett, Penny & Kiebel)
Former MfD presentations
Expert: Guillaume Flandin

