1
fMRI Statistics: With a focus on task-based analysis and SPM12
2
Models
Previously, we looked at GLMs for single subjects (first level models)
We can also use a GLM to estimate group effects (second level models)
Fixed effects analysis: include all data in a single model (a less used option for group analysis)
Random effects analysis*: use outputs of first level models as inputs to second level (more efficient, and generalizes; see the sketch below)
*Strictly speaking, “mixed effects”: both random & fixed effects together
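As a rough illustration of this two-stage (summary-statistics) approach, here is a minimal numpy/scipy sketch. The per-subject contrast values stand in for first-level con images at a single voxel; the numbers are simulated, not SPM output, and the variable names are illustrative only.

```python
import numpy as np
from scipy import stats

# Stage 1 (assumed done elsewhere): one contrast value per subject at a voxel,
# i.e. what each subject's first-level con image would supply. Simulated here.
rng = np.random.default_rng(0)
n_subjects = 12
con_values = rng.normal(loc=0.8, scale=1.0, size=n_subjects)

# Stage 2: a one-sample t-test across subjects asks whether the group-mean
# effect differs from zero. Between-subject variability enters the error term,
# which is what makes this a random-effects (population-level) inference.
t, p = stats.ttest_1samp(con_values, popmean=0.0)
print(f"group t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```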
3
Fixed vs. Random Effects
In a fixed effects model: which brain areas are activated on average across subjects?
In a random effects model: which brain areas are activated in the same way across subjects?
If you only have one scan on one condition per subject, fixed is all you can do (e.g., FDG-PET)
4
Two Stage Modeling Limitations
In theory, every first level design matrix should be identical (in practice, somewhat robust to variation: Penny, 2004)
Assumes underlying error is identical across subjects (all details of the first level analysis are condensed to voxelwise betas/contrasts)
Limited (in SPM at least) to voxels available for all subjects
5
Covariates & Estimability
Covariates can be useful, especially in a second level model (e.g., age)
If any variable (column of X) is a linear combination of others, some betas cannot be estimated uniquely
As a result, some contrasts can be rejected as “inestimable” by SPM* (see the sketch below)
Color coding in the design matrix display: grey (not uniquely specified) vs. white
*strictly speaking they are estimable, but without a unique solution
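A minimal numpy sketch of why linear dependence breaks unique estimation, using a hypothetical two-group design in which the constant column equals the sum of the two group indicators; the data and variable names are illustrative, not SPM’s.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
group_a = np.r_[np.ones(10), np.zeros(10)]   # indicator for group A
group_b = 1.0 - group_a                      # indicator for group B
const   = np.ones(n)                         # constant = group_a + group_b

X = np.column_stack([group_a, group_b, const])
print(np.linalg.matrix_rank(X))              # 2, not 3: X is rank deficient

y = rng.normal(size=n) + 2.0 * group_a
beta = np.linalg.pinv(X) @ y                 # one of infinitely many solutions

# Individual betas are not unique, but the contrast [1, -1, 0] (A minus B)
# lies in the row space of X, so c @ beta is the same for every solution:
# that contrast is estimable even though the betas themselves are not unique.
c = np.array([1.0, -1.0, 0.0])
print(c @ beta)
```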
6
Contrast Examples
Contrasts = linear combinations of regressors, to be compared to zero
Could compare one to zero: a > 0 (contrast [+1])
Or the difference of two: a – b > 0, aka “a > b” (contrast [+1 -1])
7
The Constant Term
How the GLM constant term is included also affects what contrasts are estimable
Usually better to have it “implicit”
Source: Rik Henson, “GLM & RFT”
8
Regressor Scaling/Centering
Regressors in the design matrix X need to be scaled, or big ones will dominate small ones
SPM automatically scales covariates as they are added
Centering affects model error and interpretation (see the sketch below):
Overall mean (default)
No centering
Factor-based (for covariate interpretation that differs across factor levels)
Also, can specify an interaction with a factor
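A small illustrative sketch (simulated data, not SPM’s internal scaling) of what centering does: mean-centering a covariate such as age leaves its slope unchanged but changes what the constant term estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
age = rng.uniform(20, 80, size=n)
y = 5.0 + 0.1 * age + rng.normal(scale=0.5, size=n)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Raw covariate: the constant is the predicted value at age 0 (an extrapolation).
b_raw = fit(np.column_stack([age, np.ones(n)]), y)

# Mean-centred covariate (the "overall mean" default): the slope is unchanged,
# but the constant now estimates the response at the mean age of the sample.
b_ctr = fit(np.column_stack([age - age.mean(), np.ones(n)]), y)

print(b_raw)   # [slope, intercept at age 0]
print(b_ctr)   # [same slope, fitted value at mean age]
```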
9
Regressor Orthogonalization
“Orthogonal” regressors are independent: the inner product of the vectors = 0
Not (quite) the same thing as “uncorrelated”: the inner product of the de-meaned vectors = 0 (see the sketch below)
Rodgers et al. (1984). Linearly independent, orthogonal and uncorrelated variables. The American Statistician, 38(2): 133–134
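A tiny numpy demonstration of the distinction: the two example vectors below are orthogonal (zero inner product) yet, once de-meaned, strongly correlated.

```python
import numpy as np

x = np.array([2.0, 0.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 1.0, 1.0])

print(x @ y)                        # 0.0  -> orthogonal (inner product is zero)

x_dm, y_dm = x - x.mean(), y - y.mean()
print(x_dm @ y_dm)                  # -1.5 -> not orthogonal once de-meaned
print(np.corrcoef(x, y)[0, 1])      # -1.0 -> perfectly (anti-)correlated

# The two notions coincide only when the vectors are already mean-zero, which
# is why de-meaned regressors make "orthogonal" and "uncorrelated" behave alike.
```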
10
Regressor Orthogonalization
If regressors are not orthogonal*, attribution of effects becomes difficult
Effect estimates (betas) are artificially reduced
Variance not assignable to one source is “lost”
*or “uncorrelated”—with default de-meaning, these are functionally identical in SPM
11
Parcellation of Variance
Orthogonal regressors account for different parts of the variance.
12
Parcellation of Variance
Non-orthogonal regressors account for overlapping parts of the variance, and each ends up with only its unique portion (the overlapping part is “lost”).
13
Parcellation of Variance
If the overlap is particularly severe, the effects cannot be estimated reliably (“inestimable”). Avoid correlated regressors!
14
Orthogonalization
How can we “orthogonalize” non-orthogonal regressors?
Avoid the issue in design (for experimental conditions)
Principal components analysis/factor analysis? (difficult to interpret)
Serial orthogonalization (used by SPM)
15
Serial Orthogonalization
When we have only one regressor, things are simple… Y = β1X1
Example from: Evina Chu, “Basis Functions,” SPM MfD course (12/2007)
16
Serial Orthogonalization
When we have only one regressor, things are simple… Y = β1X1, with β1 = 1.5
This is the best estimate of Y using only X1 (in the figure, X1 is the vector and β1 its length/scaling)
Example from: Evina Chu, “Basis Functions,” SPM MfD course (12/2007)
17
Serial Orthogonalization
Now consider adding a second regressor, one not orthogonal to the first… Y = β1X1 + β2X2, with β1 = 1 and β2 = 1
18
Serial Orthogonalization
Now consider adding a second regressor, one not orthogonal to the first… Y = β1X1 + β2X2, with β1 = 1 and β2 = 1
We can now estimate Y perfectly using both Xs; however, note that β1 has dropped from 1.5 to 1. X2 is explaining variance X1 could also explain.
19
Serial Orthogonalization
Let’s orthogonalize X2 with respect to X1. This will create a new variable “X2*”. Y = β1X1 + β2*X2*, with β1 = 1.5 and β2* = 1
20
Serial Orthogonalization
Let’s orthogonalize X2 with respect to X1. This will create a new variable “X2*”. Y = β1X1 + β2*X2*, with β1 = 1.5 and β2* = 1
β1 is back to 1.5
Orthogonalization (via the Gram-Schmidt process) produces a new variable, based on the old one and the existing variables
Serial process, so order matters! (illustrated in the sketch below)
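A minimal numpy sketch of the same idea with simulated data (so the exact β values differ from the figures): fit X1 alone, fit X1 together with a correlated X2, then fit X1 with a Gram-Schmidt-orthogonalized X2*, which returns β1 to its X1-only value.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)       # deliberately correlated with x1
y = 1.5 * x1 + 0.5 * x2 + rng.normal(scale=0.1, size=n)

def betas(*cols):
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_alone  = betas(x1)                           # beta1 using x1 only
b_joint  = betas(x1, x2)                       # beta1 shrinks: x2 claims shared variance

# Gram-Schmidt step: remove from x2 everything that x1 can already explain.
x2_orth  = x2 - (x1 @ x2) / (x1 @ x1) * x1
b_serial = betas(x1, x2_orth)                  # beta1 equals the x1-only value again

print(b_alone[0], b_joint[0], b_serial[0])
```

Note the serial nature: orthogonalizing X1 against X2 instead would protect β2 and change β1, which is why the ordering matters.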
21
Serial Orthogonalization in SPM
Design matrix is orthogonalized left-to-right, so order matters
Tip: put the “most important” covariates first (the ones whose meaning you don’t want to change)
If all are “nuisance regressors” that won’t be interpreted, order doesn’t matter
Can always plot the final (orthogonalized) variables
22
Significance Testing
The t-test is the basic unit of SPM
Measures “signal” (here, the contrast) relative to “noise” (variability), as sketched below
A t-value is generated for each contrast at each voxel
Recall that the t-test is implemented in a GLM framework (allowing covariates)
SPM outputs spmT (“t-map”) files as well as con and beta files
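A bare-bones sketch of the contrast t-statistic as GLM machinery computes it, on simulated data; the design matrix and contrast here are arbitrary examples, not any particular SPM model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100
X = np.column_stack([rng.normal(size=n),       # condition A regressor
                     rng.normal(size=n),       # condition B regressor
                     np.ones(n)])              # constant term
y = X @ np.array([1.0, 0.4, 2.0]) + rng.normal(scale=1.0, size=n)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - np.linalg.matrix_rank(X)
sigma2 = resid @ resid / dof                   # residual variance ("noise")

c = np.array([1.0, -1.0, 0.0])                 # contrast: A > B
se = np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)
t = (c @ beta) / se                            # "signal" relative to "noise"
p = stats.t.sf(t, dof)                         # one-sided p, as for a t-map
print(t, p)
```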
23
F-tests
SPM can also do F-tests, producing “F-maps”
In this context, can be thought of as a generalization of t-tests
Tests multiple conditions, but without the directionality of t-tests
Asks “is any combination of these variables having a significant effect?” (but not which ones, or in which direction); a minimal example follows
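A minimal sketch of an F-test on simulated data, written as an extra-sum-of-squares comparison of a full model against a reduced one (one of several equivalent formulations); the regressors are arbitrary examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 100
a = rng.normal(size=n)
b = rng.normal(size=n)
X_full    = np.column_stack([a, b, np.ones(n)])
X_reduced = np.ones((n, 1))                     # drop both effects of interest
y = 0.6 * a - 0.4 * b + rng.normal(size=n)

def rss(X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    return r @ r

df_num = X_full.shape[1] - X_reduced.shape[1]   # 2 regressors tested jointly
df_den = n - np.linalg.matrix_rank(X_full)

# Extra-sum-of-squares F-test: does adding a and b explain anything at all?
F = ((rss(X_reduced) - rss(X_full)) / df_num) / (rss(X_full) / df_den)
p = stats.f.sf(F, df_num, df_den)
print(F, p)   # a significant F says "some combination matters", not which or in what direction
```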
24
T/F-test Comparison
Figure: comparison of the t-contrast [1 -1] with the F-contrasts [1 -1] and [1 0 0 1] for two conditions A and B
Source: Rik Henson, “SPM GLM” talk
25
The Multiple Comparisons Problem
Making “independent” assessments at each voxel leads to many, many statistical tests
If we assume a p < 0.05 threshold and complete independence, 5% of voxels tested will be false positives!
~100,000 voxels → ~5,000 false positives…
26
Multiple Comparisons Correction
We can “correct” by setting a more stringent threshold
This is called setting a “family-wise” threshold, and leads to a “family-wise error rate”
Bonferroni correction: divide the p threshold by the number of tests
With ~100,000 voxels, p < 0.05 becomes p < 0.05/100,000 as the family-wise threshold (spelled out below)
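The arithmetic for the ~100,000-voxel example, written out as a tiny snippet:

```python
n_voxels = 100_000
alpha = 0.05

# Uncorrected: with independent null tests at p < 0.05, about 5% pass by chance.
expected_false_positives = alpha * n_voxels
print(expected_false_positives)          # 5000.0

# Bonferroni: divide the per-voxel threshold by the number of tests so the
# family-wise error rate (any false positive at all) stays at about 0.05.
bonferroni_threshold = alpha / n_voxels
print(bonferroni_threshold)              # 5e-07
```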
27
Multiple Comparisons Correction
Bonferroni assumes independent tests
More generally, can use a less strict FWE correction if that doesn’t hold
SPM uses Random Field Theory (RFT): estimates smoothness across the brain
Smoothness implies loss of independence between neighbors, and reduces the need for correction
Less smoothness → better spatial specificity, but in the extreme RFT can become even more conservative than Bonferroni!
28
Multiple Comparisons Correction
Another option is False Discovery Rate (FDR) correction
Instead of controlling the chance of any test being a false positive (p), control the expected fraction of positives that are false (q)
Often easier to limit to, say, <5% false discoveries than to a <5% chance of any false positive; thus, less conservative in those cases
May or may not assume independence, depending on the procedure (one common procedure is sketched below)
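A small sketch of one common FDR procedure (Benjamini–Hochberg, which assumes independent or positively dependent tests). This illustrates the idea rather than SPM’s implementation; fdr_threshold is a hypothetical helper and the p-values are simulated.

```python
import numpy as np

def fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the largest p-value
    threshold that keeps the expected fraction of false positives at q."""
    p = np.sort(np.asarray(p_values))
    m = p.size
    below = p <= q * np.arange(1, m + 1) / m
    return p[below].max() if below.any() else None

# Example: 1000 null tests mixed with 50 genuinely active ones.
rng = np.random.default_rng(6)
p_vals = np.concatenate([rng.uniform(size=1000),
                         rng.uniform(high=1e-4, size=50)])
print(fdr_threshold(p_vals, q=0.05))
```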
29
Multiple Comparisons Correction
One other option is to change the search area
This only makes sense if done a priori!
For example, a specific hypothesis might call for investigating only regions X and Y
SPM’s results are always adjusted to the search area
Note: SPM masks in two stages, and only the first (prior to result generation) affects the statistics!
30
Multiple Comparisons Correction
So far we’ve considered voxelwise correction (aka “peak level” statistics)
Can also use the extent of activation (the number of contiguous supra-threshold voxels) to assess significance (aka “cluster level” statistics)
Can predict the expected number/size of clusters (using RFT)
Clusters bigger than a certain size are “significant”
SPM reports cluster-level FWE and FDR values (as with peak level)
31
Excursions, Peaks, Clusters
Excursion set: supra-threshold voxels for some threshold
Peaks: local maxima in the excursion set
Clusters: sets of neighboring voxels in the excursion set (illustrated in the sketch below)
Figure: the excursion set shown in black. Source: Durnez et al. 2014
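A rough sketch of these definitions on a simulated statistic image using scipy.ndimage; the smoothing, threshold, and neighborhood choices are arbitrary illustrations, not SPM defaults.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
t_map = ndimage.gaussian_filter(rng.normal(size=(32, 32, 32)), sigma=2)  # toy statistic image
u = np.percentile(t_map, 99)                 # arbitrary cluster-forming threshold

excursion = t_map > u                        # excursion set: supra-threshold voxels
labels, n_clusters = ndimage.label(excursion)                            # connected clusters
sizes = ndimage.sum(excursion, labels, np.arange(1, n_clusters + 1))     # cluster extents (kE)

# Peaks: local maxima of the statistic within the excursion set.
local_max = (t_map == ndimage.maximum_filter(t_map, size=3)) & excursion
print(n_clusters, sizes.astype(int), int(local_max.sum()))
```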
32
SPM Results Reporting
The results table lists clusters, peaks, and sub-peaks (within each cluster)
Cluster level statistics (determined by kE, the cluster size)
Peak level statistics (determined by the t value)
33
SPM Results Visualization
Results table gives the fundamentals
Accompanied by a “glass brain” (maximum intensity projection) view
SPM results can be viewed interactively (clicking in the glass brain or table)
34
Additional SPM Plots Can also use several additional views in SPM:
Slices: several contiguous slices
Sections: several orthogonal slices
Render: surface map showing near/at-surface activations
In general, want to plot what you are using to make your inferences (especially for publication)