Slide 1: Statistical Parametric Mapping
Will Penny, Wellcome Trust Centre for Neuroimaging, University College London, UK. LSHTM, UCL, Jan 14, 2009.
Slide 2: Statistical Parametric Mapping
[Figure: the SPM analysis pipeline — image time-series are realigned, normalised to a template, and smoothed with a kernel; a general linear model (design matrix) gives parameter estimates; statistical inference with random field theory yields a p < 0.05 thresholded map.]
Slide 3: Outline
- Voxel-wise General Linear Models
- Random Field Theory
- Bayesian modelling
Slide 4: Voxel-wise GLMs
[Figure: a single-voxel time series (intensity over time) enters model specification and parameter estimation; a hypothesis and statistic are then computed at each voxel to build the SPM.]
Slide 5: Temporal convolution model for the BOLD response
Convolve the stimulus function with a canonical hemodynamic response function (HRF).
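For concreteness, here is a minimal sketch of this convolution in Python/NumPy (SPM itself is MATLAB). The double-gamma HRF parameters, TR, and boxcar design below are illustrative assumptions, not values taken from the slides.

```python
# Sketch: build a stimulus function and convolve it with a canonical
# (double-gamma) HRF. Parameter values roughly follow common defaults
# and are assumptions for illustration only.
import numpy as np
from scipy.stats import gamma

TR = 1.0                      # repetition time in seconds (assumed)
n_scans = 200
t = np.arange(0, 32, TR)      # HRF support, 0..32 s

# Double-gamma HRF: positive response peaking near 5-6 s, later undershoot
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Boxcar stimulus function: alternating 10-scan blocks of rest and task
stimulus = np.tile(np.r_[np.zeros(10), np.ones(10)], n_scans // 20)

# Convolve and truncate to the scan length -> one regressor for the design matrix
regressor = np.convolve(stimulus, hrf)[:n_scans]
```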
Slide 6: General Linear Model
Y = Xβ + e, where the error e has covariance Cov(e). The model is specified by the design matrix X and the assumptions about e. N: number of scans; p: number of regressors.
Slide 7: Estimation
1. ReML algorithm (estimates the error covariance hyperparameters).
2. Weighted Least Squares (estimates the parameters given that covariance).
Friston et al. 2002, NeuroImage.
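As a hedged illustration of step 2 (not SPM's actual code), the sketch below applies weighted least squares at a single voxel, assuming the error covariance V has already been estimated, e.g. by ReML in step 1. All names and toy values are ours.

```python
# Sketch: weighted least squares for one voxel's time series with a known
# error covariance V. Toy sizes and values; not SPM code.
import numpy as np

def wls_fit(y, X, V):
    """Weighted least squares: beta = (X' V^-1 X)^-1 X' V^-1 y."""
    Vinv = np.linalg.inv(V)
    cov_beta = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov_beta @ X.T @ Vinv @ y
    return beta, cov_beta

# Toy data: N scans, p regressors, AR(1)-shaped error covariance
rng = np.random.default_rng(0)
N, p, rho = 200, 3, 0.3
X = np.column_stack([rng.standard_normal((N, p - 1)), np.ones(N)])
V = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
y = X @ np.array([2.0, 0.0, 1.0]) + rng.multivariate_normal(np.zeros(N), V)

beta_hat, cov_beta = wls_fit(y, X, V)
```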
Slide 8: Contrasts & SPMs
Q: activation during listening? Choose a contrast vector c that picks out the listening regressor. Null hypothesis: cᵀβ = 0.
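A self-contained sketch of forming a contrast and a voxel t-statistic under the null cᵀβ = 0; the design, data, and ordinary-least-squares variance estimate are illustrative assumptions rather than SPM's exact (whitened, effective degrees-of-freedom) computation.

```python
# Sketch: fit a GLM at one voxel by ordinary least squares and form the
# t-statistic for a contrast c under the null c' beta = 0. Toy values only.
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 3
X = np.column_stack([rng.standard_normal((N, p - 1)), np.ones(N)])
y = X @ np.array([2.0, 0.0, 1.0]) + rng.standard_normal(N)

beta_hat, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (N - p)                  # residual variance estimate
cov_beta = sigma2 * np.linalg.inv(X.T @ X)

c = np.array([1.0, 0.0, 0.0])                     # "activation during listening?"
t_stat = (c @ beta_hat) / np.sqrt(c @ cov_beta @ c)
print(f"t({N - p}) = {t_stat:.2f}")
```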
Slide 9: Outline
- Voxel-wise General Linear Models
- Random Field Theory
- Bayesian modelling
Slide 10: Inference for Images
[Figure: example images of noise, signal, and signal + noise.]
Slide 11: Use of 'uncorrected' p-value, α = 0.1
[Figure: nine thresholded null images; the percentage of null pixels that are false positives ranges from 9.5% to 12.5%.]
Using an 'uncorrected' p-value of 0.1 will lead us to conclude, on average, that 10% of voxels are active when they are not. This is clearly undesirable. To correct for this we can define a null hypothesis for images of statistics.
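A quick simulation of the point the slide makes (image size, seed and number of images are arbitrary choices of ours): thresholding pure-noise images at an uncorrected α = 0.1 flags roughly 10% of voxels in every image.

```python
# Sketch: each image is pure noise, so with alpha = 0.1 about 10% of voxels
# exceed the voxel-wise threshold by chance in every image.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
alpha = 0.1
u = norm.ppf(1 - alpha)                  # one-sided z threshold for alpha = 0.1

rates = []
for _ in range(9):                       # nine null images, as on the slide
    z = rng.standard_normal((100, 100))
    rates.append((z > u).mean())
print([f"{100 * r:.1f}%" for r in rates])   # each close to 10%
```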
Slide 12: Family-wise Null Hypothesis
- Activation is zero everywhere.
- If we reject the null hypothesis at any voxel, we reject the family-wise null hypothesis.
- A false positive anywhere in the image gives a family-wise error (FWE).
- Family-wise error (FWE) rate = 'corrected' p-value.
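One way to see the 'corrected' rate operationally (a sketch assuming independent voxels, which is not the setting RFT is built for): the FWE rate is the probability that the maximum statistic in a null image exceeds the threshold, and a Bonferroni threshold controls it at about 0.05.

```python
# Sketch: estimate the FWE rate as P(max statistic > u) over many null images,
# using a Bonferroni voxel-wise threshold. Independent-voxel assumption only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_voxels, alpha = 100 * 100, 0.05
u = norm.ppf(1 - alpha / n_voxels)          # Bonferroni voxel-wise threshold

fwe = np.mean([rng.standard_normal(n_voxels).max() > u
               for _ in range(2000)])
print(f"threshold u = {u:.2f}, estimated FWE rate = {fwe:.3f}")   # ~0.05
```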
Slide 13: Use of 'uncorrected' vs. 'corrected' p-value, α = 0.1
[Figure: the same null images thresholded with an 'uncorrected' p-value of 0.1, giving false positives in every image, and with a FWE-'corrected' p-value of 0.1, giving false positives in only a small fraction of images.]
Slide 14: Spatial correlation
[Figure: thresholded images of independent voxels versus spatially correlated voxels.]
Slide 15: Random Field Theory
- Consider a statistic image as a discretisation of a continuous underlying random field.
- Use results from continuous random field theory.
[Figure: discretisation of a continuous random field onto a voxel grid.]
Slide 16: Euler Characteristic (EC)
- A topological measure: threshold an image at u; the EC is the number of blobs.
- At high thresholds u: Prob(one or more blobs) ≈ average EC, so the FWE rate α ≈ average EC.
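The sketch below counts blobs in a thresholded smooth noise image with SciPy, which is what the EC reduces to at high thresholds (where blobs have no holes or handles). Smoothness and threshold values are illustrative assumptions.

```python
# Sketch: the EC at a high threshold u is the number of connected
# suprathreshold clusters ("blobs") in the excursion set.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
z = ndimage.gaussian_filter(rng.standard_normal((100, 100)), sigma=4)
z /= z.std()                         # re-standardise after smoothing

u = 2.5
excursion = z > u                    # excursion set at threshold u
_, n_blobs = ndimage.label(excursion)
print(f"EC (number of blobs) at u = {u}: {n_blobs}")
```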
Slide 17: Example – 2D Gaussian images
α = R (4 ln 2) (2π)^(-3/2) u exp(-u²/2)
where u is the voxel-wise threshold and R is the number of resolution elements (RESELS). For N = 100×100 voxels with smoothness FWHM = 10, R = 10×10 = 100.
Slide 18: Example – 2D Gaussian images
α = R (4 ln 2) (2π)^(-3/2) u exp(-u²/2)
For R = 100 and α = 0.05, RFT gives u = 3.8.
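A small numerical check of this number (the formula is the one on the slide; the root-finding bracket is our choice):

```python
# Sketch: invert the 2D Gaussian expected-EC formula
# alpha = R (4 ln 2) (2*pi)^(-3/2) u exp(-u^2/2)
# to find the corrected threshold u; reproduces u ~ 3.8 for R=100, alpha=0.05.
import numpy as np
from scipy.optimize import brentq

def expected_ec(u, R):
    return R * 4 * np.log(2) * (2 * np.pi) ** -1.5 * u * np.exp(-u ** 2 / 2)

R, alpha = 100, 0.05
u = brentq(lambda x: expected_ec(x, R) - alpha, 2.0, 6.0)
print(f"RFT threshold u = {u:.2f}")   # ~3.8
```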
Slide 19: SPM results
Slide 20: SPM results...
Slide 21: Outline
- Voxel-wise General Linear Models
- Random Field Theory
- Bayesian Modelling
Slide 22: Motivation
Even without applied spatial smoothing, activation maps (and maps of e.g. AR coefficients) have spatial structure. [Figure: a contrast map and an AR(1) coefficient map.] We can increase the sensitivity of our inferences by smoothing the data with Gaussian kernels (SPM2). This is worthwhile, but crude. Can we do better with a spatial model (SPM5)? Aim: for SPM5 to remove the need for spatial smoothing just as SPM2 removed the need for temporal smoothing.
Slide 23: The Model
Y = XW + E, with Y [T×N], X [T×K], W [K×N], E [T×N]: T scans, K regressors, N voxels. The errors follow a voxel-wise AR model whose coefficients are spatially pooled. [Figure: graphical model relating the data Y, regression coefficients W and AR coefficients A to the observation noise and spatial precision parameters (λ, α, β) and their hyperparameters (q1, q2, r1, r2, u1, u2).]
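To make the dimensions concrete, here is a sketch that generates data of this form with a single pooled AR(1) error coefficient; all sizes and values are illustrative, not those used in the paper.

```python
# Sketch: generate Y [T x N] = X [T x K] W [K x N] + E [T x N], with AR(1)
# errors whose coefficient is shared (pooled) across voxels. Toy sizes only.
import numpy as np

rng = np.random.default_rng(5)
T, K, N = 200, 4, 64 * 64          # scans, regressors, voxels
a = 0.3                            # pooled AR(1) coefficient

X = rng.standard_normal((T, K))
W = rng.standard_normal((K, N))    # regression coefficients, one column per voxel

E = np.zeros((T, N))
eps = rng.standard_normal((T, N))
for t in range(1, T):              # AR(1) noise: e_t = a * e_{t-1} + eps_t
    E[t] = a * E[t - 1] + eps[t]

Y = X @ W + E                      # observed time series, T x N
```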
Slide 24: Synthetic Data 1: from Laplacian Prior
[Figure: a 32×32 image of regression coefficients sampled from the Laplacian prior (reshape(w1,32,32)) and the resulting time series.]
Slide 25: Prior, Likelihood and Posterior
In the prior, W factorises over k and A factorises over p. The likelihood factorises over n. The posterior over W therefore doesn't factorise over k or n: it is a Gaussian with an NK-by-NK full covariance matrix. This is unwieldy even to store, let alone invert, so exact inference is intractable.
Slide 26: Variational Bayes
[Figure: the log-evidence L decomposes into the negative free energy F plus the KL divergence between the approximate and true posterior; since KL ≥ 0, F is a lower bound on L.]
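Written out, the decomposition the figure refers to is the standard variational identity (textbook form, not text recovered from the slide): maximising F with respect to q both tightens the bound on the log-evidence L and shrinks the KL term.

```latex
\underbrace{\log p(Y)}_{L}
  \;=\; \underbrace{\int q(\theta)\,\log\frac{p(Y,\theta)}{q(\theta)}\,d\theta}_{F}
  \;+\; \underbrace{\int q(\theta)\,\log\frac{q(\theta)}{p(\theta\mid Y)}\,d\theta}_{\mathrm{KL}}
```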
Slide 27: Variational Bayes
If you assume the posterior factorises, q(θ) = Π_i q(θ_i), then F can be maximised by letting q(θ_i) ∝ exp( <log p(Y,θ)> ), where the expectation <·> is taken with respect to the other factors q(θ_j), j ≠ i.
Slide 28: Variational Bayes
In the prior, W factorises over k and A factorises over p. In the chosen approximate posterior, W and A factorise over n. So, in the posterior for W we only have to store and invert N K-by-K covariance matrices.
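As a worked illustration of the saving (sizes assumed by us, not stated on the slide): with N = 64×64 = 4096 voxels and K = 4 regressors, a full NK-by-NK posterior covariance has (4096·4)² ≈ 2.7×10⁸ entries, roughly 2 GB in double precision, whereas N separate K-by-K matrices need only 4096·16 ≈ 6.6×10⁴ entries, well under a megabyte.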
Slide 29: Updating the approximate posterior
- Regression coefficients, W
- AR coefficients, A
- Spatial precisions for W
- Spatial precisions for A
- Observation noise
Slide 30: Synthetic Data 1: from Laplacian Prior
[Figure: the synthetic data plotted against time (x versus t).]
Slide 31: [Figure: negative free energy F versus iteration number.]
Slide 32: Least Squares vs. VB (Laplacian Prior)
[Figure: coefficient maps estimated by least squares and by VB with the Laplacian prior.]
Coefficients = 1024; 'coefficient RESELS' = 366.
Slide 33: Synthetic Data II: blobs
[Figure: the true activation pattern and estimates from smoothing, the global prior, and the Laplacian prior.]
Slide 34: [Figure: sensitivity versus 1 − specificity (ROC curves) for the competing methods.]
Slide 35: Event-related fMRI: faces versus chequerboard
[Figure: activation maps from smoothing, the global prior, and the Laplacian prior.]
Slide 36: Event-related fMRI: familiar faces versus unfamiliar faces
[Figure: activation maps from smoothing, the global prior, and the Laplacian prior.]
Penny WD, Trujillo-Barreto NJ, Friston KJ. Bayesian fMRI time series analysis with spatial priors. NeuroImage 2005 Jan 15;24(2):350-362.
Slide 37: Summary
- Voxel-wise General Linear Models
- Random Field Theory
- Bayesian Modelling
Harrison LM, Penny W, Flandin G, Ruff CC, Weiskopf N, Friston KJ. Graph-partitioned spatial priors for functional magnetic resonance images. NeuroImage 2008 Dec;43(4):694-707.