Slide 1: Classical (frequentist) inference
Methods & models for fMRI data analysis, 22 October 2008
Klaas Enno Stephan
Branco Weiss Laboratory (BWL), Institute for Empirical Research in Economics, University of Zurich
Functional Imaging Laboratory (FIL), Wellcome Trust Centre for Neuroimaging, University College London
With many thanks for slides & images to: FIL Methods group, especially Guillaume Flandin
Slide 2: Overview of SPM
[Figure: the SPM analysis pipeline. An image time-series is realigned, normalised to a template, and smoothed with a Gaussian kernel; a general linear model (design matrix) yields parameter estimates, from which a statistical parametric map (SPM) is formed; statistical inference uses Gaussian field theory at p < 0.05.]
Slide 3: Voxel-wise time series analysis
[Figure: the BOLD signal over time at a single voxel gives a single-voxel time series; model specification and parameter estimation are followed by hypothesis testing with a statistic, producing the SPM.]
Slide 4: Overview
- A recap of model specification and parameter estimation
- Hypothesis testing
- Contrasts and estimability
- t-tests
- F-tests
- Design orthogonality
- Design efficiency
Slide 5: Mass-univariate analysis: voxel-wise GLM
y = Xβ + e
The model is specified by (1) the design matrix X and (2) assumptions about the error e. With N scans and p regressors, y is an N-vector and X is an N × p matrix. The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds.
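To make the N × p structure concrete, here is a minimal NumPy sketch of assembling a toy design matrix; the block lengths and the boxcar-plus-constant design are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Illustrative block design: one boxcar regressor (10 scans on, 10 off)
# plus a constant column. N = 80 scans, p = 2 regressors.
N = 80
boxcar = np.tile(np.r_[np.ones(10), np.zeros(10)], N // 20)
X = np.column_stack([boxcar, np.ones(N)])  # N x p design matrix
print(X.shape)                             # (80, 2)
```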
Slide 6: Parameter estimation
y = Xβ + e
Objective: estimate the parameters β so as to minimize the sum of squared errors eᵀe. Assuming i.i.d. errors, this is ordinary least squares (OLS) estimation.
Slide 7: OLS parameter estimation
The Ordinary Least Squares (OLS) estimators are
β̂ = (XᵀX)⁻¹Xᵀy.
These estimators minimise the residual sum of squares eᵀe; they are found by solving the normal equations XᵀXβ̂ = Xᵀy. Under i.i.d. assumptions, the OLS estimates correspond to the ML estimates.
NB: the precision of our estimates depends on the design matrix!
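A minimal sketch of OLS estimation on simulated data, reusing the toy boxcar design from above; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 80
boxcar = np.tile(np.r_[np.ones(10), np.zeros(10)], N // 20)
X = np.column_stack([boxcar, np.ones(N)])             # toy design matrix
y = X @ np.array([2.0, 0.5]) + rng.normal(0, 0.3, N)  # simulated data, i.i.d. noise

# OLS estimator: beta_hat = (X'X)^{-1} X'y, via a linear solve for stability
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
dof = N - np.linalg.matrix_rank(X)
sigma2_hat = resid @ resid / dof                      # unbiased noise-variance estimate
print(beta_hat, sigma2_hat)
```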
Slide 8: Maximum likelihood (ML) estimation
The probability density function p(y | β) (with β fixed) becomes the likelihood function (with y fixed); the ML estimator maximises the likelihood.
For cov(e) = σ²V, the ML estimator is equivalent to a weighted least squares (WLS) estimate:
β̂ = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y.
For cov(e) = σ²I, the ML estimator is equivalent to the OLS estimator:
β̂ = (XᵀX)⁻¹Xᵀy.
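A sketch of the WLS/whitening equivalence, assuming an illustrative AR(1)-like error covariance V (SPM estimates V by ReML; this example simply fixes one):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
X = np.column_stack([np.linspace(0, 1, N), np.ones(N)])

# Illustrative AR(1)-like error covariance V (not SPM's ReML estimate)
V = 0.8 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
L = np.linalg.cholesky(V)
y = X @ np.array([1.0, 0.2]) + L @ rng.normal(size=N)  # correlated noise

# WLS / ML estimator for cov(e) = sigma^2 V
Vinv = np.linalg.inv(V)
beta_wls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Equivalent: pre-whiten with W = L^{-1} (so W'W = V^{-1}), then plain OLS
W = np.linalg.inv(L)
beta_white = np.linalg.lstsq(W @ X, W @ y, rcond=None)[0]
assert np.allclose(beta_wls, beta_white)
```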
Slide 9: SPM: t-statistic based on ML estimates
For a contrast such as cᵀ = [1 0 0 0 0 0 0 0 0 0 0], the t-statistic is formed from the ML (WLS) parameter estimates, with the error covariance V obtained by ReML estimation; for brevity,
t = cᵀβ̂ / sqrt(var̂(cᵀβ̂)).
Slide 10: Hypothesis testing
The null hypothesis H0 = "there is no effect", i.e. cᵀβ = 0. This is what we want to disprove. The alternative hypothesis H1 represents the outcome of interest.
To test a hypothesis, we construct a test statistic T. The test statistic summarises the evidence about H0: typically, it is small in magnitude when H0 is true and large when H0 is false. We need to know the distribution of T under the null hypothesis (the null distribution of T).
Slide 11: Hypothesis testing
We observe a test statistic t, a realisation of T. A p-value summarises the evidence against H0: it is the probability of observing t, or a more extreme value, under the null hypothesis, p = P(T ≥ t | H0).
Type I error α: the acceptable false positive rate. The threshold u_α, defined by P(T ≥ u_α | H0) = α, controls the false positive rate. Conclusion: we reject H0 in favour of H1 if t > u_α.
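A small SciPy sketch of the p-value and threshold computation, assuming a Student-t null distribution with illustrative values for t and the degrees of freedom:

```python
from scipy import stats

t_obs, dof = 2.4, 78               # illustrative observed statistic and df
p = stats.t.sf(t_obs, dof)         # p = P(T >= t_obs | H0), one-tailed
u_alpha = stats.t.isf(0.05, dof)   # threshold with P(T >= u_alpha | H0) = 0.05
print(p, t_obs > u_alpha)          # reject H0 if t exceeds the threshold
```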
Slide 12: One cannot accept the null hypothesis (one can just fail to reject it)
Absence of evidence is not evidence of absence! If we do not reject H0, then all we can say is that there is not enough evidence in the data to reject H0. This does not mean that there is strong evidence to accept H0.
What does this mean for neuroimaging results based on classical statistics? A failure to find an "activation" in a particular area does not mean we can conclude that this area is not involved in the process of interest.
Slide 13: Contrasts
A contrast selects a specific effect of interest: a contrast c is a vector of length p, and cᵀβ is a linear combination of regression coefficients. We are usually not interested in the whole parameter vector. Examples:
cᵀ = [1 0 0 0 0 ...] gives cᵀβ = 1·β1 + 0·β2 + 0·β3 + 0·β4 + 0·β5 + ...
cᵀ = [0 -1 1 0 0 ...] gives cᵀβ = -β2 + β3.
Under i.i.d. assumptions, cᵀβ̂ ~ N(cᵀβ, σ² cᵀ(XᵀX)⁻¹c).
NB: the precision of our estimates depends on the design matrix and the chosen contrast!
Slide 14: Estimability of a contrast
If X is not of full rank, then different parameter vectors can give identical predictions. The parameters are then 'non-unique', 'non-identifiable' or 'non-estimable'. For such models XᵀX is not invertible, so we must resort to generalised inverses (SPM uses the Moore-Penrose pseudo-inverse).
Example: a one-way ANOVA (unpaired two-sample t-test) with a design matrix of three columns, one indicator per group plus a mean column (rows [1 0 1] for group 1 and [0 1 1] for group 2), has rank(X) = 2. The contrasts [1 0 0], [0 1 0] and [0 0 1] are not estimable; [1 0 1], [0 1 1], [1 -1 0] and [0.5 0.5 1] are estimable. (In SPM's display, non-estimable parameter images are shown in gray: they are not uniquely specified.)
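The estimability condition can be checked numerically: cᵀβ is estimable exactly when c lies in the row space of X. A sketch using the Moore-Penrose pseudo-inverse (the helper name is ours):

```python
import numpy as np

# Rank-deficient one-way ANOVA design: two group columns plus a mean column
X = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 1],
              [0, 1, 1]], dtype=float)   # rank 2, not full rank

def is_estimable(c, X, tol=1e-10):
    # c'beta is estimable iff c lies in the row space of X,
    # i.e. c' pinv(X) X reproduces c'
    return np.allclose(c @ np.linalg.pinv(X) @ X, c, atol=tol)

print(is_estimable(np.array([1.0, 0.0, 0.0]), X))   # False: group-1 effect alone
print(is_estimable(np.array([1.0, -1.0, 0.0]), X))  # True: group difference
print(is_estimable(np.array([0.5, 0.5, 1.0]), X))   # True
```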
Slide 15: t-contrasts – SPM{t}
Question: is the box-car amplitude greater than zero, i.e. is β1 = cᵀβ > 0 for cᵀ = [1 0 0 0 0 0 0 0]?
Null hypothesis: H0: cᵀβ = 0.
Test statistic: t = (contrast of estimated parameters) / (variance estimate) = cᵀβ̂ / sqrt(σ̂² cᵀ(XᵀX)⁻¹c), which follows a Student t-distribution under H0.
Slide 16: t-contrasts in SPM
For a given contrast c, SPM combines the beta_???? images (parameter estimates β̂) into a con_???? image (cᵀβ̂) and divides by the variance estimate derived from the ResMS image (σ̂²) to produce the spmT_???? image, the SPM{t}.
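A per-voxel sketch of this computation on simulated data; it mirrors the quantities behind the beta, con, ResMS and spmT images but does not read SPM's actual files:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 80
boxcar = np.tile(np.r_[np.ones(10), np.zeros(10)], N // 20)
X = np.column_stack([boxcar, np.ones(N)])
y = X @ np.array([1.5, 0.3]) + rng.normal(0, 1.0, N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # role of the beta images
resid = y - X @ beta_hat
dof = N - np.linalg.matrix_rank(X)
sigma2 = resid @ resid / dof                  # role of the ResMS image

c = np.array([1.0, 0.0])                      # box-car amplitude > 0 ?
con = c @ beta_hat                            # role of the con image
t = con / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
p = stats.t.sf(t, dof)                        # one-tailed p-value
print(t, p)
```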
Slide 17: t-contrast: a simple example
Q: activation during listening? Passive word listening versus rest, cᵀ = [1 0]. Null hypothesis: β1 = 0.
[Figure: design matrix.]
SPM results (height threshold T = 3.2057, p < 0.001), voxel-level statistics:

   T       Z     p (unc.)    x    y    z (mm)
 13.94    Inf     0.000     -63  -27   15
 12.04    Inf     0.000     -48  -33   12
 11.82    Inf     0.000     -66  -21    6
 13.72    Inf     0.000      57  -21   12
 12.29    Inf     0.000      63  -12   -3
  9.89    7.83    0.000      57  -39    6
  7.39    6.36    0.000      36  -30  -15
  6.84    5.99    0.000      51    0   48
  6.36    5.65    0.000     -63  -54   -3
  6.19    5.53    0.000     -30  -33  -18
  5.96    5.36    0.000      36  -27    9
  5.84    5.27    0.000     -45   42    9
  5.44    4.97    0.000      48   27   24
  5.32    4.87    0.000      36  -27   42
Slide 18: Student's t-distribution
- first described by William Sealy Gosset, a statistician at the Guinness brewery in Dublin
- the t-statistic is a signal-to-noise measure: t = effect / standard deviation of the effect
- for small samples the t-distribution has heavier tails than the normal distribution, and it approaches the normal distribution as the degrees of freedom increase
- t-contrasts are simply linear combinations of the betas
- the t-statistic does not depend on the scaling of the regressors or on the scaling of the contrast
- unilateral (one-tailed) test: H0: cᵀβ = 0 vs. H1: cᵀβ > 0
[Figure: probability density function of Student's t-distribution for ν = 1, 2, 5, 10 and ∞.]
Slide 19: F-test: the extra-sum-of-squares principle
Model comparison: full model X = [X0 X1] versus reduced (nested) model X0. Null hypothesis H0: the true model is X0, i.e. the regressors in X1 explain nothing beyond X0.
The F-statistic compares the variance explained in addition by the full model with the unexplained variance under the full model:
F = [(RSS0 − RSS) / ν1] / [RSS / ν2],   with ν1 = rank(X) − rank(X0) and ν2 = N − rank(X),
where RSS0 and RSS are the residual sums of squares of the reduced and the full model, respectively.
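A sketch of the extra-sum-of-squares F-test, assuming illustrative regressors for X0 (confounds) and X1 (effect of interest):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 80
x1 = np.sin(np.linspace(0, 6 * np.pi, N))                  # effect of interest (X1)
X0 = np.column_stack([np.linspace(0, 1, N), np.ones(N)])   # reduced model (X0)
X = np.column_stack([x1, X0])                              # full model [X1 X0]
y = X @ np.array([0.8, 0.5, 0.1]) + rng.normal(0, 1.0, N)

def rss(X, y):
    # residual sum of squares of the least-squares fit
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    return r @ r

nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = N - np.linalg.matrix_rank(X)
F = ((rss(X0, y) - rss(X, y)) / nu1) / (rss(X, y) / nu2)
p = stats.f.sf(F, nu1, nu2)
print(F, p)
```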
Slide 20: F-test: multidimensional contrasts – SPM{F}
Tests multiple linear hypotheses. H0: the true model is X0, i.e. H0: β3 = β4 = ... = β9 = 0, which is equivalent to testing H0: cᵀβ = 0 with a contrast matrix whose rows each pick out one of the tested parameters:

cᵀ = [0 0 1 0 ... 0
      0 0 0 1 ... 0
      ...
      0 0 0 0 ... 1]

Partitioning the design matrix into the effects of interest X1 (β3–β9) and the reduced model X0 and testing the full against the reduced model yields the SPM{F6,322}.
Slide 21: F-contrast in SPM
From the beta_???? images (parameter estimates), SPM computes the ess_???? images (the extra sum of squares, RSS0 − RSS) and, together with the ResMS image (residual variance), forms the spmF_???? images, the SPM{F}.
Slide 22: F-test example: movement-related effects
To assess movement-related activation: there is a lot of residual movement-related artifact in the data (despite spatial realignment), which tends to be concentrated near the boundaries of tissue types. By including the realignment parameters in our design matrix, we can "regress out" linear components of subject movement, reducing the residual error and hence improving our statistics for the effects of interest.
Slide 23: Differential F-contrasts
Think of it as constructing three regressors from the three differences and complementing this new design matrix such that the data can be fitted in exactly the same way (same error, same fitted data).
Slide 24: F-test: a few remarks
- F-tests can be viewed as testing for the additional variance explained by a larger model with respect to a simpler (nested) model: model comparison.
- An F-test assesses a weighted sum of squares of one or several linear combinations of the regression coefficients β. In practice, the partitioning of X into [X0 X1] is specified by a multidimensional contrast.
- Hypotheses: null hypothesis H0: β1 = β2 = ... = βp = 0; alternative hypothesis H1: at least one βk ≠ 0.
- F-tests are not directional: when testing a one-dimensional contrast with an F-test, for example β1 − β2, the result is the same as for β2 − β1. It is exactly the square of the corresponding t-test, testing for both positive and negative effects, as the sketch below illustrates.
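A numerical check of the last remark on simulated data, showing that the one-dimensional F-statistic equals the squared t-statistic and is the same for c and -c:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 60
X = np.column_stack([rng.normal(size=N), rng.normal(size=N), np.ones(N)])
y = X @ np.array([0.5, -0.2, 1.0]) + rng.normal(size=N)

beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
sigma2 = resid @ resid / (N - 3)
XtXinv = np.linalg.inv(X.T @ X)

c = np.array([1.0, -1.0, 0.0])          # beta1 - beta2
t = (c @ beta) / np.sqrt(sigma2 * c @ XtXinv @ c)

# One-dimensional F for the same contrast: equals t^2, and flipping the
# sign of c leaves it unchanged (the F-test is not directional)
F = (c @ beta) ** 2 / (sigma2 * c @ XtXinv @ c)
assert np.isclose(F, t ** 2)
print(t, F)
```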
Slide 25: Example: a suboptimal model
[Figure: true signal (blue, peaking at 3 s) and observed signal (--); the model (green) peaks at 6 s. The fit (β1 = 0.2, mean = 0.11) misses much of the response: the test for the green regressor is not significant, and the residual still contains some signal.]
Slide 26: Example: a suboptimal model
Y = Xβ + e, with β1 = 0.22, β2 = 0.11 and residual variance 0.3.
Testing β1 = 0 gives p = 0.1 (t-test) and p = 0.2 (F-test).
Slide 27: A better model
[Figure: true signal (blue ---) and observed signal; the model now contains the green regressor and a red regressor, the temporal derivative of the green one. The total fit (blue) and the partial fits (green and red) track the adjusted signal, and the residual has a smaller variance. The t-test of the green regressor is significant, the t-test of the red regressor very significant, and the F-test very significant.]
Slide 28: A better model
Y = Xβ + e, with β1 = 0.22, β2 = 2.15, β3 = 0.11 and residual variance 0.2.
Testing β1 = 0 gives p = 0.07 (t-test); testing β1 = β2 = 0 gives p = 0.000001 (F-test).
Slide 29: Correlation among regressors
[Figure: geometric view of y projected onto the plane spanned by the correlated regressors x1 and x2, and onto x1 and the orthogonalised regressor x2*.]
Correlated regressors mean that explained variance is shared between regressors. When x2 is orthogonalized with regard to x1, only the parameter estimate for x1 changes, not that for x2!
Slide 30: Design orthogonality
For each pair of columns of the design matrix, the orthogonality matrix depicts the magnitude of the cosine of the angle between them, with the range 0 to 1 mapped from white to black. The cosine of the angle between two vectors a and b is
cos θ = aᵀb / (‖a‖ ‖b‖).
If both vectors have zero mean, the cosine of the angle between them equals the correlation between the two variates.
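A sketch of the orthogonality-matrix computation for a toy design; the gray-level display is SPM's, here we just print the numbers:

```python
import numpy as np

def orthogonality_matrix(X):
    # |cos(angle)| between every pair of design-matrix columns;
    # SPM's display maps 0 -> white and 1 -> black
    norms = np.linalg.norm(X, axis=0)
    return np.abs(X.T @ X) / np.outer(norms, norms)

# Toy design: two non-overlapping boxcars plus a constant column
X = np.column_stack([np.r_[np.ones(10), np.zeros(10)],
                     np.r_[np.zeros(10), np.ones(10)],
                     np.ones(20)])
print(orthogonality_matrix(X).round(2))
```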
Slide 31: Correlated regressors
[Figure: true signal, model (green and red regressors), fit (blue: total fit) and residual.]
Slide 32: Correlated regressors
Y = Xβ + e, with β1 = 0.79, β2 = 0.85, β3 = 0.06 and residual variance 0.3.
Testing β1 = 0 gives p = 0.08 (t-test); testing β2 = 0 gives p = 0.07 (t-test); testing β1 = β2 = 0 gives p = 0.002 (F-test).
Slide 33: After orthogonalisation
The red regressor has been orthogonalised with respect to the green one, i.e. we removed everything from it that correlates with the green regressor.
[Figure: true signal, model (green and red), fit (does not change) and residuals (do not change).]
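A simulation sketch of this orthogonalisation and its effect, with made-up regressors; it verifies that the fit and the estimate for the orthogonalised regressor are unchanged, while the other estimate changes:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100
x1 = rng.normal(size=N)
x2 = 0.7 * x1 + rng.normal(scale=0.5, size=N)    # correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=N)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([x1, x2, np.ones(N)])
b = fit(X, y)

# Orthogonalise x2 w.r.t. x1: remove everything in x2 that correlates with x1
x2_orth = x2 - (x1 @ x2 / (x1 @ x1)) * x1
X_orth = np.column_stack([x1, x2_orth, np.ones(N)])
b_orth = fit(X_orth, y)

# The estimate for x1 changes (it absorbs the shared variance); the estimate
# for the orthogonalised x2 stays the same, as do the fit and the residuals.
print(b, b_orth)
assert np.isclose(b[1], b_orth[1])
assert np.allclose(X @ b, X_orth @ b_orth)
```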
Slide 34: After orthogonalisation
Y = Xβ + e, with β1 = 1.47 (was 0.79: does change), β2 = 0.85 and β3 = 0.06 (do not change), and residual variance 0.3.
Testing β1 = 0 now gives p = 0.0003 (t-test); testing β2 = 0 gives p = 0.07 (t-test); testing β1 = β2 = 0 gives p = 0.002 (F-test), as before.
Slide 35: Design efficiency
The aim is to minimize the standard error of a t-contrast (i.e. the denominator of the t-statistic). This is equivalent to maximizing the efficiency ε:
ε(σ², c, X) = [σ² cᵀ(XᵀX)⁻¹c]⁻¹   (noise variance × design variance).
If we assume that the noise variance is independent of the specific design, ε(c, X) = [cᵀ(XᵀX)⁻¹c]⁻¹.
This is a relative measure: all we can say is that one design is more efficient than another (for a given contrast).
NB: efficiency depends on the design matrix and the chosen contrast!
Slide 36: Design efficiency
XᵀX reflects the covariance of the regressors in the design matrix; efficiency decreases with increasing covariance between regressors. But note that efficiency differs across contrasts, e.g.:
cᵀ = [1 0]  → ε = 5.26
cᵀ = [1 1]  → ε = 20
cᵀ = [1 -1] → ε = 1.05
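A sketch of the efficiency computation; the design below is an arbitrary pair of overlapping boxcars, so the resulting numbers differ from the slide's example:

```python
import numpy as np

def efficiency(X, c):
    # Relative efficiency for contrast c, with the noise variance dropped:
    # eff = 1 / (c' (X'X)^{-1} c)
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)

# Two overlapping (hence correlated) boxcar regressors, chosen arbitrarily
X = np.column_stack([np.r_[np.ones(10), np.zeros(10)],
                     np.r_[np.zeros(5), np.ones(10), np.zeros(5)]])
for c in ([1.0, 0.0], [1.0, 1.0], [1.0, -1.0]):
    print(c, efficiency(X, np.array(c)))
```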
Slide 37: Example: working memory
[Figure: stimulus and response events over time for three variants of a working-memory design.]
A: the response follows each stimulus with a (short) fixed delay → correlation = -.65, efficiency ([1 0]) = 29
B: jittering the delay between stimuli and responses → correlation = +.33, efficiency ([1 0]) = 40
C: requiring a response only for half of all trials (randomly chosen) → correlation = -.24, efficiency ([1 0]) = 47
Slide 38: Thank you