1
SPM short course – May 2008: Linear Models and Contrasts. Jean-Baptiste Poline, Neurospin, I2BM, CEA Saclay, France
2
Overview: realignment & coregistration, smoothing, normalisation (anatomical reference, spatial filter) -> General Linear Model (design matrix, linear fit, adjusted data) -> your question: a contrast -> statistical map / statistical image -> uncorrected p-values, or corrected p-values via Random Field Theory.
3
Plan
- REPEAT: modelling and fitting the data with a Linear Model
- But what do we test exactly?
- Make sure we understand the testing procedures: t- and F-tests
- Examples – almost real
4
fMRI temporal series -> statistical image (SPM). One voxel = one test (t, F, ...): each voxel time course (amplitude over time) is fitted with the General Linear Model, yielding a statistical image.
5
Regression example… voxel time series = β1 × box-car reference function + β2 × mean value + error. Fit the GLM.
6
Regression example… the same decomposition, with the box-car reference function on a different scale. Fit the GLM.
7
…revisited: matrix form. Y = β1 · f(t) + β2 · 1 + ε, i.e. Y = Xβ + ε with the regressors as the columns of X.
8
Box-car regression: design matrix… Y = Xβ + ε, with Y the data vector (voxel time series), X the design matrix, β the parameters, and ε the error vector.
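The box-car design matrix of the slide can be sketched in a few lines of numpy. The scan count and block length below are illustrative assumptions, not the course's data:

```python
import numpy as np

# A sketch of the box-car design matrix (timings are assumed, not from the course).
n_scans = 84          # number of fMRI volumes (assumed)
block_len = 7         # scans per rest/task block (assumed)

# Box-car reference function: alternating off/on blocks of 0s and 1s.
boxcar = np.tile(np.r_[np.zeros(block_len), np.ones(block_len)],
                 n_scans // (2 * block_len))

# Design matrix X = [box-car, constant]: Y = X @ beta + error.
X = np.column_stack([boxcar, np.ones(n_scans)])
```

The second column models the mean value, matching the two-regressor decomposition of the regression example.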
9
What if we believe that there are drifts?
10
Add more reference functions... Discrete cosine transform basis functions
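A discrete cosine transform drift basis can be generated as below. This is a sketch of the standard DCT-II-style set; the function name, the scan count, and the choice of 7 regressors are illustrative assumptions:

```python
import numpy as np

def dct_drift_basis(n_scans, order):
    """Discrete cosine transform basis functions used as drift regressors.
    A sketch of the standard DCT set (normalisation omitted for clarity)."""
    t = np.arange(n_scans)
    return np.column_stack(
        [np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans))
         for k in range(1, order + 1)])

drifts = dct_drift_basis(84, 7)  # 7 low-frequency cosine regressors
```

Each column is a slower or faster cosine; appending these columns to the design matrix lets the GLM absorb low-frequency drifts.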
11
…design matrix: Y = Xβ + ε, the data vector modelled with the enlarged design matrix (box-car plus drift basis functions) and an error vector.
12
…design matrix: Y = Xβ + ε, with Y the data vector, X the design matrix, β the parameters (the betas, here β1 to β9), and ε the error vector.
13
Fitting the model = finding some estimate of the betas. The raw fMRI time series is decomposed into a fitted signal, a fitted drift (the "high-pass filter", adjusting the data for low-Hz effects), and residuals (noise). The noise variance s² is the sum of the squared values of the residuals divided by the number of time points minus the number of estimated betas.
14
Fitting the model = finding some estimate of the betas.
Y = Xβ + ε. Finding the betas = minimising the sum of squares of the residuals:
‖Y − Xβ‖² = Σi [ yi − Xi β ]²
When the β are estimated, let's call them b; when ε is estimated, let's call it e; the estimated SD of ε, let's call it s.
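The least-squares recipe above can be sketched directly in numpy. The toy design and data below are made up for illustration; only the formulas come from the slide:

```python
import numpy as np

# Minimal least-squares sketch: estimate the betas b, the residuals e,
# and the noise variance s^2. Toy design and data, not the course's.
rng = np.random.default_rng(0)
n = 84
X = np.column_stack([np.sin(np.arange(n) / 5.0), np.ones(n)])  # [regressor, mean]
Y = X @ np.array([2.0, 100.0]) + rng.normal(0.0, 1.0, n)       # toy voxel series

b = np.linalg.pinv(X.T @ X) @ X.T @ Y          # minimises ||Y - X b||^2
e = Y - X @ b                                  # residual vector
s2 = (e @ e) / (n - np.linalg.matrix_rank(X))  # noise variance, df = n - p
```

Note that the residuals are orthogonal to every column of X: that is exactly what "minimising the sum of squares" produces.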
15
Take home...
- We put into our model regressors (or covariates) that represent how we think the signal varies (of interest and of no interest alike). BUT WHICH ONES TO INCLUDE?
- Coefficients (= parameters) are estimated by minimising the fluctuation (variability, variance) of the noise, i.e. of the residuals.
- Because the parameters depend on the scaling of the regressors included in the model, one should be careful when comparing manually entered regressors.
16
Plan
- Make sure we all know about the estimation (fitting) part...
- Make sure we understand t and F tests
- But what do we test exactly?
- An example – almost real
17
T test – one-dimensional contrasts – SPM{t}
A contrast = a weighted sum of parameters: c′β. With c′ = [ 1 0 0 0 0 0 0 0 ], compute c′b = 1×b1 + 0×b2 + 0×b3 + ... ; is c′b > 0? Divide by the estimated standard deviation of c′b:
T = contrast of estimated parameters / variance estimate = c′b / sqrt( s² c′(X′X)⁻ c )
18
How is this computed? (t-test)
Estimation [Y, X] -> [b, s]:
  Y = Xβ + ε, ε ~ N(0, σ²I) (Y: the data at one voxel)
  b = (X′X)⁺ X′Y (b: estimate of β) -> beta??? images
  e = Y − Xb (e: estimate of ε)
  s² = e′e / (n − p) (estimate of σ²; n: time points, p: parameters) -> 1 image, ResMS
Test [b, s², c] -> [c′b, t]:
  Var(c′b) = s² c′(X′X)⁺ c (computed for each contrast c, proportional to s²)
  t = c′b / sqrt( s² c′(X′X)⁺ c ) -> c′b images: spm_con???; t images: spm_t???
Under the null hypothesis H0: t ~ Student-t( df ), df = n − p.
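The slide's recipe translates almost line by line into code. A sketch, with made-up toy data (the function name and effect sizes are assumptions):

```python
import numpy as np

# Sketch of: b = (X'X)^+ X'Y, e = Y - Xb, s^2 = e'e/(n-p),
# t = c'b / sqrt(s^2 c'(X'X)^+ c).
def t_statistic(Y, X, c):
    n = len(Y)
    XtX_pinv = np.linalg.pinv(X.T @ X)
    b = XtX_pinv @ X.T @ Y                         # parameter estimates
    e = Y - X @ b                                  # residuals
    s2 = (e @ e) / (n - np.linalg.matrix_rank(X))  # ResMS
    return (c @ b) / np.sqrt(s2 * (c @ XtX_pinv @ c))

# Toy usage (data and effect sizes are made up):
rng = np.random.default_rng(42)
n = 84
X = np.column_stack([np.tile([0.0, 1.0], n // 2), np.ones(n)])
Y = X @ np.array([5.0, 100.0]) + rng.normal(0.0, 1.0, n)
t = t_statistic(Y, X, np.array([1.0, 0.0]))   # test the first beta
```

With a strong simulated effect the t value comes out large and positive, as expected for a signed, one-dimensional contrast.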
19
F-test: a reduced model
H0: β1 = 0, i.e. H0: the true model is X0. This (full) model [X1 X0]? Or the reduced one, X0?
With c′ = [ 1 0 0 0 0 0 0 0 ], T values become F values: F = T². Both "activations" and "deactivations" are tested, and voxel-wise p-values are halved.
F ~ ( S0² − S² ) / S², where S² is the residual variance of the full model and S0² that of the reduced model.
20
F-test: a reduced model, or...
Tests multiple linear hypotheses: does X1 model anything? H0: the true (reduced) model is X0. This (full) model [X1 X0], with residual variance S²? Or the reduced one, X0, with residual variance S0²?
F = (additional variance accounted for by the tested effects) / (error variance estimate); F ~ ( S0² − S² ) / S²
21
F-test: a reduced model, or... multi-dimensional contrasts?
c′ tests multiple linear hypotheses, e.g.: do the drift functions model anything? H0: β3-9 = (0 0 0 0 ...), i.e. the true model is X0 rather than the full model [X1 X0].
c′ = [ 0 0 1 0 0 0 0 0
       0 0 0 1 0 0 0 0
       0 0 0 0 1 0 0 0
       0 0 0 0 0 1 0 0
       0 0 0 0 0 0 1 0
       0 0 0 0 0 0 0 1 ]
22
How is this computed? (F-test)
Estimation [Y, X] -> [b, s]:
  Full model: Y = Xβ + ε, ε ~ N(0, σ²I); reduced model: Y = X0 β0 + ε0, ε0 ~ N(0, σ0²I).
Test [b, s², c] -> [ess, F]:
  F ~ ( s0² − s² ) / s² = (additional variance accounted for by the tested effects) / (error variance estimate)
  -> image spm_ess???; image of F: spm_F???
Under the null hypothesis: F ~ F( p − p0, n − p ).
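The full-versus-reduced comparison can be sketched numerically. The helper name and the toy data below are illustrative assumptions; the formula is the slide's extra-sum-of-squares F:

```python
import numpy as np

# Sketch of F = [(RSS0 - RSS) / (p - p0)] / [RSS / (n - p)],
# comparing a full model X_full against a reduced model X_reduced.
def f_statistic(Y, X_full, X_reduced):
    def rss_rank(X):
        b = np.linalg.pinv(X) @ Y
        e = Y - X @ b
        return e @ e, np.linalg.matrix_rank(X)
    rss, p = rss_rank(X_full)
    rss0, p0 = rss_rank(X_reduced)
    n = len(Y)
    return ((rss0 - rss) / (p - p0)) / (rss / (n - p))

# Toy usage: does the first regressor model anything? (made-up data)
rng = np.random.default_rng(7)
n = 84
x1 = np.tile([0.0, 1.0], n // 2)
X0 = np.ones((n, 1))                       # reduced model: mean only
X = np.column_stack([x1, np.ones(n)])      # full model
Y = X @ np.array([5.0, 100.0]) + rng.normal(0.0, 1.0, n)
F = f_statistic(Y, X, X0)
```

Dropping x1 from the model inflates the residual sum of squares, so the simulated F is large, exactly the "additional variance" intuition.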
23
T and F tests: take home...
- T tests are simple combinations of the betas; they are signed (b1 − b2 is different from b2 − b1).
- F tests can be viewed as testing for the additional variance explained by a larger model w.r.t. a simpler model, or
- F tests the sum of the squares of one or several combinations of the betas.
- When testing a "single contrast" with an F test, e.g. b1 − b2, the result is the same as for b2 − b1: it is exactly the square of the t test, testing for both positive and negative effects.
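The "F of a single contrast equals t squared" point can be checked numerically. A sketch on made-up data: the reduced model below imposes b1 = b2 by merging the two columns, which is the constraint the contrast b1 − b2 tests:

```python
import numpy as np

# Verify: for a single contrast (b1 - b2), the extra-sum-of-squares F
# equals the square of the t statistic. All data are made up.
rng = np.random.default_rng(1)
n = 60
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])
Y = X @ np.array([1.0, 0.5, 10.0]) + rng.normal(0.0, 1.0, n)
c = np.array([1.0, -1.0, 0.0])             # contrast b1 - b2

XtX_inv = np.linalg.pinv(X.T @ X)
b = XtX_inv @ X.T @ Y
e = Y - X @ b
s2 = (e @ e) / (n - 3)
t = (c @ b) / np.sqrt(s2 * (c @ XtX_inv @ c))

# Reduced model under H0: b1 = b2 (merge the two columns).
X0 = np.column_stack([X[:, 0] + X[:, 1], np.ones(n)])
e0 = Y - X0 @ (np.linalg.pinv(X0) @ Y)
F = ((e0 @ e0 - e @ e) / 1) / (e @ e / (n - 3))
```

Swapping the sign of c flips t but leaves t², and therefore F, unchanged: the F test is two-tailed.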
24
Plan
- Make sure we all know about the estimation (fitting) part...
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- An example – almost real
25
« Additional variance » : Again Independent contrasts
26
« Additional variance »: again, correlated regressors. For example, green: subject age; yellow: subject score. Testing for the green.
27
« Additional variance »: again, correlated regressors. Testing for the red.
28
« Additional variance »: again. Entirely correlated regressors? Non estimable! Testing for the green.
29
« Additional variance »: again. Entirely correlated regressors? Non estimable! Testing for the green and the yellow together: if significant, it could be G or Y!
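The non-estimability of entirely correlated regressors shows up directly as rank deficiency of the design matrix. A toy sketch (the regressor shapes are made up; "green" and "yellow" stand in for the slide's colours):

```python
import numpy as np

# When two regressors are entirely correlated (here "green" is an exact
# copy of "yellow"), the design loses rank: their individual betas are
# no longer estimable, only combinations such as their sum are.
n = 60
yellow = np.sin(np.arange(n) / 4.0)
green = yellow.copy()                      # entirely correlated
X = np.column_stack([green, yellow, np.ones(n)])

rank = np.linalg.matrix_rank(X)
non_estimable = rank < X.shape[1]          # True: individual contrasts fail
```

A joint (green and yellow) test can still be significant, but it cannot say which of the two regressors carries the effect.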
30
An example, real: testing for the first regressor, T max = 9.8.
31
Including the movement parameters in the model: testing for the first regressor, the activation is gone!
33
Convolution model: design and contrast, SPM(t) or SPM(F), fitted and adjusted data.
34
Plan
- Make sure we all know about the estimation (fitting) part...
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- An example – almost real
35
A real example (almost!)
Factorial design with 2 factors: modality and category
- 2 levels for modality (e.g. Visual/Auditory)
- 3 levels for category (e.g. 3 categories of words)
Experimental design and design matrix: one column per condition, V, A, C1, C2, C3.
36
Asking ourselves some questions...
Design matrix columns: V, A, C1, C2, C3 (+ constant).
- Test V > A: c = [ 1 -1 0 0 0 0 ]
- Test C1 > C2: c = [ 0 0 1 -1 0 0 ]
- Test C1, C2, C3? (F): c = [ 0 0 1 0 0 0
                              0 0 0 1 0 0
                              0 0 0 0 1 0 ]
- Test the interaction MxC?
But: the design matrix is not orthogonal, many contrasts are non estimable, and the interactions MxC are not modelled.
37
Modelling the interactions
38
Asking ourselves some questions...
Design matrix columns: V C1, A C1, V C2, A C2, V C3, A C3 (+ constant).
- Test V > A: c = [ 1 -1 1 -1 1 -1 0 ]
- Test C1 > C2: c = [ 1 1 -1 -1 0 0 0 ]
- Test the category effect (F): c = [ 1 1 -1 -1 0 0 0
                                      0 0 1 1 -1 -1 0
                                      1 1 0 0 -1 -1 0 ]
- Test the interaction MxC (F): c = [ 1 -1 -1 1 0 0 0
                                      0 0 1 -1 -1 1 0
                                      1 -1 0 0 -1 1 0 ]
The design matrix is orthogonal, all contrasts are estimable, and the interactions MxC are modelled. If there is no interaction...? Then the model is too "big"!
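The contrast vectors on this slide can be sanity-checked mechanically. A sketch (variable names are mine; the weights and the column order V·C1, A·C1, V·C2, A·C2, V·C3, A·C3, constant come from the slide, and only two of the interaction rows are used here):

```python
import numpy as np

# Slide's contrasts for the 2x3 (modality x category) cell-means design.
c_v_gt_a = np.array([1, -1, 1, -1, 1, -1, 0])    # V > A
c_c1_gt_c2 = np.array([1, 1, -1, -1, 0, 0, 0])   # C1 > C2
c_mxc = np.array([[1, -1, -1, 1, 0, 0, 0],       # interaction MxC (F test)
                  [0, 0, 1, -1, -1, 1, 0]])

# Checks one can run on any such contrast: each row sums to zero
# (the constant is untouched), and the interaction rows are
# orthogonal to both main-effect contrasts.
row_sums = c_mxc.sum(axis=1)
mxc_vs_modality = c_mxc @ c_v_gt_a
mxc_vs_category = c_mxc @ c_c1_gt_c2
```

The orthogonality of the interaction rows to the main effects is what makes the three questions separable in this design.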
39
Asking ourselves some questions... with a more flexible model
(Each condition is modelled with two regressors, e.g. including a delay term, giving 13 columns.)
- Test C1 > C2? It becomes "test C1 different from C2": from c = [ 1 1 -1 -1 0 0 0 ]
  to c = [ 1 0 1 0 -1 0 -1 0 0 0 0 0 0
           0 1 0 1 0 -1 0 -1 0 0 0 0 0 ]
  It becomes an F test!
- Test V > A? c = [ 1 0 -1 0 1 0 -1 0 1 0 -1 0 0 ] is possible, but is OK only if the regressors coding for the delay are all equal.
40
Toy example: take home...
- F tests have to be used when:
  - testing for both > 0 and < 0 effects,
  - testing for more than 2 levels in factorial designs,
  - conditions are modelled with more than one regressor.
- F tests can be viewed as testing for:
  - the additional variance explained by a larger model w.r.t. a simpler model, or
  - the sum of the squares of one or several combinations of the betas (here the F test on b1 − b2 is the same as on b2 − b1, but two-tailed compared to a t test).