

2 SPM 2002 — Jean-Baptiste Poline, Orsay SHFJ-CEA. A priori complicated … a posteriori very simple! [Figure: geometric preview of the course — the data Y, the fit Xb and the error e in the space of X spanned by the regressors C1, C2 (and C3), with the projections that define contrasts and tests; plus the estimator b = (X'X)+ X'Y.]

3 SPM course - 2002: LINEAR MODELS and CONTRASTS. Topics: the RFT; hammering a Linear Model; use for normalisation; T and F tests (orthogonal projections). Jean-Baptiste Poline, Orsay SHFJ-CEA — www.madic.org

4 [Overview figure:] realignment & coregistration → smoothing → normalisation (anatomical reference, spatial filter) → General Linear Model (design matrix; linear fit → statistical image; adjusted signal) → your question: a contrast → Statistical Map → uncorrected p-values, or Random Field Theory → corrected p-values images.

5 Plan
- Make sure we know all about the estimation (fitting) part....
- Make sure we understand the testing procedures: t and F tests
- A bad model... and a better one
- Correlation in our model: do we mind?
- A (nearly) real example

6 fMRI temporal series → statistical image (SPM). One voxel = one test (t, F, ...): the voxel time course (amplitude vs. time) is modelled with the General Linear Model → fitting → statistical image.

7 Regression example… voxel time series = [box-car reference function] + [mean value] + [error]: fit the GLM. [Figure: the data fluctuate around 90–110; the box-car regressor spans about ±10; the fit residual about ±2.]

8 Regression example… the same decomposition: voxel time series = β1 × box-car reference function + β2 × mean value + error. [Figure: data and mean around 90–110; scaled box-car and error around ±2.]

9 …revisited: matrix form. For each scan s: Ys = β1 f(ts) + β2 × 1 + εs.

10 Box-car regression: design matrix… Y = Xβ + ε: data vector (voxel time series) = design matrix × parameters + error vector.
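
As a minimal sketch of this Y = Xβ + ε setup (the scan count and block length below are invented for illustration, not taken from the course data), the box-car design matrix can be built like this:

```python
import numpy as np

n_scans = 96                                     # hypothetical number of time points
block = 8                                        # hypothetical block length (in scans)
boxcar = np.tile(np.r_[np.zeros(block), np.ones(block)], n_scans // (2 * block))
constant = np.ones(n_scans)                      # models the mean value

X = np.column_stack([boxcar, constant])          # design matrix, shape (96, 2)
```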

11 Add more reference functions... Discrete cosine transform basis functions
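
A sketch of such low-frequency reference functions, in the spirit of SPM's discrete cosine transform basis (this is a reimplementation for illustration, not the SPM function itself):

```python
import numpy as np

def dct_basis(n_scans: int, order: int) -> np.ndarray:
    """First `order` discrete cosine transform basis functions over n_scans points."""
    n = np.arange(n_scans)
    C = np.empty((n_scans, order))
    C[:, 0] = 1.0 / np.sqrt(n_scans)             # constant term
    for k in range(1, order):
        C[:, k] = np.sqrt(2.0 / n_scans) * np.cos(np.pi * (2 * n + 1) * k / (2 * n_scans))
    return C

drift = dct_basis(96, 8)                         # e.g. 8 slow regressors to append to X
```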

12 …design matrix. Y = Xβ + ε: Y the data vector, X the design matrix, β the parameters (the betas, here 1 to 9), ε the error vector.

13 Fitting the model = finding some estimate of the betas = minimising the sum of squares of the error. [Figure: raw fMRI time series; the series adjusted for low-Hz effects; the fitted "high-pass filter"; the fitted box-car; the residuals.] s2 = sum of the squared residuals / (number of time points minus the number of estimated betas).
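
A runnable sketch of the fit, with simulated data (the true betas and the noise below are invented): the betas minimise the sum of squares of the error, and s2 divides the residual sum of squares by the degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 96
boxcar = np.tile(np.r_[np.zeros(8), np.ones(8)], 6)
X = np.column_stack([boxcar, np.ones(n_scans)])                 # box-car + mean
Y = X @ np.array([2.0, 100.0]) + rng.standard_normal(n_scans)   # simulated voxel

b = np.linalg.pinv(X.T @ X) @ X.T @ Y        # b = (X'X)+ X'Y
e = Y - X @ b                                # residuals e = Y - Xb
n, p = X.shape
s2 = e @ e / (n - p)                         # s2 = e'e / (time points - estimated betas)
```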

14 Summary...
- We put in our model regressors (or covariates) that represent how we think the signal is varying (of interest and of no interest alike).
- Coefficients (= estimated parameters, or betas) are found such that the sum of the weighted regressors is as close as possible to our data.
- "As close as possible" = such that the sum of squares of the residuals is minimal.
- These estimated parameters (the betas) depend on the scaling of the regressors: keep this in mind when comparing them!
- The residuals, their sum of squares, and the tests (t, F) do not depend on the scaling of the regressors.

15 Plan
- Make sure we all know about the estimation (fitting) part....
- Make sure we understand t and F tests
- A bad model... and a better one
- Correlation in our model: do we mind?
- A (nearly) real example

16 T test - one dimensional contrasts - SPM{t}. A contrast = a linear combination of parameters: c'β. With c' = [1 0 0 0 0 0 0 0], test H0: c'b > 0? Here: box-car amplitude > 0? = b1 > 0? (b1: estimation of β1) = 1×b1 + 0×b2 + 0×b3 + ... > 0?
T = contrast of estimated parameters / variance estimate = c'b / sqrt(s2 × c'(X'X)+c).

17 How is this computed? (t-test)
Y = Xβ + ε, ε ~ N(0, σ2 I) (Y: the data at one position)
Estimation [Y, X] → [b, s]:
b = (X'X)+ X'Y (b: estimation of β) → beta??? images
e = Y − Xb (e: estimation of ε)
s2 = e'e / (n − p) (s: estimation of σ; n: time points, p: parameters) → 1 image ResMS
Test [b, s2, c] → [c'b, t]:
Var(c'b) = s2 × c'(X'X)+c (computed for each contrast c)
t = c'b / sqrt(s2 × c'(X'X)+c) (c'b → images spm_con???; the t images → images spm_t???)
Under the null hypothesis H0: t ~ Student(df), df = n − p.
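
The same recipe as a self-contained sketch on simulated data (the design and true betas are invented; scipy is used only for the Student distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans = 96
boxcar = np.tile(np.r_[np.zeros(8), np.ones(8)], 6)
X = np.column_stack([boxcar, np.ones(n_scans)])        # box-car + mean
Y = X @ np.array([2.0, 100.0]) + rng.standard_normal(n_scans)

n, p = X.shape
b = np.linalg.pinv(X.T @ X) @ X.T @ Y                  # b = (X'X)+ X'Y
e = Y - X @ b
s2 = e @ e / (n - p)                                   # s2 = e'e / (n - p)

c = np.array([1.0, 0.0])                               # box-car amplitude > 0 ?
var_cb = s2 * c @ np.linalg.pinv(X.T @ X) @ c          # Var(c'b) = s2 c'(X'X)+ c
t = (c @ b) / np.sqrt(var_cb)                          # t = c'b / sqrt(Var(c'b))
p_one_sided = stats.t.sf(t, df=n - p)                  # H0: t ~ Student(n - p)
```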

18 F test (SPM{F}): tests multiple linear hypotheses — does X1 model anything? H0: the true model is X0 (a reduced model). This model (the full X = [X1 X0], residual variance S2)? Or this one (X0 alone, residual variance S02 ≥ S2)?
F ~ (S02 − S2) / S2 = additional variance accounted for by the tested effects / error variance estimate.

19 F test (SPM{F}): tests multiple linear hypotheses — a reduced model, or multi-dimensional contrasts? Ex: does the HPF model anything? H0: the true model is X0, i.e. H0: β3-9 = (0 0 0 0 ...). Compared with the one-dimensional c' = [+1 0 0 0 0 0 0 0], the test H0: c'b = 0 now uses a contrast matrix, one row per tested beta of X1 (β3-9):
c' = [0 0 1 0 0 0 0 0 0
      0 0 0 1 0 0 0 0 0
      ...
      0 0 0 0 0 0 0 0 1]
This model (X0 plus X1)? Or this one (X0 alone)?

20 How is this computed? (F-test)
Full model: Y = Xβ + ε, ε ~ N(0, σ2 I) → Estimation [Y, X] → [b, s]
Reduced model: Y = X0 β0 + ε0, ε0 ~ N(0, σ02 I), X0: the reduced X → Estimation [Y, X0] → [b0, s0] (not really computed like that):
b0 = (X0'X0)+ X0'Y
e0 = Y − X0 b0 (e0: estimation of ε0)
s02 = e0'e0 / (n − p0) (s0: estimation of σ0; n: time points, p0: parameters of X0)
Test [b, s, c] → [ess, F]:
F = [(e0'e0 − e'e) / (p − p0)] / s2 = additional variance accounted for by the tested effects / error variance estimate
→ image of (e0'e0 − e'e)/(p − p0): spm_ess???; image of F: spm_F???
Under the null hypothesis: F ~ F(df1, df2), df1 = p − p0, df2 = n − p.
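
A runnable sketch of the F computation on simulated data (the cosine drift columns below stand in for the tested X1 regressors; all sizes and betas are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 96
boxcar = np.tile(np.r_[np.zeros(8), np.ones(8)], 6)
t_idx = np.arange(n)
drift = np.column_stack([np.cos(np.pi * t_idx * k / n) for k in (1, 2, 3)])
X = np.column_stack([boxcar, drift, np.ones(n)])    # full model
X0 = np.column_stack([boxcar, np.ones(n)])          # reduced model (H0: drift betas = 0)
Y = X @ np.array([2.0, 0.5, 0.3, 0.2, 100.0]) + rng.standard_normal(n)

def rss(Xm, Y):
    """Residual sum of squares e'e of the least-squares fit of Xm."""
    b = np.linalg.pinv(Xm.T @ Xm) @ Xm.T @ Y
    return np.sum((Y - Xm @ b) ** 2)

p, p0 = X.shape[1], X0.shape[1]
s2 = rss(X, Y) / (n - p)
F = ((rss(X0, Y) - rss(X, Y)) / (p - p0)) / s2      # extra variance / error variance
p_value = stats.f.sf(F, p - p0, n - p)              # H0: F ~ F(p - p0, n - p)
```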

21 Plan
- Make sure we all know about the estimation (fitting) part....
- Make sure we understand t and F tests
- A bad model... and a better one
- Correlation in our model: do we mind?
- A (nearly) real example

22 A bad model... [Figure: true signal and observed signal (---); model (green, peak at 6 s) and TRUE signal (blue, peak at 3 s); fitting (b1 = 0.2, mean = 0.11); noise (still contains some signal).] => the test for the green regressor is not significant.

23  =+ Y X  b 1 = 0.22 b 2 = 0.11 A bad model... Residual Variance = 0.3 P( b1 > 0 ) = 0.1 (t-test) P( b1 = 0 ) = 0.2 (F-test)

24 A « better » model... Add a second regressor: the red regressor is the temporal derivative of the green regressor. [Figure: true signal + observed signal; model (green and red) and true signal (blue ---); global fit (blue) and partial fits (green & red); adjusted and fitted signal; noise (a smaller variance).] => test of the green regressor significant; => test of the red regressor very significant; => F test very significant.
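
A sketch of how the extra regressor could be built (the reference function here is a plain box-car for simplicity, not the course's haemodynamic shape):

```python
import numpy as np

n = 96
green = np.tile(np.r_[np.zeros(8), np.ones(8)], 6)    # main reference function
red = np.gradient(green)                              # its temporal derivative
X = np.column_stack([green, red, np.ones(n)])         # the 3-beta model of slide 25
```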

25 A better model... Y = Xβ + ε with b1 = 0.22, b2 = 2.15, b3 = 0.11. Residual variance = 0.2. P(b1 > 0) = 0.07 (t test); P([b1 b2] ≠ [0 0]) = 0.000001 (F test).

26 Flexible models : Fourier Basis

27 Flexible models : Gamma Basis

28 Summary... (2)
- Rather, test flexible models when there is little a priori information, and precise ones when there is a lot of a priori information.
- The residuals should be looked at... (non-random structure?)
- In general, use the F-tests to look for an overall effect, then look at the betas or the adjusted signal to characterise the origin of the signal.
- Interpreting the test on a single parameter (one function) can be very confusing: cf. the delay-or-magnitude situation.

29 Plan
- Make sure we all know about the estimation (fitting) part....
- Make sure we understand t and F tests
- A bad model... and a better one
- Correlation in our model: do we mind?
- A (nearly) real example?

30 Correlation between regressors. [Figure: true signal; model (green and red); fitting (blue: global fit); noise.]

31  =+ Y X  b 1 = 0.79 b 2 = 0.85 b3 = 0.06 Correlation between regressors Residual var. = 0.3 P( b1 > 0 ) = 0.08 (t test) P( b2 > 0 ) = 0.07 (t test) P( [b1 b2] 0 ) = 0.002 (F test) =

32 Correlation between regressors - 2. The red regressor has been orthogonalised with respect to the green one = remove everything that can correlate with the green regressor. [Figure: true signal; model (green and red); fit; noise.]

33  =+ Y X  b 1 = 1.47 b 2 = 0.85 b3 = 0.06 Residual var. = 0.3 P( b1 > 0 ) = 0.0003 (t test) P( b2 > 0 ) = 0.07 (t test) P( [b1 b2] <> 0 ) = 0.002 (F test) See « explore design » Correlation between regressors -2 0.79 0.85 0.06

34 Design orthogonality: « explore design ». [Figure: the pairwise orthogonality matrix of the design — black = completely correlated, white = completely orthogonal.] Beware: when there are more than 2 regressors (C1, C2, C3, ...), you may think that there is little correlation (light grey) between them, but C1 + C2 + C3 may be correlated with C4 + C5.

35 Implicit or explicit decorrelation (or orthogonalisation). [Figure: geometric view — Y, the fit Xb and the error e in the space of X spanned by C1 and C2; the projection LC2 gives the test of C2 in the implicit model, the projection LC1 the test of C1 in the explicit model.] This GENERALISES when testing several regressors (F tests). See Andrade et al., NeuroImage, 1999.

36 "Completely" correlated regressors. Y = Xb + e with X = [Cond 1 | Cond 2 | Mean], where Mean = C1 + C2, e.g.
X = [1 0 1
     0 1 1
     1 0 1
     0 1 1]
Parameters are not unique in general! Some contrasts have no meaning: NON ESTIMABLE. Example here: c' = [1 0 0] is not estimable (= no specific information in the first regressor); c' = [1 -1 0] is estimable.
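
A sketch of checking this numerically: a contrast c'β is estimable iff c lies in the row space of X, i.e. c'(X+X) = c'. The small design below mirrors the slide's Mean = C1 + C2 degeneracy (the scan ordering is invented).

```python
import numpy as np

cond1 = np.array([1, 0, 1, 0, 1, 0], dtype=float)
cond2 = 1.0 - cond1
X = np.column_stack([cond1, cond2, cond1 + cond2])   # C1, C2, Mean = C1 + C2

print(np.linalg.matrix_rank(X))                      # 2 < 3 columns: rank deficient

P = np.linalg.pinv(X) @ X                            # projector onto the row space of X
for c in (np.array([1.0, 0.0, 0.0]), np.array([1.0, -1.0, 0.0])):
    print(c, "estimable:", bool(np.allclose(c @ P, c)))   # False, then True
```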

37 Summary... (3)
- Using t tests, we are implicitly testing the additional effect only, so we may miss the signal if there is some correlation in the model.
- Orthogonalisation is not generally needed: the parameters and the test on the changed (orthogonalised) regressor don't change.
- It is always simpler (when possible!) to have orthogonal (uncorrelated) regressors.
- In case of correlation, use F-tests to see the overall significance. There is generally no way to decide to which regressor the « common » part shared by two regressors should be attributed.
- If there is correlation and you need to orthogonalise a part of the design matrix, there is no need to re-fit a new model: only the contrast should change.

38 Plan
- Make sure we all know about the estimation (fitting) part....
- Make sure we understand t and F tests
- A bad model... and a better one
- Correlation in our model: do we mind?
- A (nearly) real example

39 A real example (almost!). Factorial design with 2 factors: modality and category. 2 levels for modality (e.g. Visual/Auditory), 3 levels for category (e.g. 3 categories of words). [Experimental design and design matrix, columns V A C1 C2 C3.]

40 Asking ourselves some questions... (design matrix columns: V A C1 C2 C3)
The design matrix is not orthogonal; many contrasts are non-estimable; the interactions MxC are not modelled.
Test V > A: c = [1 -1 0 0 0 0]
Test C1 > C2: c = [0 0 1 -1 0 0]
Test the modality factor: c = ? Test the category factor: c = ?
Test the interaction MxC? Two ways: 1- write a contrast c and test c'b = 0; 2- select columns of X for the model under the null hypothesis. For the factors and the interaction, use way 2- (the contrast manager allows that).

41 Modelling the interactions

42 Asking ourselves some questions... (design matrix columns: V.C1 A.C1 V.C2 A.C2 V.C3 A.C3, plus a constant)
The design matrix is orthogonal; all contrasts are estimable; the interactions MxC are modelled. If there is no interaction...? The model is too "big".
Test V > A: c = [1 -1 1 -1 1 -1 0]
Test C1 > C2: c = [1 1 -1 -1 0 0 0]
Test the differences between categories (F test):
c = [1 1 -1 -1 0 0 0
     0 0 1 1 -1 -1 0]
Test the interaction MxC (F test):
c = [1 -1 -1 1 0 0 0
     0 0 1 -1 -1 1 0
     1 -1 0 0 -1 1 0]
Test everything in the category factor, leaving out modality (F test):
c = [1 1 0 0 0 0 0
     0 0 1 1 0 0 0
     0 0 0 0 1 1 0]
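
The same contrasts as numpy arrays, as a sketch (the column order V.C1 A.C1 V.C2 A.C2 V.C3 A.C3 plus a constant is read off the design matrix above):

```python
import numpy as np

c_V_gt_A      = np.array([1, -1, 1, -1, 1, -1, 0])     # main effect of modality (t test)
c_C1_gt_C2    = np.array([1, 1, -1, -1, 0, 0, 0])      # C1 > C2 (t test)
c_categories  = np.array([[1, 1, -1, -1, 0, 0, 0],     # C1 vs C2
                          [0, 0, 1, 1, -1, -1, 0]])    # C2 vs C3  -> F test
c_interaction = np.array([[1, -1, -1, 1, 0, 0, 0],     # V-A difference: C1 vs C2
                          [0, 0, 1, -1, -1, 1, 0]])    # V-A difference: C2 vs C3 -> F test
# the slide's third interaction row [1 -1 0 0 -1 1 0] is the sum of these two,
# so two rows already span the interaction subspace
```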

43 Asking ourselves some questions... with a more flexible model (each condition modelled with two regressors, so 12 columns plus a constant).
Test C1 > C2? It becomes "test C1 different from C2?": from c = [1 1 -1 -1 0 0 0] to
c = [1 0 1 0 -1 0 -1 0 0 0 0 0 0
     0 1 0 1 0 -1 0 -1 0 0 0 0 0]
— it becomes an F test!
Test V > A? c = [1 0 -1 0 1 0 -1 0 1 0 -1 0 0] is possible, but it is OK only if the regressors coding for the delay are all equal.

44 SPM course - 2002: LINEAR MODELS and CONTRASTS. Topics: the RFT; hammering a Linear Model; use for normalisation; T and F tests (orthogonal projections). "We don't use that one." Jean-Baptiste Poline, Orsay SHFJ-CEA — www.madic.org

