SPM short course – May 2009 Linear Models and Contrasts


SPM short course – May 2009 Linear Models and Contrasts. Jean-Baptiste Poline, Neurospin, I2BM, CEA Saclay, France. Among the various tools that SPM uses to analyse brain images, I will concentrate on two aspects: the linear model that is fitted to the data, and the tests. In particular, I wish to give you enough information so that you can understand some of the difficulties that may arise when using linear models.

[Figure: the SPM analysis pipeline — images undergo realignment & coregistration, normalisation (to an anatomical reference) and smoothing (a spatial filter); the General Linear Model provides a linear fit given the design matrix; your question is a contrast, which yields a statistical image (statistical map); Random Field Theory turns uncorrected p-values into corrected p-values; adjusted data can be extracted.]

Plan
- Make sure we know all about the estimation (fitting) part ....
- Make sure we understand the testing procedures: t- and F-tests
- But what do we test exactly?
- An example, almost real

One voxel = one test (t, F, ...). [Figure: a temporal series — the fMRI voxel time course (amplitude over time) — is fitted with the General Linear Model; the resulting statistic forms the statistical image (SPM).]

Regression example ... [Figure: a voxel time series (values around 90–110) = b1 × a box-car reference function (±10) + b2 × a mean value, plus error. Fitting the GLM gives b1 = 1, b2 = 100.]

Regression example, continued ... [Figure: the same voxel time series, but the box-car reference function is scaled to ±2 instead of ±10; fitting the GLM now gives b1 = 5, b2 = 100.] The data and the fit are unchanged; only the scaling of the regressor, and hence the value of b1, differs.

... revisited: matrix form. Y = b1 × f(t) + b2 × 1 + e, where f(t) is the box-car reference function, 1 the constant (mean) regressor, and e the error.

Box-car regression: design matrix ... Y = X × b + e, where Y is the data vector (the voxel time series), X the design matrix (box-car and constant columns), b = [b1; b2] the parameters, and e the error vector.

Add more reference functions: for example, discrete cosine transform (DCT) basis functions to capture low-frequency effects.
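To make the DCT regressors concrete, here is a minimal Python/NumPy sketch of such a basis set, mirroring what SPM's spm_dctmtx computes (the function name dct_basis and the sizes are illustrative):

```python
import numpy as np

def dct_basis(n_scans, n_basis):
    """Orthonormal discrete cosine transform basis set: a constant column
    followed by cosines of increasing frequency (cf. spm_dctmtx)."""
    n = np.arange(n_scans)
    C = np.zeros((n_scans, n_basis))
    C[:, 0] = 1.0 / np.sqrt(n_scans)                  # constant term
    for k in range(1, n_basis):
        C[:, k] = np.sqrt(2.0 / n_scans) * np.cos(
            np.pi * (2 * n + 1) * k / (2 * n_scans))  # k-th cosine regressor
    return C

X_drift = dct_basis(n_scans=128, n_basis=8)           # 8 slow regressors
```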

... design matrix. Y = X × b + e: the data vector Y equals the design matrix X (here with nine columns: box-car, constant, and the added reference functions) times the betas b1 to b9, plus the error vector e.

Fitting the model = finding some estimate of the betas = minimising the sum of squares of the residuals, S. [Figure: raw fMRI time series; the same series adjusted for low-frequency effects; the fitted box-car; the fitted "high-pass filter" (DCT set); the residuals.] Dividing S, the sum of the squared residuals, by the number of time points minus the number of estimated betas gives the noise variance estimate: s² = S / (n − p).
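As a worked illustration of this estimation step, a short Python/NumPy sketch (block length, noise level and seed are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128                                        # number of scans (illustrative)
boxcar = np.where((np.arange(n) // 8) % 2 == 0, -1.0, 1.0)  # box-car regressor
X = np.column_stack([boxcar, np.ones(n)])      # design matrix: box-car + mean
Y = X @ np.array([1.0, 100.0]) + rng.normal(0, 2, n)        # simulated voxel data

b = np.linalg.pinv(X) @ Y                      # OLS estimate: b = (X'X)^+ X'Y
e = Y - X @ b                                  # residuals
p = np.linalg.matrix_rank(X)
s2 = e @ e / (n - p)                           # s^2 = S / (n - p)
```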

Take home ...
- We put in our model regressors (or covariates) that represent how we think the signal varies (of interest and of no interest alike). THE QUESTION IS: WHICH ONES TO INCLUDE?
- Coefficients (= parameters) are estimated by Ordinary Least Squares (OLS), i.e. by minimising the fluctuation (variability, variance) of the noise, the residuals.
- Because the parameters depend on the scaling of the regressors included in the model, one should be careful when comparing manually entered regressors.
- The residuals, their sum of squares and the resulting tests (t, F) do not depend on the scaling of the regressors.

Plan
- Make sure we all know about the estimation (fitting) part ....
- Make sure we understand t and F tests
- But what do we test exactly?
- An example, almost real

t-test: one-dimensional contrasts, SPM{t}. A contrast is a linear combination of the parameters: c' × b. With c' = [1 0 0 0 0 0 0 0 ...], the question "box-car amplitude > 0?" becomes "b1 > 0?": compute 1×b1 + 0×b2 + 0×b3 + 0×b4 + 0×b5 + ... and divide by its estimated standard deviation:

T = c'b / sqrt(s² c'(X'X)⁺c)   (contrast of estimated parameters over its variance estimate) → SPM{t}

How is this computed? (t-test)
Y = Xb + e, e ~ s² N(0, I)   (Y: the data at one position)
b = (X'X)⁺X'Y   (b: estimate of b → beta??? images)
e = Y − Xb   (e: estimate of e)
s² = e'e / (n − p)   (s: estimate of s; n: time points, p: parameters → 1 image, ResMS)
Estimation: [Y, X] → [b, s]. Test: [b, s², c] → [c'b, t]
Var(c'b) = s² c'(X'X)⁺c   (computed for each contrast c, proportional to s²)
t = c'b / sqrt(s² c'(X'X)⁺c)   (c'b → images spm_con???; the t images → images spm_t???)
Under the null hypothesis H0: t ~ Student-t(df), df = n − p
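The recipe above translates almost line by line into code; a hedged Python/SciPy sketch (the helper name t_contrast is mine, and the one-sided p-value is a choice):

```python
import numpy as np
from scipy import stats

def t_contrast(Y, X, c):
    """t-statistic and one-sided p-value for the contrast c'b."""
    n = len(Y)
    XtX_pinv = np.linalg.pinv(X.T @ X)
    b = XtX_pinv @ X.T @ Y                 # b = (X'X)^+ X'Y
    e = Y - X @ b                          # residuals
    df = n - np.linalg.matrix_rank(X)      # df = n - p
    s2 = e @ e / df                        # s^2 = e'e / (n - p)
    var_cb = s2 * (c @ XtX_pinv @ c)       # Var(c'b) = s^2 c'(X'X)^+ c
    t = (c @ b) / np.sqrt(var_cb)
    return t, stats.t.sf(t, df)            # H0: t ~ Student-t(df)

# e.g. with X, Y from the earlier sketch: t_contrast(Y, X, np.array([1.0, 0.0]))
```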

F-test: a reduced model, or ... Tests multiple linear hypotheses: does X1 model anything? H0: the true (reduced) model is X0. [Figure: this (full) model, X = [X1 X0], with residual sum of squares S²; or this one, X0 alone, with residual sum of squares S0²?] F ~ (S0² − S²) / S²: the additional variance accounted for by the tested effects, over the error variance estimate.

F-test: a reduced model, or ... multi-dimensional contrasts? Tests multiple linear hypotheses. Example: does the DCT set model anything? H0: b3–9 = (0 0 0 0 ...), i.e. the true model is X0 (without the DCT columns X1). Equivalently, test H0: c' × b = 0 with

c' = [0 0 1 0 0 0 0 0
      0 0 0 1 0 0 0 0
      0 0 0 0 1 0 0 0
      0 0 0 0 0 1 0 0
      0 0 0 0 0 0 1 0
      0 0 0 0 0 0 0 1]   → SPM{F}

How is this computed? (F-test)
Additional variance accounted for by the tested effects, over the error variance estimate.
Estimation: [Y, X] → [b, s];  [Y, X0] → [b0, s0]
Y = Xb + e, e ~ N(0, s² I)
Y = X0 b0 + e0, e0 ~ N(0, s0² I)   (X0: reduced version of X)
b0 = (X0'X0)⁺X0'Y,  e0 = Y − X0 b0   (e0: estimate of e0)
s0² = e0'e0 / (n − p0)   (s0: estimate of s0; n: time points, p0: parameters of X0)
Test: [b, s, c] → [ess, F];  F ~ (s0² − s²) / s²   (→ image spm_ess???; image of F: spm_F???)
Under the null hypothesis: F ~ F(p − p0, n − p)
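A compact Python/SciPy sketch of this full-versus-reduced comparison (the helper name f_test is mine; degrees of freedom are taken from matrix ranks):

```python
import numpy as np
from scipy import stats

def f_test(Y, X, X0):
    """F-test of the full model X against the reduced model X0."""
    def rss(M):                                   # residual sum of squares
        e = Y - M @ (np.linalg.pinv(M) @ Y)
        return e @ e
    n = len(Y)
    p, p0 = np.linalg.matrix_rank(X), np.linalg.matrix_rank(X0)
    S2, S02 = rss(X), rss(X0)
    F = ((S02 - S2) / (p - p0)) / (S2 / (n - p))  # extra variance / error variance
    return F, stats.f.sf(F, p - p0, n - p)        # H0: F ~ F(p - p0, n - p)
```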

t and F tests: take home ...
- t-tests are simple combinations of the betas; they are signed (b1 − b2 is different from b2 − b1).
- F-tests can be viewed as testing for the additional variance explained by a larger model with respect to a simpler one, or as testing the sum of squares of one or several combinations of the betas.
- When testing a "single contrast" with an F-test, e.g. b1 − b2, the result is the same as for b2 − b1: it is exactly the square of the t-test, testing for both positive and negative effects.

Plan
- Make sure we all know about the estimation (fitting) part ....
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- An example, almost real

« Additional variance », again: independent contrasts. [Figure: two orthogonal regressors.]

« Additional variance », again: testing for the green regressor, with correlated regressors — for example green: subject age, yellow: subject score.

« Additional variance », again: testing for the red regressor, with correlated contrasts.

« Additional variance », again: testing for the green regressor. Entirely correlated contrasts? Non estimable!

« Additional variance », again: testing for the green and yellow regressors together. If significant, it could be green or yellow! Entirely correlated contrasts? Non estimable!

Plan
- Make sure we all know about the estimation (fitting) part ....
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- An example, almost real

A real example (almost!). Experimental design: a factorial design with 2 factors, modality and category — 2 levels for modality (e.g. Visual/Auditory: V, A) and 3 levels for category (e.g. 3 categories of words: C1, C2, C3). [Figure: the corresponding design matrix, with columns V, A, C1, C2, C3.]

Asking ourselves some questions ... (design matrix columns: V A C1 C2 C3, plus a constant)
Test C1 > C2: c = [0 0 1 -1 0 0]
Test V > A: c = [1 -1 0 0 0 0]
Test C1, C2, C3? (F):
c = [0 0 1 0 0 0
     0 0 0 1 0 0
     0 0 0 0 1 0]
Test the interaction M×C? The design matrix is not orthogonal, many contrasts are non estimable, and the interactions M×C are not modelled.

Modelling the interactions

Asking ourselves some questions ... (design matrix columns: V×C1 A×C1 V×C2 A×C2 V×C3 A×C3, plus a constant)
Test C1 > C2: c = [1 1 -1 -1 0 0 0]
Test V > A: c = [1 -1 1 -1 1 -1 0]
Test the category effect (F):
c = [1 1 -1 -1 0 0 0
     0 0 1 1 -1 -1 0
     1 1 0 0 -1 -1 0]
Test the interaction M×C (F):
c = [1 -1 -1 1 0 0 0
     0 0 1 -1 -1 1 0
     1 -1 0 0 -1 1 0]
This design matrix is orthogonal, all contrasts are estimable, and the interactions M×C are modelled. But if there is no interaction ... ? The model is too "big"!
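These factorial contrasts can be generated mechanically with Kronecker products; a small NumPy sketch (the category-major column ordering V×C1, A×C1, V×C2, A×C2, V×C3, A×C3 follows the slide; the trailing 0 for the constant column is appended at the end):

```python
import numpy as np

cat = np.array([[1, -1, 0],      # C1 - C2
                [0, 1, -1],      # C2 - C3
                [1, 0, -1]])     # C1 - C3
mod = np.array([1, -1])          # V - A

modality_effect = np.kron(np.ones(3), mod)   # [1 -1 1 -1 1 -1]
category_effect = np.kron(cat, np.ones(2))   # rows like [1 1 -1 -1 0 0]
interaction     = np.kron(cat, mod)          # rows like [1 -1 -1 1 0 0]

# Pad with a 0 for the constant regressor to get 7-column contrasts:
c_modality = np.append(modality_effect, 0.0)
```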

Asking ourselves some questions ... with a more flexible model, in which each of the six conditions is modelled with two regressors, the second coding for the delay (12 columns plus a constant).
Test C1 > C2? It becomes "test C1 different from C2?":
from c = [1 1 -1 -1 0 0 0]
to c = [1 0 1 0 -1 0 -1 0 0 0 0 0 0
        0 1 0 1 0 -1 0 -1 0 0 0 0 0]
— the test becomes an F-test!
Test V > A? c = [1 0 -1 0 1 0 -1 0 1 0 -1 0 0] is possible, but it is OK only if the regressors coding for the delay are all equal.

[Figure: convolution model — design and contrasts, SPM{t} or SPM{F}, with fitted and adjusted data.]

Toy example: take home ...
F-tests have to be used when:
- testing for > 0 and < 0 effects,
- testing more than 2 levels,
- conditions are modelled with more than one regressor.
F-tests can be viewed as testing for the additional variance explained by a larger model with respect to a simpler one, or the sum of squares of one or several combinations of the betas (here the F-test of b1 − b2 is the same as that of b2 − b1: it is two-tailed, compared with a t-test).

Plan
- Make sure we all know about the estimation (fitting) part ....
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- A (nearly) real example: a bad model ... and a better one

A bad model ... [Figure: true signal and observed signal (---); the model (green, peak at 6 s) against the TRUE signal (blue, peak at 3 s). The fit gives b1 = 0.2 (mean = 0.11), so the test for the green regressor is not significant, and the residual still contains some signal.]

A bad model ... Residual variance = 0.3. P(Y | b1 = 0): p-value = 0.1 (t-test), p-value = 0.2 (F-test). [Figure: Y = Xb + e.]

A « better » model ... [Figure: true signal + observed signal; the model (green and red) and the true signal (blue ---); the red regressor is the temporal derivative of the green regressor; global fit (blue) and partial fits (green & red); adjusted and fitted signal; the residual has a smaller variance.] The t-test of the green regressor is significant; the F-test is very significant; the t-test of the red regressor is very significant.
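To illustrate the idea of adding a temporal-derivative regressor, a toy Python sketch; the Gaussian bump stands in for a canonical HRF and the event spacing is arbitrary (neither is SPM's actual hrf):

```python
import numpy as np

t = np.arange(0, 32, 0.5)                        # peristimulus time (s)
hrf = np.exp(-0.5 * ((t - 6.0) / 2.0) ** 2)      # toy response peaking at 6 s

n = 256
onsets = np.zeros(n); onsets[::64] = 1           # sparse event train (assumed)
green = np.convolve(onsets, hrf)[:n]             # convolution model of the response
red = np.gradient(green)                         # temporal derivative regressor
X = np.column_stack([green, red, np.ones(n)])    # model: response + derivative + mean
```

A small timing error in the data shifts the response; the derivative column absorbs much of that shift, which is why the red regressor can come out very significant.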

A better model ... Residual variance = 0.2. P(Y | b1 = 0): p-value = 0.07 (t-test). P(Y | b1 = 0, b2 = 0): p-value = 0.000001 (F-test). [Figure: Y = Xb + e.]

Flexible models: Gamma basis. [Figure: a set of Gamma basis functions.]

Summary ... (2)
- The residuals should be looked at ...!
- Test flexible models if there is little a priori information.
- In general, use F-tests to look for an overall effect, then look at the response shape.
- Interpreting the test on a single parameter (one regressor) can be difficult: cf. the delay-or-magnitude situation.
- BRING ALL PARAMETERS TO THE 2nd LEVEL.

Lunch ?

Plan
- Make sure we all know about the estimation (fitting) part ....
- Make sure we understand t and F tests
- A (nearly) real example: a bad model ... and a better one
- Correlation in our model: do we mind?

Correlation between regressors. [Figure: true signal; the model (green and red); the fit (blue: global fit); the residual.]

Correlation between regressors. Residual variance = 0.3. P(Y | b1 = 0): p-value = 0.08 (t-test). P(Y | b2 = 0): p-value = 0.07 (t-test). P(Y | b1 = 0, b2 = 0): p-value = 0.002 (F-test). [Figure: Y = Xb + e.]

Correlation between regressors (2). [Figure: true signal; the model (green and red), where the red regressor has been orthogonalised with respect to the green one — everything that correlates with the green regressor has been removed from it; the fit; the residual.]
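Orthogonalising one regressor with respect to another is a one-line projection; a hedged NumPy sketch (the helper name orthogonalise is mine; this is the standard least-squares projection, one common recipe rather than necessarily SPM's exact code path):

```python
import numpy as np

def orthogonalise(r, wrt):
    """Remove from regressor r everything that correlates with regressor(s) wrt,
    by projecting r onto the orthogonal complement of span(wrt)."""
    W = np.atleast_2d(wrt.T).T          # shape (n, k), even for a single regressor
    return r - W @ (np.linalg.pinv(W) @ r)

# e.g. with the green/red regressors from the earlier sketch:
# red_orth = orthogonalise(red, green)
```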

Correlation between regressors (2). Residual variance = 0.3. P(Y | b1 = 0): p-value = 0.0003 (t-test). P(Y | b2 = 0): p-value = 0.07 (t-test). P(Y | b1 = 0, b2 = 0): p-value = 0.002 (F-test). [Figure: Y = Xb + e; fitted values 0.79***, 0.85, 0.06.] Note that orthogonalisation changed the test on the green (non-modified) regressor, not on the red one. See « explore design ».

Design orthogonality: « explore design ». [Figure: the orthogonality matrix of the design; entry (i, j) shows Corr(i, j), the correlation between regressors i and j.] Black = completely correlated, white = completely orthogonal. Beware: when there are more than 2 regressors (C1, C2, C3, ...), you may think that there is little correlation (light grey) between them, but C1 + C2 + C3 may still be correlated with C4 + C5.
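A sketch of how such an orthogonality image can be computed (absolute cosines of the angles between design columns; SPM's exact display conventions may differ):

```python
import numpy as np

def design_orthogonality(X):
    """|cos(angle)| between each pair of design-matrix columns:
    0 = orthogonal (white), 1 = completely correlated (black)."""
    Xn = X / np.linalg.norm(X, axis=0)
    return np.abs(Xn.T @ Xn)

# The caution on the slide: even if all pairwise entries are small, a sum of
# columns (e.g. X @ [1,1,1,0,0]) can still correlate strongly with another
# sum (e.g. X @ [0,0,0,1,1]).
```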

Summary
- We implicitly test for an additional effect only; be careful if there is correlation.
- Orthogonalisation = decorrelation: not generally needed. The parameters and the test on the non-modified regressor change.
- It is always simpler to have orthogonal regressors, and therefore orthogonal designs!
- In case of correlation, use F-tests to see the overall significance; there is generally no way to decide to which regressor the « common » part should be attributed.
- The original regressors may not matter: it is the contrast you are testing that should be as decorrelated as possible from the rest of the design matrix.

Conclusion: check your models
- Check your residuals/model: multivariate toolbox
- Check group homogeneity: Distance toolbox
- Check your HRF form: HRF toolbox
www.madic.org !

Implicit or explicit (^) decorrelation (or orthogonalisation). [Figure: geometric view in the space of X — data Y, fit Xb, error e, regressors C1 and C2, and C2^, the orthogonalised C2.] LC2: test of C2 in the implicit ^ model; LC1^: test of C1 in the explicit ^ model. This generalises when testing several regressors (F-tests). Cf. Andrade et al., NeuroImage, 1999.

« Completely » correlated regressors ... Y = Xb + e, with

X = [1 0 1
     0 1 1
     ...]   (columns: Cond 1, Cond 2, Mean; Mean = C1 + C2)

[Figure: in the space of X, the fit Xb can be reached by many combinations of the columns.] Parameters are not unique in general! Some contrasts have no meaning: NON ESTIMABLE. c = [1 0 0] is not estimable (no specific information about the first regressor); c = [1 -1 0] is estimable.
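A quick way to check estimability numerically: a contrast c'b is estimable if and only if c lies in the row space of X, i.e. c = c X⁺X (a standard linear-model result; the helper name is mine):

```python
import numpy as np

def is_estimable(c, X, tol=1e-10):
    """c'b is estimable iff c is unchanged by projection onto the row space of X."""
    return np.allclose(c, c @ np.linalg.pinv(X) @ X, atol=tol)

X = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [0, 1, 1]], float)                  # Cond1, Cond2, Mean = C1 + C2
print(is_estimable(np.array([1., 0., 0.]), X))    # False: C1 alone
print(is_estimable(np.array([1., -1., 0.]), X))   # True:  C1 - C2
```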