SPM short course – May 2008 Linear Models and Contrasts Jean-Baptiste Poline Neurospin, I2BM, CEA Saclay, France.


[Slide: the SPM pipeline] realignment & coregistration → smoothing → normalisation (anatomical reference, spatial filter) → images; images + design matrix + your question (a contrast) → General Linear Model → linear fit → statistical image; statistical map → uncorrected p-values, or via Random Field Theory → corrected p-values; adjusted data.

Plan
- REPEAT: model and fitting the data with a Linear Model
- Make sure we understand the testing procedures: t- and F-tests
- But what do we test exactly?
- Examples – almost real

fMRI temporal series → voxel time course (amplitude over time). One voxel = one test (t, F, ...). General Linear Model → fitting → statistical image (SPM).

Regression example… voxel time series = β1 × (box-car reference function) + β2 × (mean value) + error. Fit the GLM.

…revisited: matrix form. Y = β1 f1(t) + β2 f2(t) + ε

Box-car regression: design matrix… Y = Xβ + ε: data vector (voxel time series) = design matrix × parameters + error vector.

What if we believe that there are drifts?

Add more reference functions... Discrete cosine transform basis functions
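As a sketch of the drift regressors described above (the function name and parameters here are illustrative, not SPM's API), a discrete cosine transform basis can be generated like this:

```python
import numpy as np

def dct_drift_basis(n_scans, n_basis):
    """DCT-style basis functions for modelling slow scanner drifts
    (a sketch of the kind of regressors added to the design matrix)."""
    t = np.arange(n_scans)
    basis = np.zeros((n_scans, n_basis))
    for k in range(n_basis):
        # k-th cosine, increasing frequency with k
        basis[:, k] = np.cos(np.pi * (k + 1) * (2 * t + 1) / (2 * n_scans))
    return basis

drifts = dct_drift_basis(n_scans=100, n_basis=3)
```

These columns are mutually orthogonal, which is one reason the DCT is a convenient choice for nuisance regressors.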

…design matrix: Y = Xβ + ε — data vector = design matrix × parameters + error vector. β = the betas (here: β1 to β9).

Fitting the model = finding some estimate of the betas. Raw fMRI time series, adjusted for low-Hz effects = fitted "high-pass filter" (fitted drift) + fitted signal + residuals. Noise variance: s² = (sum of the squared residuals) / (number of time points minus the number of estimated betas).

Fitting the model = finding some estimate of the betas. Y = Xβ + ε. Finding the betas = minimising the sum of squares of the residuals: ‖Y − Xβ‖² = Σi [yi − Xiβ]². When β are estimated: let's call them b. When ε is estimated: let's call it e. Estimated SD of ε: let's call it s.
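A minimal numerical sketch of this least-squares fit (the box-car, the true betas and the noise level are simulated for illustration, not data from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                             # time points
boxcar = ((np.arange(n) // 10) % 2).astype(float)   # on/off reference function
X = np.column_stack([boxcar, np.ones(n)])           # design matrix: effect + mean
Y = X @ np.array([2.0, 5.0]) + rng.normal(0, 1, n)  # simulated voxel time series

# b minimises ||Y - Xb||^2; the pseudo-inverse also handles rank-deficient X
b = np.linalg.pinv(X) @ Y                           # estimated betas
e = Y - X @ b                                       # estimated residuals
s2 = e @ e / (n - np.linalg.matrix_rank(X))         # estimated noise variance
```

With 100 time points the estimates land close to the simulated betas (2 and 5) and the simulated noise variance (1).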

Take home...
- We put in our model regressors (or covariates) that represent how we think the signal is varying (of interest and of no interest alike). BUT WHICH ONES TO INCLUDE?
- Coefficients (= parameters) are estimated by minimising the fluctuations – the variability, the variance – of the noise (the residuals).
- Because the parameters depend on the scaling of the regressors included in the model, one should be careful when comparing manually entered regressors.

Plan
- Make sure we all know about the estimation (fitting) part...
- Make sure we understand t and F tests
- But what do we test exactly?
- An example – almost real

T test – one-dimensional contrasts – SPM{t}. A contrast = a weighted sum of parameters: c′β. For c′ = [1 0 0 0 0 ...], compute 1 × b1 + 0 × b2 + 0 × b3 + 0 × b4 + 0 × b5 + ... and ask: is it > 0? Divide by the estimated standard deviation of b: T = contrast of estimated parameters / variance estimate = c′b / sqrt(s² c′(X′X)⁻ c).

How is this computed? (t-test)
Estimation [Y, X] → [b, s]:
Y = Xβ + ε, ε ~ N(0, σ²I) (Y: at one position)
b = (X′X)⁺ X′Y (b: estimate of β) → beta??? images
e = Y − Xb (e: estimate of ε)
s² = e′e / (n − p) (s: estimate of σ; n: time points, p: parameters) → 1 image, ResMS
Test [b, s², c] → [c′b, t]:
Var(c′b) = s² c′(X′X)⁺ c (computed for each contrast c, proportional to s²)
t = c′b / sqrt(s² c′(X′X)⁺ c) → c′b images: spm_con???; t images: spm_t???
Under the null hypothesis H0: t ~ Student-t(df), df = n − p
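The recipe above can be sketched end to end in a few lines (simulated design and data, chosen only to exercise the formulas):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
boxcar = ((np.arange(n) // 10) % 2).astype(float)
X = np.column_stack([boxcar, np.ones(n)])           # design: effect + mean
Y = X @ np.array([2.0, 5.0]) + rng.normal(0, 1, n)  # simulated voxel data

b = np.linalg.pinv(X) @ Y                  # b = (X'X)+ X'Y
e = Y - X @ b                              # e = Y - Xb
df = n - np.linalg.matrix_rank(X)          # df = n - p
s2 = e @ e / df                            # s^2 = e'e / (n - p)

c = np.array([1.0, 0.0])                   # contrast: test the box-car beta
var_cb = s2 * c @ np.linalg.pinv(X.T @ X) @ c
t = c @ b / np.sqrt(var_cb)                # t ~ Student-t(df) under H0
```

Since the simulated effect is large relative to the noise, the resulting t value is well above conventional thresholds.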

F-test: a reduced model. H0: β1 = 0, i.e. H0: the true model is X0. This (full) model [X1 X0]? Or this one [X0]? F ∝ (S0² − S²) / S². For a one-dimensional c′, T values become F values: F = T². Both "activations" and "deactivations" are tested; voxel-wise p-values are halved.

F-test: a reduced model, or... tests of multiple linear hypotheses: does X1 model anything? H0: the true (reduced) model is X0. This (full) model [X1 X0]? Or this one [X0]? F = (additional variance accounted for by the tested effects) / (error variance estimate) ∝ (S0² − S²) / S².

F-test: a reduced model, or... multi-dimensional contrasts? c′ tests multiple linear hypotheses. Ex: do the drift functions model anything? H0: β3–9 = (0 0 0 0 0 0 0), i.e. the true model is X0. This (full) model [X1 X0]? Or this one [X0]?

How is this computed? (F-test)
Estimation [Y, X] → [b, s]:
Y = Xβ + ε, ε ~ N(0, σ²I)
Reduced model: Y = X0 β0 + ε0, ε0 ~ N(0, σ0²I); X0: reduced X
Test [b, s, c] → [ess, F]:
F = (additional variance accounted for by the tested effects) / (error variance estimate), F ∝ (s0² − s²) / s² → image spm_ess???; image of F: spm_F???
Under the null hypothesis: F ~ F(p − p0, n − p)
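The full-versus-reduced comparison can be sketched numerically (design and numbers are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
boxcar = ((np.arange(n) // 10) % 2).astype(float)
drift = np.linspace(-1.0, 1.0, n)
X = np.column_stack([boxcar, drift, np.ones(n)])       # full model, p = 3
X0 = np.ones((n, 1))                                   # reduced model, p0 = 1
Y = X @ np.array([2.0, 1.0, 5.0]) + rng.normal(0, 1, n)

def rss(M, Y):
    """Residual sum of squares after a least-squares fit of M."""
    e = Y - M @ (np.linalg.pinv(M) @ Y)
    return e @ e

p, p0 = 3, 1
S2, S02 = rss(X, Y), rss(X0, Y)                        # full and reduced RSS
# additional variance explained by the tested effects vs. error variance
F = ((S02 - S2) / (p - p0)) / (S2 / (n - p))           # ~ F(p - p0, n - p) under H0
```

Because the box-car and drift effects are real in this simulation, the reduced model leaves much more residual variance and F comes out large.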

T and F tests: take home...
- T tests are simple combinations of the betas; they are either positive or negative (b1 − b2 is different from b2 − b1).
- F tests can be viewed as testing for the additional variance explained by a larger model w.r.t. a simpler model, or
- F tests the sum of the squares of one or several combinations of the betas.
- When testing a "single contrast" with an F test, e.g. b1 − b2, the result is the same as for b2 − b1. It is exactly the square of the t-test, testing for both positive and negative effects.

Plan
- Make sure we all know about the estimation (fitting) part...
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- An example – almost real

« Additional variance » : Again Independent contrasts

« Additional variance »: again. Correlated regressors, for example green = subject age, yellow = subject score. Testing for the green.

« Additional variance »: again. Correlated contrasts. Testing for the red.

« Additional variance »: again. Entirely correlated contrasts? Non-estimable! Testing for the green.

« Additional variance »: again. Entirely correlated contrasts? Non-estimable! Testing for the green and yellow: if significant, it could be G or Y!
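The non-estimability of entirely correlated regressors can be seen directly from the rank of the design matrix (a toy sketch; "green" and "yellow" stand in for the hypothetical regressors on the slides):

```python
import numpy as np

n = 20
green = np.linspace(0.0, 1.0, n)            # e.g. subject age
yellow = green.copy()                       # entirely correlated with green
X = np.column_stack([green, yellow, np.ones(n)])

# X'X is singular: the individual green and yellow betas are not estimable,
# only combinations such as (green + yellow) are.
rank = np.linalg.matrix_rank(X)             # 2, although X has 3 columns
```

This is why SPM-style fitting uses a pseudo-inverse: a fit still exists, but contrasts that try to separate green from yellow are flagged as non-estimable.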

An example: real Testing for first regressor: T max = 9.8

Including the movement parameters in the model. Testing for the first regressor: the activation is gone!

Convolution model Design and contrast SPM(t) or SPM(F) Fitted and adjusted data

Plan
- Make sure we all know about the estimation (fitting) part...
- Make sure we understand t and F tests
- But what do we test exactly? Correlation between regressors
- An example – almost real

A real example (almost!). Factorial design with 2 factors: modality and category. 2 levels for modality (e.g. Visual/Auditory); 3 levels for category (e.g. 3 categories of words). [Experimental design and design matrix shown, with conditions V, A and C1, C2, C3.]

Asking ourselves some questions... (design matrix with columns V, A, C1, C2, C3)
- Test C1 > C2: c = [ ]
- Test V > A: c = [ ]
- Test C1, C2, C3? (F): c = [ ]
- Test the interaction M×C?
The design matrix is not orthogonal; many contrasts are non-estimable; the interactions M×C are not modelled.

Modelling the interactions

Asking ourselves some questions... (design matrix with one column per Modality × Category cell: V and A for each of C1, C2, C3)
- Test C1 > C2: c = [ ]
- Test V > A: c = [ ]
- Test the category effect: (F) c = [ ]
- Test the interaction M×C: (F) c = [ ]
The design matrix is orthogonal; all contrasts are estimable; the interactions M×C are modelled. If there is no interaction...? The model is too "big"!
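The slides leave the contrast vectors blank. Purely as an illustration, assuming a column order [V·C1, V·C2, V·C3, A·C1, A·C2, A·C3, mean] for the cell-by-cell design (this ordering is my assumption, not taken from the slides), the contrasts could look like:

```python
import numpy as np

# Assumed column order (hypothetical, for illustration):
# [V-C1, V-C2, V-C3, A-C1, A-C2, A-C3, mean]
c_C1_gt_C2 = np.array([1, -1, 0, 1, -1, 0, 0])     # C1 > C2 (t contrast)
c_V_gt_A   = np.array([1, 1, 1, -1, -1, -1, 0])    # V > A (t contrast)
F_category = np.array([[1, -1, 0, 1, -1, 0, 0],    # category effect (F, 2 rows)
                       [0, 1, -1, 0, 1, -1, 0]])
F_MxC      = np.array([[1, -1, 0, -1, 1, 0, 0],    # interaction M x C (F, 2 rows)
                       [0, 1, -1, 0, -1, 1, 0]])
```

Note that the F contrasts are matrices with one row per linear hypothesis, matching the multi-dimensional contrasts discussed earlier.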

Asking ourselves some questions... With a more flexible model (one column per Modality × Category cell: V and A for each of C1, C2, C3, plus regressors coding for the delay)
- Test C1 > C2? Test "C1 different from C2"? From c = [ ] to c = [ ]: it becomes an F test!
- Test V > A? c = [ ] is possible, but it is OK only if the regressors coding for the delay are all equal.

Toy example: take home...
- F tests have to be used when:
  - testing for both >0 and <0 effects
  - testing for more than 2 levels in factorial designs
  - conditions are modelled with more than one regressor
- F tests can be viewed as testing for:
  - the additional variance explained by a larger model w.r.t. a simpler model, or
  - the sum of the squares of one or several combinations of the betas (here the F test of b1 − b2 is the same as b2 − b1, but two-tailed compared to a t-test).