FMRI GLM Analysis, 7/15/2014 (Tuesday), Yingying Wang

[Figure, SPM pipeline overview: image time-series → realignment → smoothing (kernel) → normalization (template) → general linear model (design matrix) → parameter estimates → statistical parametric map (SPM) → statistical inference via random field theory, p < 0.05] 2

Parametric statistics:
- one-sample t-test
- two-sample t-test
- paired t-test
- ANOVA
- ANCOVA
- correlation
- linear regression
- multiple regression
- F-tests
- etc.
… all cases of the General Linear Model 3

The General Linear Model (GLM)
T-tests, correlations, and Fourier analysis work for simple designs and were common in the early days of imaging. The General Linear Model (GLM) is now available in many software packages and tends to be the analysis of choice.
Why is the GLM so great?
- the GLM is an overarching tool that can do anything the simpler tests do
- you can examine any combination of contrasts (e.g., intact - scrambled, scrambled - baseline) with one GLM rather than multiple correlations
- the GLM allows much greater flexibility for combining data within subjects and between subjects 4

The General Linear Model (GLM)
- it also makes it much easier to counterbalance orders and discard bad sections of data
- the GLM allows you to model things that may account for variability in the data even though they aren't interesting in and of themselves (e.g., head motion)
- as we will see later in the course, the GLM also allows you to use more complex designs (e.g., factorial designs) 5

A mass-univariate approach: the same model is fitted independently to the time series of every voxel. [Figure: a series of brain volumes over time, with the model applied voxel by voxel] 6
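
A minimal numpy sketch of the mass-univariate idea: with the data arranged as a scans-by-voxels matrix, one least-squares call fits the same design to every voxel at once. All shapes and values here are illustrative assumptions, not from the slides.

```python
# The same model, fit independently at every voxel: with Y of shape
# (n_scans, n_voxels), a single lstsq call returns one beta per regressor
# per voxel.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 100, 64 * 64 * 12
X = np.column_stack([rng.standard_normal(n_scans),  # illustrative regressor
                     np.ones(n_scans)])             # constant term
Y = rng.standard_normal((n_scans, n_voxels))        # stand-in image time-series

betas, *_ = np.linalg.lstsq(X, Y, rcond=None)       # shape: (n_regressors, n_voxels)
residuals = Y - X @ betas                           # one residual series per voxel
```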


General Linear Model
Equation for a single voxel, scan j:
y_j = x_{j1} β_1 + … + x_{jL} β_L + ε_j,   ε_j ~ N(0, σ²)
- y_j: data for scan j = 1…J
- x_{jl}: explanatory variables / covariates / regressors, l = 1…L
- β_l: parameters / regression slopes / fixed effects
- ε_j: residual errors, independent and identically distributed ("i.i.d."; Gaussian, mean zero, standard deviation σ)
Equivalent matrix form: y = Xβ + ε
X: "design matrix" / model 8

GLM 9

The Design Matrix 10

Estimation of the parameters
i.i.d. assumption: ε ~ N(0, σ²I)
OLS estimate: β̂ = (XᵀX)⁻¹Xᵀy 11

OLS (Ordinary Least Squares)
The goal is to minimize the quadratic error between data and model:
εᵀε = (y − Xβ)ᵀ(y − Xβ)
This is a scalar (and the transpose of a scalar is a scalar). You find the extremum (max or min) of a function by taking its derivative and setting it to zero; doing so here yields the OLS estimate β̂ = (XᵀX)⁻¹Xᵀy. 12
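
To make the estimation concrete, here is a minimal sketch that simulates one voxel's data from a small design matrix and recovers the betas by OLS; the design, beta values, and noise level are invented for illustration.

```python
# Simulate y = X*beta + error for one voxel, then estimate beta by OLS.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100
X = np.column_stack([
    rng.standard_normal(n_scans),   # regressor for condition 1 (illustrative)
    rng.standard_normal(n_scans),   # regressor for condition 2 (illustrative)
    np.ones(n_scans),               # constant (baseline) term
])
beta_true = np.array([2.0, -1.0, 5.0])
y = X @ beta_true + rng.standard_normal(n_scans)

# OLS estimate beta_hat = (X'X)^{-1} X'y, computed via lstsq for stability.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
print(beta_hat)   # close to beta_true
```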


What are the problems?
1. BOLD responses have a delayed and dispersed form.
2. The BOLD signal includes substantial amounts of low-frequency noise.
3. The data are serially correlated (temporally autocorrelated); this violates the assumptions of the noise model in the GLM. 14

Problem 1: Shape of the BOLD response
The response of a linear time-invariant (LTI) system is the convolution of the input with the system's response to an impulse (delta function). 15

Solution 1: Convolution model of the BOLD response
expected BOLD response = input function ⊗ impulse response function (HRF)
[Figure: blue = data; green = predicted response, taking into account convolution with the HRF; red = predicted response, NOT taking the HRF into account] 16
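
A small Python sketch of the convolution model. The double-gamma HRF below uses typical default parameters (peak near 6 s, undershoot near 16 s); it approximates a canonical HRF but is not claimed to match SPM's exact implementation, and the boxcar timing is invented.

```python
# Expected BOLD response = stimulus function convolved with an HRF.
import numpy as np
from scipy.stats import gamma

tr = 1.0                        # repetition time in seconds (assumed)
t = np.arange(0, 32, tr)        # HRF support, 0-32 s
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # double-gamma approximation
hrf /= hrf.sum()                # normalize amplitude

# Boxcar input function: stimulus 'on' for scans 20-29 and 60-69 (illustrative).
n_scans = 100
stimulus = np.zeros(n_scans)
stimulus[20:30] = 1.0
stimulus[60:70] = 1.0

# Convolve and truncate to the scan length; this becomes a design-matrix column.
expected_bold = np.convolve(stimulus, hrf)[:n_scans]
```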

Problem 2: Low-frequency noise
[Figure, linear model fit: blue = data; black = mean + low-frequency drift; green = predicted response, taking into account low-frequency drift; red = predicted response, NOT taking into account low-frequency drift] 17

Solution 2: High-pass filtering using a discrete cosine transform (DCT) set 18
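
A sketch of building a DCT high-pass basis set, assuming a 128 s cutoff (a common default); the exact convention for the number of basis functions varies between packages.

```python
# Low-frequency cosine regressors that can be added to the design
# (or regressed out) to absorb slow drifts.
import numpy as np

def dct_highpass_set(n_scans, tr, cutoff=128.0):
    """Return DCT basis functions with periods longer than `cutoff` seconds."""
    n_basis = int(np.floor(2 * n_scans * tr / cutoff))
    t = np.arange(n_scans)
    basis = [np.sqrt(2.0 / n_scans) * np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans))
             for k in range(1, n_basis + 1)]
    return np.column_stack(basis) if basis else np.empty((n_scans, 0))

drift_regressors = dct_highpass_set(n_scans=100, tr=2.0, cutoff=128.0)
print(drift_regressors.shape)   # (100, 3) for these settings
```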

Problem 3: Serial correlations
With n scans, the error covariance is an n × n matrix. Under the i.i.d. assumption it is σ²I; serial correlations produce non-identity (unequal variances) and non-independence (non-zero covariance between time points t1 and t2). 19

Problem 3: Serial correlations
The n × n autocovariance matrix (n: number of scans) can be modelled with a first-order autoregressive process, AR(1): e_t = a·e_{t−1} + z_t. 20

Solution 3: Pre-whitening
1. Use an enhanced noise model with multiple error covariance components, i.e. e ~ N(0, σ²V) instead of e ~ N(0, σ²I).
2. Use the estimated serial correlation to specify a filter matrix W for whitening the data: Wy = WXβ + We, where the whitened error We is i.i.d. 21

How do we define W?
Enhanced noise model: e ~ N(0, σ²V). Remember the linear transform for Gaussians: We ~ N(0, σ²WVWᵀ). Choose W such that the error covariance becomes spherical: WVWᵀ = I, i.e. W = V^(−1/2).
Conclusion: W is a simple function of V, so how do we estimate V? 22
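
A sketch of pre-whitening under an assumed AR(1) error model with a known coefficient; in practice V is estimated from the data (e.g. by ReML in SPM), so the fixed value here is purely illustrative.

```python
# Build an AR(1) covariance V, compute W = V^{-1/2}, and whiten the model.
import numpy as np

n_scans, a = 100, 0.3
# AR(1) covariance (up to sigma^2): V[i, j] = a^{|i-j|}
idx = np.arange(n_scans)
V = a ** np.abs(idx[:, None] - idx[None, :])

# W = V^{-1/2} via the eigendecomposition of the symmetric matrix V.
eigvals, eigvecs = np.linalg.eigh(V)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T

# Whiten data and design, then fit OLS on the whitened model:
# Wy = WX beta + We, with We ~ N(0, sigma^2 I). Assuming X, y exist:
# beta_hat, *_ = np.linalg.lstsq(W @ X, W @ y, rcond=None)
```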

Linear model: summary
A GLM includes all known experimental effects and confounds:
- convolution with a canonical HRF
- high-pass filtering to account for low-frequency drifts
- estimation of multiple variance components (e.g. to account for serial correlations)
From the fit we obtain effects estimates, an error estimate, and a statistic. 23

Contrasts in the GLM
- We can examine whether a single predictor is significant (compared to the baseline).
- We can also examine whether a single predictor is significantly greater than another predictor. 24

Implementation of GLM in SPM 25

Contrasts [Figure: example contrast vectors] 26

Hypothesis Testing: Introduction
Is the mean of a measurement different from zero? Take the mean of several measurements, over many experiments. The ratio of effect vs. noise gives a t-statistic.
Null distribution of T: what distribution of T would we get for μ = 0?
[Figure: sampling distributions around 0 and μ for Experiment A and Experiment B] 27

Hypothesis Testing
- Null hypothesis H0: typically what we want to disprove (no effect). The alternative hypothesis HA expresses the outcome of interest.
- Test statistic T: to test a hypothesis, we construct a "test statistic" that summarises evidence about H0. Typically, the test statistic is small in magnitude when H0 is true and large when it is false.
- We need to know the distribution of T under the null hypothesis (the null distribution of T). 28

Hypothesis Testing
- p-value: summarises evidence against H0; it is the chance of observing a value more extreme than t under the null distribution of T.
- Significance level α: the acceptable false positive rate. The threshold u_α controls the false positive rate: α = p(T > u_α | H0).
- Conclusion about the hypothesis: we reject the null hypothesis in favour of the alternative hypothesis if t > u_α. 29
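
A toy illustration in Python: computing a one-sided p-value and the threshold u_α from Student's t null distribution; the observed t and the degrees of freedom are made-up numbers.

```python
# p-value: chance of a value more extreme than the observed t under H0.
from scipy import stats

t_observed = 2.5
dof = 97                                  # residual degrees of freedom (assumed)
p_value = stats.t.sf(t_observed, dof)     # one-sided: P(T > t | H0)
u_alpha = stats.t.isf(0.05, dof)          # threshold controlling alpha = 0.05
print(p_value, t_observed > u_alpha)      # reject H0 if t exceeds u_alpha
```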

T-test: one-dimensional contrasts, SPM{t}
Question: is the effect of interest > 0? e.g., is the amplitude β₁ > 0? Equivalently, cᵀβ > 0 with contrast vector c = [1 0 0 0 0 …]ᵀ over β₁, β₂, β₃, β₄, β₅, …
Null hypothesis: H0: cᵀβ = 0
Test statistic: T = contrast of estimated parameters / variance estimate = cᵀβ̂ / sqrt(σ̂² cᵀ(XᵀX)⁻¹c) 30
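
A sketch of this t-statistic in numpy, reusing a design X and data y assumed to exist from an earlier fit; the contrast in the usage comment is illustrative.

```python
# T = c'beta_hat / sqrt(sigma2_hat * c'(X'X)^{-1}c) for a contrast vector c.
import numpy as np

def t_contrast(X, y, c):
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2_hat = resid @ resid / dof                     # ResMS in SPM terms
    var_contrast = sigma2_hat * c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta_hat) / np.sqrt(var_contrast), dof

# e.g. test beta_1 > 0 in a 3-regressor design with c = [1, 0, 0]:
# T, dof = t_contrast(X, y, np.array([1.0, 0.0, 0.0]))
```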

T-contrast in SPM
For a given contrast c: the beta_???? images (parameter estimates) combine into the con_???? image (cᵀβ̂); together with the ResMS image (residual mean square, σ̂²) this yields the spmT_???? image, the SPM{t}. 31

T-test: a simple example
Q: activation during listening? Passive word listening versus rest. Null hypothesis: β₁ = 0.
[SPM results table: voxel-level uncorrected p, T, Z-equivalent, and mm coordinates, thresholded at T = {p < 0.001}] 32

T-test: summary
- The T-test is a signal-to-noise measure (ratio of estimate to standard deviation of estimate).
- Alternative hypothesis: H0: cᵀβ = 0 vs. HA: cᵀβ > 0.
- T-contrasts are simple combinations of the betas; the T-statistic does not depend on the scaling of the regressors or the scaling of the contrast. 33

Scaling issue
The T-statistic does not depend on the scaling of the regressors, nor on the scaling of the contrast. But be careful with the interpretation of the contrast values themselves (e.g., for a second-level analysis): sum ≠ average. The contrast value does depend on scaling: averaging over four sessions for subject 1 requires [1 1 1 1]/4, while averaging over three sessions for subject 5 requires [1 1 1]/3. 34

F-test: the extra-sum-of-squares principle
Model comparison: full model X = [X₀ X₁] or reduced model X₀?
Null hypothesis H0: the true model is X₀ (the reduced model).
Test statistic: the ratio of explained variability to unexplained variability (error):
F = ((RSS₀ − RSS)/ν₁) / (RSS/ν₂), with ν₁ = rank(X) − rank(X₀) and ν₂ = N − rank(X). 35
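
A sketch of the extra-sum-of-squares F-test for nested models; X_full, X_reduced, and y are assumed inputs, with X_reduced's columns a subset of X_full's.

```python
# Compare a full model against a nested reduced model via the F statistic.
import numpy as np
from scipy import stats

def f_test_nested(X_full, X_reduced, y):
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    rss_full, rss_red = rss(X_full), rss(X_reduced)
    nu1 = np.linalg.matrix_rank(X_full) - np.linalg.matrix_rank(X_reduced)
    nu2 = len(y) - np.linalg.matrix_rank(X_full)
    F = ((rss_red - rss_full) / nu1) / (rss_full / nu2)
    return F, stats.f.sf(F, nu1, nu2)    # F statistic and its p-value
```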

F-test: multidimensional contrasts, SPM{F}
Test multiple linear hypotheses: full model X = [X₀ X₁(β₃₋₈)] or reduced model X₀?
Null hypothesis H0: β₃ = β₄ = β₅ = β₆ = β₇ = β₈ = 0, i.e. cᵀβ = 0 for a multi-row contrast picking out β₃…β₈.
Is any of β₃–β₈ not equal to 0? 36

F-contrast in SPM
For a given contrast: the beta_???? images give the ess_???? images (extra sum of squares, RSS₀ − RSS); together with the ResMS image these yield the spmF_???? images, the SPM{F}. 37

F-test example: movement-related effects
[Figure: design matrix and contrast(s)] 38

F-test: summary
- F-tests can be viewed as testing for the additional variance explained by a larger model with respect to a simpler (nested) model: model comparison.
- F tests a weighted sum of squares of one or several combinations of the regression coefficients β.
- Hypotheses: H0: β₃ = β₄ = … = 0 vs. HA: at least one of them ≠ 0.
- In practice, we don't have to explicitly separate X into [X₁ X₂] thanks to multidimensional contrasts.
- When testing a uni-dimensional contrast with an F-test, for example β₁ − β₂, the result will be the same as testing β₂ − β₁. It will be exactly the square of the t-test, testing for both positive and negative effects. 39

Orthogonal regressors vs. correlated regressors
[Figure sequence, slides 40–47: Venn diagrams of the variability in Y explained by each regressor. With orthogonal regressors, each regressor accounts for a distinct portion of the variability in Y; with correlated regressors, a shared portion of variance cannot be uniquely attributed to either regressor.]

Design orthogonality
For each pair of columns of the design matrix, the orthogonality matrix depicts the magnitude of the cosine of the angle between them, with the range 0 to 1 mapped from white to black. If both vectors have zero mean, then the cosine of the angle between the vectors is the same as the correlation between the two variates. 48
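
A sketch of that orthogonality measure in numpy: the absolute cosine of the angle between each pair of design-matrix columns.

```python
# |cos(angle)| between design columns: 0 = orthogonal, 1 = collinear.
import numpy as np

def orthogonality_matrix(X):
    norms = np.linalg.norm(X, axis=0)
    cos_angles = (X.T @ X) / np.outer(norms, norms)
    return np.abs(cos_angles)

# For mean-centred columns this equals |correlation| between regressors.
```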

Correlated regressors: summary
- We implicitly test for an additional effect only. When testing for the first regressor, we effectively remove the part of the signal that can be accounted for by the second regressor: implicit orthogonalisation.
- Orthogonalisation = decorrelation. Parameters and tests on the non-modified regressor change. This rarely solves the problem, as it requires assumptions about which regressor should be uniquely attributed the common variance.
- Better: change the regressors (i.e. the design) instead, e.g. factorial designs, and use F-tests to assess overall significance.
- The original regressors may not matter: it is the contrast you are testing that should be as decorrelated as possible from the rest of the design matrix.
[Figure: orthogonalising x₂ with respect to x₁: x⊥ = x₂ − (x₁ᵀx₂)x₁ for unit-norm x₁] 49

Design efficiency
- The aim is to minimize the standard error of a t-contrast (i.e. the denominator of the t-statistic).
- This is equivalent to maximizing the efficiency e(σ², c, X) = (σ² cᵀ(XᵀX)⁻¹c)⁻¹, which combines the noise variance and the design variance.
- If we assume that the noise variance is independent of the specific design, efficiency depends only on the contrast and the design: e(c, X) ∝ 1 / (cᵀ(XᵀX)⁻¹c).
- This is a relative measure: all we can really say is that one design is more efficient than another (for a given contrast). 50

Design efficiency
[Figure: regressors A and B, with contrasts A+B and A−B]
- High correlation between regressors leads to low sensitivity to each regressor alone.
- We can still estimate the difference between them efficiently. 51
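
A numpy sketch illustrating this point with an invented pair of strongly (anti-)correlated regressors: efficiency is poor for each regressor alone but good for their difference, consistent with the slide's claim.

```python
# e(c, X) = 1 / (c'(X'X)^{-1} c), up to the noise variance.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal(200)
B = -A + 0.1 * rng.standard_normal(200)   # B strongly anti-correlated with A
X = np.column_stack([A, B])

def efficiency(X, c):
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)

print(efficiency(X, np.array([1.0, 0.0])))    # A alone: low efficiency
print(efficiency(X, np.array([1.0, -1.0])))   # A - B: estimated efficiently
```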

Types of error (Decision vs. Reality)
- Reality H0, decide H0: true negative (TN)
- Reality H0, decide H1: false positive (FP), type I error (rate α)
- Reality H1, decide H1: true positive (TP)
- Reality H1, decide H0: false negative (FN), type II error (rate β)
specificity: 1 − α = TN / (TN + FP) = proportion of actual negatives which are correctly identified
sensitivity (power): 1 − β = TP / (TP + FN) = proportion of actual positives which are correctly identified 52

Correction for Multiple Comparisons
With conventional probability levels (e.g., p < .05) and a huge number of comparisons (e.g., 64 × 64 × 12 = 49,152 voxels), a lot of voxels will be significant purely by chance:
e.g., .05 × 49,152 ≈ 2458 voxels significant due to chance
How can we avoid this? We need to correct for multiple comparisons. 53

(1) Bonferroni correction
Divide the desired p value by the number of comparisons.
Example: desired p value: p < .05; number of voxels: 50,000; required p value: p < .05 / 50,000 = p < .000001
This is quite conservative, so less stringent variants can be used:
- e.g., Brain Voyager can use the number of voxels in the cortical surface
- the luxury of a localizer task is that it allows you to correct for only those voxels you are interested in (i.e., the voxels that fall within your ROI) 54
Source: Jody Culham's web slides
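
A toy sketch of the arithmetic: the Bonferroni threshold, and the expected number of chance "activations" at an uncorrected threshold. The p-values here are random stand-ins, not real data.

```python
# Bonferroni: per-voxel threshold = desired alpha / number of comparisons.
import numpy as np

alpha = 0.05
n_voxels = 50_000
bonferroni_threshold = alpha / n_voxels                       # 1e-06
p_values = np.random.default_rng(2).uniform(size=n_voxels)    # stand-in p-values
n_significant = np.sum(p_values < bonferroni_threshold)       # ~0 under the null

# Expected false positives at the UNCORRECTED threshold:
print(alpha * n_voxels)   # ~2500 voxels significant by chance alone
```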

(2) Cluster correction
- Falsely activated voxels should be randomly dispersed.
- Set the minimum cluster size large enough to make it unlikely that a cluster of that size would occur by chance.
- Assumes that data from adjacent voxels are uncorrelated (not true). 55

(3) Test-retest reliability
Perform statistical tests on each half of the data. The probability of a given voxel appearing in both halves purely by chance is the square of the p value used in each half, e.g., .001 × .001 = .000001. Alternatively, use the first half to select an ROI and evaluate your hypothesis in the second half. 56

(4) Consistency between subjects
Though seldom discussed, consistency between subjects provides further encouragement that activation is not spurious. Present group and individual data. 57
(5) Screen saver scan
You can test how many voxels really do light up when really nothing is happening.

There is a multiple testing problem ('voxel' or 'blob' perspective): we want to detect an effect of unknown extent and location, so more corrections are needed.
[Figure: inference levels (voxel, ROI, field) by height and extent, with FWE and FDR corrections; *voxels/volume] 58

Book (must read)
Statistical Parametric Mapping: The Analysis of Functional Brain Images. Elsevier, 2007. 59

Home Work
…ual.pdf#Chap:data:auditory (SPM manual, auditory dataset chapter)
- Walk through SPM manual chapter 28.
- Sample data can be found in the general folder.
- If you have any questions, ask me or Danielle.
- Bonus for those who finish the homework by next training session. 60