1
General Linear Model & Classical Inference. Guillaume Flandin, Wellcome Trust Centre for Neuroimaging, University College London. SPM M/EEG Course, London, May 2013
2
Overview (figure): contrast c, Random Field Theory
3
Statistical Parametric Maps (figure): 3D (M/EEG source reconstruction, fMRI, VBM); 2D time-frequency; 2D+t scalp-time; 1D time.
4
ERP example. Random presentation of 'faces' and 'scrambled faces'; 70 trials of each type; 128 EEG channels. Question: is there a difference between the ERP of 'faces' and that of 'scrambled faces'?
5
ERP example: channel B9, focusing on the N170 component. The t-statistic compares the size of the effect to its error standard deviation.
6
Data modelling: Data = β1 · Faces + β2 · Scrambled + Error, i.e. Y = X1·β1 + X2·β2 + e.
7
Design matrix: Y = Xβ + e, where Y is the data vector, X the design matrix, β the parameter vector and e the error vector.
8
General Linear Model: Y = Xβ + e. The model is specified by (1) the design matrix X and (2) assumptions about the error e. N: number of scans; p: number of regressors. The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds.
9
GLM: a flexible framework for parametric analyses: one-sample t-test, two-sample t-test, paired t-test, analysis of variance (ANOVA), analysis of covariance (ANCOVA), correlation, linear regression, multiple regression.
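As an illustration of this flexibility, a two-sample t-test can be written as a GLM with one indicator regressor per group. This is a NumPy sketch with simulated data, not SPM code; the group sizes and effect size are made up:

```python
# Two-sample t-test as a GLM: one indicator regressor per group.
# Simulated data for illustration (group sizes and effect are made up).
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 30, 30
y = np.concatenate([rng.normal(1.0, 1.0, n1), rng.normal(0.0, 1.0, n2)])

X = np.zeros((n1 + n2, 2))
X[:n1, 0] = 1.0          # group 1 indicator
X[n1:, 1] = 1.0          # group 2 indicator

beta = np.linalg.pinv(X) @ y                      # OLS: the two group means
e = y - X @ beta
dof = len(y) - np.linalg.matrix_rank(X)           # N - rank(X)
sigma2 = e @ e / dof                              # pooled error variance
c = np.array([1.0, -1.0])                         # contrast: group 1 - group 2
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
```

The parameter estimates here are exactly the two group means, and with the contrast cᵀ = [1 -1] the resulting t equals the classical pooled two-sample t-statistic.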
10
Parameter estimation. Objective: estimate the parameters β so as to minimise the sum of squared errors eᵀe. Ordinary least squares (OLS) estimation, assuming i.i.d. errors: β̂ = (XᵀX)⁻¹XᵀY.
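A minimal NumPy sketch of OLS estimation (simulated data; the design and noise level are made up for illustration):

```python
# OLS for the GLM y = X beta + e, assuming i.i.d. errors:
# beta_hat = (X'X)^{-1} X'y, computed here via the pseudoinverse.
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 3                      # N scans, p regressors
X = np.column_stack([np.ones(N), rng.standard_normal((N, p - 1))])
beta_true = np.array([2.0, 1.5, -0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(N)

beta_hat = np.linalg.pinv(X) @ y   # minimises the sum of squared errors
residuals = y - X @ beta_hat
```

`np.linalg.lstsq(X, y, rcond=None)` gives the same estimate and, like the pseudoinverse, also handles rank-deficient designs.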
11
A geometric perspective on the GLM: the design space is spanned by the columns x1, x2 of X. The errors are smallest (shortest error vector) when e is orthogonal to the design space; this is the ordinary least squares (OLS) solution.
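This orthogonality can be checked numerically (illustrative sketch, arbitrary simulated data):

```python
# For any OLS fit, the residual vector e is orthogonal to every
# column of X: X'e = 0 up to floating-point precision.
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = rng.standard_normal(50)

beta_hat = np.linalg.pinv(X) @ y
e = y - X @ beta_hat
print(np.abs(X.T @ e).max())       # effectively zero
```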
12
Mass-univariate analysis: voxel-wise GLM. (1) Transform the data for all subjects and conditions (sensor-to-voxel transform, giving an evoked-response image over time). (2) SPM: analyse the data at each voxel.
13
Hypothesis testing. Null hypothesis H0: typically what we want to disprove (no effect). The alternative hypothesis HA expresses the outcome of interest. To test a hypothesis we construct a test statistic T, which summarises the evidence about H0: typically it is small in magnitude when H0 is true and large when it is false. We need to know the distribution of T under the null hypothesis (the null distribution of T).
14
Hypothesis testing. p-value: summarises the evidence against H0; it is the chance of observing a value more extreme than t under the null distribution of T. Significance level α: the acceptable false positive rate; the corresponding threshold u_α on T controls the false positive rate. Conclusion: we reject the null hypothesis in favour of the alternative if t > u_α.
15
Contrast & t-test. A contrast c specifies a linear combination of the parameter vector: cᵀβ. ERP example: faces < scrambled? cᵀ = [-1 +1]; test H0: cᵀβ = 0. t = contrast of estimated parameters / standard deviation estimate, t = cᵀβ̂ / sqrt(σ̂² cᵀ(XᵀX)⁻¹c), giving an SPM{t} over time and space.
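A sketch of the t-contrast computation for a faces-vs-scrambled style design. The data are simulated; the amplitudes and noise are made up, and only the 70-trials-per-condition count comes from the ERP example:

```python
# t-statistic for the contrast c' = [-1 +1] on a two-condition GLM:
# t = c'beta_hat / sqrt(sigma2_hat * c'(X'X)^{-1}c)
import numpy as np

rng = np.random.default_rng(3)
n = 70                                           # trials per condition
X = np.kron(np.eye(2), np.ones((n, 1)))          # faces, scrambled regressors
beta_true = np.array([1.0, 1.3])                 # made-up condition amplitudes
y = X @ beta_true + rng.standard_normal(2 * n)

beta = np.linalg.pinv(X) @ y
e = y - X @ beta
sigma2 = e @ e / (len(y) - np.linalg.matrix_rank(X))
c = np.array([-1.0, 1.0])                        # scrambled - faces
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
```

In the mass-univariate setting this computation is repeated at every point in space and time, producing the SPM{t} map.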
16
T-test: summary. The t-test is a signal-to-noise measure (the ratio of an estimate to its standard deviation). T-contrasts are simple combinations of the betas; the t-statistic does not depend on the scaling of the regressors or the scaling of the contrast. Hypotheses: H0: cᵀβ = 0 versus the alternative HA: cᵀβ > 0.
17
Extra-sum-of-squares & F-test. Model comparison: full model X = [X0 X1], or reduced model X0? Null hypothesis H0: the true model is X0 (the reduced model). Test statistic: the ratio of explained to unexplained variability (error), F = ((RSS0 - RSS)/ν1) / (RSS/ν2), where ν1 = rank(X) - rank(X0), ν2 = N - rank(X), and RSS0, RSS are the residual sums of squares of the reduced and full models.
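The extra-sum-of-squares F-statistic can be sketched as follows (simulated data; the models are made up for illustration):

```python
# F = ((RSS0 - RSS)/nu1) / (RSS/nu2) for nested models X0 (reduced)
# and X = [X0 X1] (full), nu1 = rank(X) - rank(X0), nu2 = N - rank(X).
import numpy as np

rng = np.random.default_rng(4)
N = 80
X0 = np.ones((N, 1))                             # reduced model: intercept only
X1 = rng.standard_normal((N, 2))                 # extra regressors under test
X = np.column_stack([X0, X1])                    # full model
y = X @ np.array([1.0, 0.8, -0.6]) + rng.standard_normal(N)

def rss(M, y):
    """Residual sum of squares of the OLS fit of y on M."""
    r = y - M @ (np.linalg.pinv(M) @ y)
    return r @ r

nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = N - np.linalg.matrix_rank(X)
F = ((rss(X0, y) - rss(X, y)) / nu1) / (rss(X, y) / nu2)
```

Because the models are nested, RSS0 ≥ RSS always holds, so F is never negative.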
18
F-test & multidimensional contrasts. The F-test assesses multiple linear hypotheses: H0: the true model is X0 (full or reduced model?). Example with X1 containing regressors 3 and 4: cᵀ = [0 0 1 0; 0 0 0 1], H0: β3 = β4 = 0, i.e. test H0: cᵀβ = 0.
19
F-test: summary. F-tests can be viewed as testing for the additional variance explained by a larger model compared with a simpler (nested) model: model comparison. When testing a uni-dimensional contrast with an F-test, for example β1 - β2, the result is the same as testing β2 - β1: it is exactly the square of the corresponding t-test, testing for both positive and negative effects. F tests a weighted sum of squares of one or several combinations of the regression coefficients. In practice, we do not have to explicitly separate X into [X1 X2], thanks to multidimensional contrasts.
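The F = t² relationship for a one-dimensional contrast can be verified directly (illustrative simulated data):

```python
# For a uni-dimensional contrast, the extra-sum-of-squares F equals
# the square of the corresponding t-statistic.
import numpy as np

rng = np.random.default_rng(5)
N = 60
X = np.column_stack([np.ones(N), rng.standard_normal(N)])
y = X @ np.array([0.5, 1.0]) + rng.standard_normal(N)

# t-test of the second regressor, contrast c' = [0 1]
beta = np.linalg.pinv(X) @ y
e = y - X @ beta
dof = N - np.linalg.matrix_rank(X)
sigma2 = e @ e / dof
c = np.array([0.0, 1.0])
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))

# F-test of the same effect: full model vs intercept-only reduced model
X0 = X[:, :1]
rss0 = np.sum((y - X0 @ (np.linalg.pinv(X0) @ y)) ** 2)
rss = e @ e
F = ((rss0 - rss) / 1) / (rss / dof)
print(np.isclose(F, t ** 2))       # True
```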
20
Orthogonal regressors (Venn-diagram figure: variability in Y, no shared variance between regressors)
21-27
Correlated regressors (Venn-diagram figure repeated across slides 21-27: variability in Y, with shared variance between the correlated regressors)
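The cost of correlated regressors can be quantified: the covariance of the parameter estimates is σ²(XᵀX)⁻¹, so shared variance between regressors inflates the variance of each β̂. A NumPy sketch with made-up designs:

```python
# Variance of beta_hat is sigma^2 * diag((X'X)^{-1}): correlation
# between regressors inflates it relative to an orthogonal design.
import numpy as np

rng = np.random.default_rng(7)
N = 200
x1 = rng.standard_normal(N)
x2_orth = rng.standard_normal(N)                 # approximately orthogonal to x1
x2_corr = 0.95 * x1 + np.sqrt(1 - 0.95 ** 2) * rng.standard_normal(N)

def beta_var(X):
    """diag((X'X)^{-1}): Var(beta_hat) up to the factor sigma^2."""
    return np.diag(np.linalg.inv(X.T @ X))

v_orth = beta_var(np.column_stack([x1, x2_orth]))
v_corr = beta_var(np.column_stack([x1, x2_corr]))
print(v_corr[0] / v_orth[0])       # variance inflation from shared variance
```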
28
Summary. Mass-univariate GLM: fit GLMs with a design matrix X to the data at each point in space to estimate local effect sizes β; the GLM is a very general approach (one-sample, two-sample and paired t-tests, ANCOVAs, ...). Hypothesis testing framework: contrasts, t-tests, F-tests.
29
Multiple covariance components. Enhanced noise model at voxel i: the error covariance is modelled as a linear combination of covariance components, V = λ1·Q1 + λ2·Q2, with error covariance components Q and hyperparameters λ estimated by ReML (restricted maximum likelihood).
30
Weighted least squares (WLS). Let W = V^(-1/2). Then the whitened data and design, WY and WX, have i.i.d. errors, and WLS is equivalent to OLS on the whitened data and design: β̂ = ((WX)ᵀWX)⁻¹(WX)ᵀWY.
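A sketch of the equivalence between WLS on whitened data and the generalized least-squares formula; the AR(1)-style covariance V is made up for illustration:

```python
# With error covariance V and W = V^{-1/2}, OLS on (Wy, WX) equals the
# generalized least-squares estimate (X'V^{-1}X)^{-1} X'V^{-1} y.
import numpy as np

rng = np.random.default_rng(6)
N = 50
X = np.column_stack([np.ones(N), rng.standard_normal(N)])
V = 0.5 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # AR(1)-like
y = X @ np.array([1.0, 2.0]) + np.linalg.cholesky(V) @ rng.standard_normal(N)

# Symmetric whitening matrix W = V^{-1/2} via eigendecomposition
w, U = np.linalg.eigh(V)
W = U @ np.diag(w ** -0.5) @ U.T

beta_wls = np.linalg.pinv(W @ X) @ (W @ y)        # OLS on whitened data/design
Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(np.allclose(beta_wls, beta_gls))            # True
```

The equivalence holds because W is symmetric, so ((WX)ᵀWX)⁻¹(WX)ᵀWy = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y.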
31
Modelling the measured data. Why? To make inferences about effects of interest. How? (1) Decompose the data into effects and error, using a linear model built e.g. from the stimulus function: data = effects + error. (2) Form a statistic from the estimates of the effects and of the error.