Covariance components II: autocorrelation & nonsphericity

Covariance components II: autocorrelation & nonsphericity (Alexa Morcom, Oct. 2003)

Methods by blondes vs. mullets?

Nonsphericity: what is it and why do we care? We need to know the expected behaviour of the parameter estimates under H0: less intrinsic variability means fewer df, so inference becomes liberal. The null distribution is assumed normal, and the errors are further assumed to be 'iid', identically and independently distributed. "Estimates of variance components are used to compute statistics, and variability in these estimates determines the statistic's d.f."

An illustration: a GLM with just 2 observations, y = Xb + e, i.e. [y1; y2] = X[b1; b2] + [e1; e2]. The iid assumption is e ~ N(0, σ²I); more generally e ~ N(0, C_e), where C_e is the error covariance matrix.

Spherical errors (scatter of e2 against e1): C_e = [1 0; 0 1]

Non-identical errors (scatter of e2 against e1): C_e = [4 0; 0 1]

Non-independent errors (scatter of e2 against e1): C_e has non-zero off-diagonal (covariance) terms.
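To make the three cases concrete, here is a minimal numpy sketch (not part of the original slides) that draws error pairs under each covariance structure and checks that the sample covariance recovers C_e; the values for the non-independent case (unit variances, covariance 0.5) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of simulated (e1, e2) error pairs

cases = {
    "spherical":       np.array([[1.0, 0.0], [0.0, 1.0]]),   # iid
    "non-identical":   np.array([[4.0, 0.0], [0.0, 1.0]]),   # unequal variances
    "non-independent": np.array([[1.0, 0.5], [0.5, 1.0]]),   # assumed example values
}

for name, Ce in cases.items():
    e = rng.multivariate_normal(mean=[0.0, 0.0], cov=Ce, size=n)
    print(name)
    print(np.cov(e.T).round(2))   # sample covariance is close to Ce
```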

Varieties of nonsphericity in fMRI:
- Temporal autocorrelation (1st level)
- Correlated repeated measures (2nd level)
- Unequal variances between groups (2nd level)
- Unequal within-subject variances (1st level*)
- Unbalanced designs at the 1st level*
- (Spatial 'nonsphericity', i.e. smoothness)

A traditional psychology example: repeated measures of RT across subjects. RTs to levels 2 & 3 may be more highly correlated than those to levels 1 & 2.

Sphericity and compound symmetry. With n subjects and k treatments, the sample variance/covariance matrix has entries s_ij:
[ s11 s12 … s1k ]
[ s21 s22 … s2k ]
[  …   …  …  …  ]
[ sk1 sk2 … skk ]
Compound symmetry means equal treatment variances (σ² on the diagonal) and equal treatment covariances (ρσ² off the diagonal):
[ σ²  ρσ² … ρσ² ]
[ ρσ² σ²  … ρσ² ]
[  …   …  …  …  ]
[ ρσ² ρσ² … σ²  ]
Sphericity itself requires the variance of the difference between every pair of levels to be constant, which is not easy to see by inspection. Compound symmetry can be checked by inspection: treatment variances equal and treatment covariances equal.
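Sphericity can be checked directly from a sample covariance matrix: for each pair of levels (i, j), Var(y_i − y_j) = s_ii + s_jj − 2·s_ij, and sphericity requires these pairwise difference variances to be equal. A minimal sketch with made-up numbers:

```python
import numpy as np
from itertools import combinations

# hypothetical sample covariance matrix for k = 3 treatment levels
S = np.array([[10.0,  4.0,  2.0],
              [ 4.0, 12.0,  8.0],
              [ 2.0,  8.0, 11.0]])

# Var(y_i - y_j) = s_ii + s_jj - 2*s_ij; sphericity holds iff these are all equal
for i, j in combinations(range(S.shape[0]), 2):
    var_diff = S[i, i] + S[j, j] - 2 * S[i, j]
    print(f"Var(y{i+1} - y{j+1}) = {var_diff:.1f}")
# here the values (14.0, 17.0, 7.0) differ, so sphericity is violated
```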

The traditional psychology solution. Sphericity is the most liberal condition under which the ratio of sums of squares is distributed as F. A measure of departure from sphericity is ε: the SS ratio is still approximated by an F distribution, but with Greenhouse-Geisser corrected d.f. (based on the Satterthwaite approximation): F[(k-1)ε, (n-1)(k-1)ε]. This is a fudge in SPSS because ε must itself be estimated, and the estimate is imprecise (more later…), so the correction is slightly liberal.
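The departure measure ε can be computed from the sample covariance matrix using Box's eigenvalue formula (the quantity that the Greenhouse-Geisser correction estimates). A minimal sketch, not SPSS's exact routine:

```python
import numpy as np

def gg_epsilon(S):
    """Box / Greenhouse-Geisser epsilon from a k x k sample covariance matrix S.
    epsilon = 1 under sphericity and falls towards 1/(k-1) as sphericity is violated."""
    k = S.shape[0]
    P = np.eye(k) - np.ones((k, k)) / k   # centre over the k treatment levels
    lam = np.linalg.eigvalsh(P @ S @ P)   # eigenvalues of the double-centred S
    return lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum())

# the corrected test then uses F[(k-1)*eps, (n-1)*(k-1)*eps] degrees of freedom
```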

A more general GLM. OLS: y = Xb + e. Weighted/generalised least squares: Wy = WXb + We, weighting by W so that Cov(We) is iid or known. Then b_w = (WX)⁻Wy and C_b = (WX)⁻ W C_e Wᵀ (WX)⁻ᵀ, i.e. the covariance of the parameter estimates depends on both the design and the error structure. If C_e is iid with variance σ², then W = I and C_e = σ²I. If there is a single covariance component, it can be estimated directly; otherwise estimation is iterative, or C_e must be determined first...
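A minimal numpy sketch of this weighted estimation step, assuming C_e is known and choosing W = C_e^(-1/2) (the prewhitening choice discussed on the next slide); function and variable names are mine, not SPM's:

```python
import numpy as np

def gls_fit(y, X, Ce):
    """Weighted/GLS fit of y = X b + e with Cov(e) = Ce, using W = Ce^(-1/2)."""
    vals, vecs = np.linalg.eigh(Ce)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # whitening matrix Ce^(-1/2)
    WX = W @ X
    pWX = np.linalg.pinv(WX)                    # pseudoinverse (WX)^-
    b_w = pWX @ W @ y                           # b_w = (WX)^- W y
    C_b = pWX @ W @ Ce @ W.T @ pWX.T            # C_b = (WX)^- W Ce W' (WX)^-'
    # with W = Ce^(-1/2), W Ce W' = I, so C_b reduces to (X' Ce^(-1) X)^(-1)
    return b_w, C_b
```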

Colouring & whitening. Imposed 'temporal smoothing': W = S (SPM99), giving Sy = SXb + Se and C_b = (SX)⁻ S C_e Sᵀ (SX)⁻ᵀ. S is known, and the intrinsic C_e is assumed to be 'swamped' by it; the resulting d.f. adjustment is the Satterthwaite approximation (but better behaved than Greenhouse-Geisser). Prewhitening: if C_e is assumed known, premultiply by W = C_e^(-1/2) (SPM2); b estimated by OLS on the whitened data is then the best estimator, and C_b = (Xᵀ C_e⁻¹ X)⁻¹.

Effects on statistics: t = cᵀb / (cᵀ C_b c)^(1/2). Estimation is better, with increased precision of b. The minimum-covariance estimator maximises t, since C_b sits in the denominator (and depends on X and C_e; with S the denominator is 'bigger'). The d.f. are determined precisely as a function of W (i.e. C_e) and the design matrix X (fewer d.f. if S is used).
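A sketch of how the t statistic and the effective d.f. can be computed for a given weighting W, design X and error correlation V (assumed known up to a scale σ²); the recipe follows the Satterthwaite approximation referred to above, but the names and exact scaling are illustrative rather than SPM's implementation:

```python
import numpy as np

def t_and_effective_df(y, X, c, W, V):
    """t = c'b / sqrt(c' C_b c) and Satterthwaite d.f. for the model
    Wy = WX b + We, assuming Cov(e) = sigma^2 * V with V known."""
    WX = W @ X
    pWX = np.linalg.pinv(WX)                   # (WX)^-
    b = pWX @ W @ y                            # parameter estimates
    r = W @ y - WX @ b                         # residuals of the weighted model
    R = np.eye(WX.shape[0]) - WX @ pWX         # residual-forming matrix
    WVW = W @ V @ W.T                          # correlation of We (up to sigma^2)
    sigma2 = (r @ r) / np.trace(R @ WVW)       # unbiased variance estimate
    C_b = sigma2 * (pWX @ WVW @ pWX.T)         # covariance of the estimates
    t = (c @ b) / np.sqrt(c @ C_b @ c)
    df = np.trace(R @ WVW) ** 2 / np.trace(R @ WVW @ R @ WVW)   # Satterthwaite
    return t, df
```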

Estimating multiple covariance components. Doing this at every voxel would require ReML at every voxel (my contract is too short…). As in SPSS, such voxel-wise estimation of C_e would be imprecise, and inference ultimately too liberal: C_e = rrᵀ + X C_b Xᵀ (the critical 'circularity'…). To avoid this, SPM2 pools the covariance estimation spatially (across voxels). This way the C_e estimate is precise, and (prewhitened OLS) estimation proceeds non-iteratively.
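A rough sketch of the pooling idea only (the actual SPM2 procedure applies ReML to the pooled quantity using the design and the covariance components Q_i); the normalisation and the use of all voxels here are simplifying assumptions:

```python
import numpy as np

def pooled_residual_covariance(Y, X):
    """Pool one scan-by-scan covariance estimate across voxels.
    Y: (scans x voxels) data, X: (scans x regressors) design matrix."""
    R = np.eye(X.shape[0]) - X @ np.linalg.pinv(X)   # residual-forming matrix
    res = R @ Y                                      # residuals at every voxel
    res = res / res.std(axis=0, keepdims=True)       # equalise voxel-wise scale
    return (res @ res.T) / Y.shape[1]                # average outer product over voxels
```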

1st level nonsphericity. Model C_e as a linear combination of covariance basis functions: C_e(λ) = Σ_i λ_i Q_i = λ1 Q1 + λ2 Q2. These model the time-series autocorrelations in fMRI (low-frequency 1/f noise is removed by the high-pass filter): Q1 is white noise and Q2 is a lag-1 autoregressive AR(1) component.
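A sketch of what the two covariance basis functions look like for a time series of T scans; the exact form of SPM's AR(1) component differs in detail, so Q2 here is just the generic lag-1 component:

```python
import numpy as np

def first_level_Q(T):
    """Covariance components for T scans: Q1 = white noise, Q2 = lag-1 term."""
    Q1 = np.eye(T)
    off = np.ones(T - 1)
    Q2 = np.diag(off, k=1) + np.diag(off, k=-1)   # ones on the first off-diagonals
    return Q1, Q2

# Ce is then modelled as lambda1*Q1 + lambda2*Q2, with the lambdas estimated by ReML
```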

[Figure: the estimated C_e alongside the basis matrices Q1 and Q2.]

2nd level nonsphericity. Here we model unequal variance across measures and/or unequal covariance between measures: C_e(λ) = Σ_i λ_i Q_i = λ1 Q1 + λ2 Q2 + … . The number of basis functions depends on the number of measures and on the options selected ('Nonsphericity?', 'Correlated repeated measures?').

With 3 measures there are 6 components: one for the variance of each measure across all subjects (3 diagonal components, Q1-Q3) and one for the covariance of each pair of measures across all subjects (3 off-diagonal components, Q4-Q6).
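A sketch of how those six components could be constructed for n subjects and k = 3 measures, assuming the data vector is ordered with the k measures grouped within each subject (function and variable names are mine):

```python
import numpy as np
from itertools import combinations

def second_level_Q(n_subjects, k_measures):
    """One variance component per measure and one covariance component per pair,
    replicated across subjects via a Kronecker product."""
    I_n = np.eye(n_subjects)
    Qs = []
    for m in range(k_measures):                       # diagonal components (variances)
        D = np.zeros((k_measures, k_measures))
        D[m, m] = 1.0
        Qs.append(np.kron(I_n, D))
    for i, j in combinations(range(k_measures), 2):   # off-diagonal components (covariances)
        D = np.zeros((k_measures, k_measures))
        D[i, j] = D[j, i] = 1.0
        Qs.append(np.kron(I_n, D))
    return Qs   # for k = 3 this gives Q1-Q3 (variances) and Q4-Q6 (covariances)
```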

What difference does it make? SPM99 OLS applied incorrectly, assuming iid errors: big t, lots of d.f., liberal. Worsley & Friston's SPM99 method with the Satterthwaite d.f. correction: smaller t, fewer d.f., valid but not ideal (conservative). SPM2's Gauss-Markov (ideal) estimator with prewhitening: the full number of d.f. along with the correct t value.

Limitations of the 2-level approach. The hierarchical model is y = X(1)b(1) + e(1) with b(1) = X(2)b(2) + e(2), so y = X(1)X(2)b(2) + X(1)e(2) + e(1) and Cov(y) = X(1)Ce(2)X(1)ᵀ + Ce(1) (which goes into ReML). The 2-stage 'summary statistic' approach assumes that the 'mixed effects' covariance components are separable at the two levels; specifically, it assumes the design X and the variance are the same for all subjects/sessions, even if nonsphericity is modelled at each level.
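The covariance decomposition on this slide can be written down directly; a trivial sketch, with hypothetical inputs, just to make the composition explicit:

```python
import numpy as np

def implied_data_covariance(X1, Ce2, Ce1):
    """Cov(y) implied by y = X1 b1 + e1 and b1 = X2 b2 + e2:
    Cov(y) = X1 Ce2 X1' + Ce1 (the quantity passed to ReML)."""
    return X1 @ Ce2 @ X1.T + Ce1
```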