1
Practical aspects of… Statistical Analysis of M/EEG Sensor- and Source-Level Data | Jason Taylor, MRC Cognition and Brain Sciences Unit (CBU), Cambridge Centre for Ageing and Neuroscience (CamCAN), Cambridge, UK | jason.taylor@mrc-cbu.cam.ac.uk | http://imaging.mrc-cbu.cam.ac.uk/meg (wiki) | 14 May 2012 | London | Thanks to Rik Henson and Dan Wakeman @ CBU
2
Overview | The SPM approach to M/EEG Statistics: a mass-univariate statistical approach to inference regarding effects in space/time/frequency (using replications across trials or subjects). In Sensor Space: 2D Time-Frequency; 3D Topography-by-Time. In Source Space: 3D Time-Frequency contrast images; 4-ish-D space by (factorised) time. Alternative approaches: SnPM, PPM, etc.
3
Example Datasets
4
CTF Multimodal Face-Evoked Dataset | SPM8 Manual (Rik Henson) | CTF MEG/ActiveTwo EEG: 275 MEG, 128 EEG, 3T fMRI (with nulls), 1mm³ sMRI, 2 sessions. Stimuli: ~160 face trials/session, ~160 scrambled trials/session. Group: N=12 subjects (Henson et al, 2009 a,b,c). Chapter 37, SPM8 Manual
5
Neuromag Faces Dataset | Dan Wakeman & Rik Henson @ CBU | Biomag 2010 Award-Winning Dataset! Neuromag MEG & EEG Data: 102 Magnetometers, 204 Planar Gradiometers, 70 EEG, HEOG, VEOG, ECG, 1mm³ sMRI, 6 sessions. Stimuli: faces & scrambled-face images (480 trials total). Group: N=18 (Henson et al, 2011). [Trial timing: 400-600 ms, 800-1000 ms, 1700 ms]
6
Neuromag Lexical Decision Dataset | Taylor & Henson @ CBU | Neuromag MEG Data: 102 Magnetometers, 204 Planar Gradiometers, HEOG, VEOG, ECG, 1mm³ sMRI, 4 sessions. Stimuli: words & pseudowords presented visually (480 trials total). Group: N=18. Taylor & Henson (submitted)
7
The Multiple Comparison Problem
8
Where is the effect? The curse of too much data
9
The Multiple Comparison Problem The more comparisons we conduct, the more Type I errors (false positives) we will make when the Null Hypothesis is true. * Must consider the Familywise (vs. per-comparison) Error Rate. Comparisons are often made implicitly, e.g., by viewing ("eye-balling") data before selecting a time-window or set of channels for statistical analysis. RFT correction depends on the search volume. -> When is there an effect in time (1D), e.g., in the GFP?
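As an illustration of the 1D case, a minimal sketch (in Python/NumPy, not part of the original slides) of reducing sensor-by-time data to a global field power (GFP) time course; the channel count, time points, and data are assumptions.

```python
import numpy as np

def global_field_power(data):
    """Global field power: the spatial standard deviation across channels
    at each time point. `data` has shape (n_channels, n_times)."""
    return data.std(axis=0)

# Simulated evoked data (assumed: 70 channels, 300 time points)
rng = np.random.default_rng(0)
evoked = rng.normal(size=(70, 300))
gfp = global_field_power(evoked)   # shape (300,): one value per time point
```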
10
The Multiple Comparison Problem The more comparisons we conduct, the more Type I errors (false positives) we will make when the Null Hypothesis is true. * Must consider the Familywise (vs. per-comparison) Error Rate. Comparisons are often made implicitly, e.g., by viewing ("eye-balling") data before selecting a time-window or set of channels for statistical analysis. RFT correction depends on the search volume. -> When/at what frequency is there an effect in time/frequency (2D)?
11
The Multiple Comparison Problem The more comparisons we conduct, the more Type I errors (false positives) we will make when the Null Hypothesis is true. * Must consider the Familywise (vs. per-comparison) Error Rate. Comparisons are often made implicitly, e.g., by viewing ("eye-balling") data before selecting a time-window or set of channels for statistical analysis. RFT correction depends on the search volume. -> When/where is there an effect in sensor-topography space/time (3D)?
12
The Multiple Comparison Problem The more comparisons we conduct, the more Type I errors (false positives) we will make when the Null Hypothesis is true. * Must consider the Familywise (vs. per-comparison) Error Rate. Comparisons are often made implicitly, e.g., by viewing ("eye-balling") data before selecting a time-window or set of channels for statistical analysis. RFT correction depends on the search volume. -> When/where is there an effect in source space/time (4-ish-D)?
13
The Multiple Comparison Problem The more comparisons we conduct, the more Type I errors (false positives) we will make when the Null Hypothesis is true. * Must consider the Familywise (vs. per-comparison) Error Rate. Comparisons are often made implicitly, e.g., by viewing ("eye-balling") data before selecting a time-window or set of channels for statistical analysis. RFT correction depends on the search volume. Random Field Theory (RFT) is a method for correcting for multiple statistical comparisons within N-dimensional spaces (for parametric statistics, e.g., Z-, T-, F-statistics). * Takes the smoothness of the images into account. Worsley et al. (1996). Human Brain Mapping, 4:58-73
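For reference, one standard form of the RFT familywise correction (from the random-field literature; not reproduced from the slide): for a Gaussian (Z) field, the FWE p-value of the peak statistic is approximated by the expected Euler characteristic of the thresholded field, summed over the resel counts R_d of the search volume (which is where the image smoothness enters). T and F fields have analogous EC densities.

```latex
P\!\left(\max_x Z(x) > u\right) \;\approx\; \mathbb{E}\!\left[\chi(A_u)\right]
  \;=\; \sum_{d=0}^{D} R_d\,\rho_d(u), \quad \text{with, e.g.,}
\qquad
\rho_0(u) = 1 - \Phi(u), \qquad
\rho_1(u) = \frac{(4\ln 2)^{1/2}}{2\pi}\, e^{-u^2/2},
\qquad
\rho_2(u) = \frac{4\ln 2}{(2\pi)^{3/2}}\, u\, e^{-u^2/2}, \qquad
\rho_3(u) = \frac{(4\ln 2)^{3/2}}{(2\pi)^{2}}\,(u^2 - 1)\, e^{-u^2/2}.
```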
14
GLM: Condition Effects after removing variance due to confounds. Each trial is modelled, with one regressor per trial-type (6) and per confound (4); the beta_00* image volumes reflect (adjusted) condition effects. Henson et al., 2008, NeuroImage. Within-Subject (1st-level) model -> one image per trial -> one covariate value per trial. Also: Group Analysis (2nd-level) -> one image per subject per condition -> one covariate value per subject per condition
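A minimal sketch of the mass-univariate GLM idea behind this slide, not the SPM implementation: one design matrix with condition and confound regressors, fit by ordinary least squares at a single sensor/voxel and time point. The data, trial counts, and regressors here are simulated assumptions.

```python
import numpy as np

# Assumed: 300 trials, 6 trial-type regressors, 4 confound regressors
n_trials = 300
rng = np.random.default_rng(1)

conditions = rng.integers(0, 6, size=n_trials)
X_cond = np.eye(6)[conditions]                # one column per trial type
X_conf = rng.normal(size=(n_trials, 4))       # e.g. movement/artefact covariates
X = np.column_stack([X_cond, X_conf])         # design matrix (trials x regressors)

# y: one value per trial at a given sensor/voxel and time point
y = rng.normal(size=n_trials)

# Ordinary least squares: condition betas are adjusted for the confounds
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cond_effects = beta[:6]                       # analogous to the beta_00* images
```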
15
Sensor-space analyses
16
Where is an effect in time-frequency space? CTF Multimodal Faces | 1 subject (1st-level analysis), 1 MEG channel | Morlet wavelet projection -> 1 t-x-f image per trial | Contrast: Faces > Scrambled | Kilner et al., 2005, Neuroscience Letters
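A minimal sketch of a single-channel Morlet time-frequency projection for one trial, in plain NumPy; the sampling rate, trial length, frequency grid, and wavelet parameters are assumptions, not the values used in Kilner et al.

```python
import numpy as np

def morlet_tf(signal, sfreq, freqs, n_cycles=7):
    """Time-frequency power of a 1D signal via convolution with complex
    Morlet wavelets. Returns an array of shape (n_freqs, n_times).
    The signal must be longer than the longest wavelet."""
    power = np.zeros((len(freqs), signal.size))
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)               # temporal width
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))     # unit-energy normalisation
        conv = np.convolve(signal, wavelet, mode='same')
        power[fi] = np.abs(conv) ** 2
    return power

# One trial, one channel (assumed: 2 s at 250 Hz); 6-90 Hz as on the slide
sfreq, freqs = 250.0, np.arange(6, 91, 2)
trial = np.random.default_rng(2).normal(size=int(2 * sfreq))
tf_image = morlet_tf(trial, sfreq, freqs)                  # one t-x-f image per trial
```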
17
Where is an effect in time-frequency space? Neuromag Faces | 18 subjects, 1 channel (2nd-level analysis) | Faces > Scrambled: effect at ~100-220 ms, 8-18 Hz [time-frequency maps for MEG (Mag) and EEG; axes: 6-90 Hz, -500 to +1000 ms]
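A minimal sketch of the corresponding 2nd-level test: a one-sample t-test across subjects at every point of the per-subject Faces-minus-Scrambled time-frequency images at one channel. The shapes are assumptions, and the resulting p-values are uncorrected; FWE control (RFT or permutation) would still be applied.

```python
import numpy as np
from scipy import stats

# Assumed: per-subject Faces-minus-Scrambled time-frequency images at one channel,
# stacked as (n_subjects, n_freqs, n_times)
rng = np.random.default_rng(3)
diff_images = rng.normal(size=(18, 43, 375))

# One-sample t-test across subjects at every time-frequency point (2nd level)
t_map, p_map = stats.ttest_1samp(diff_images, popmean=0, axis=0)
```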
18
Where is an effect in sensor-time space? 3D Topography-x-Time Image volume (EEG) | CTF Multimodal Faces | 1 subject, ALL EEG channels | Each trial: topographic projection (2D) for each time point (1D) = topo-time image volume (3D) | Faces vs Scrambled
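A minimal sketch of building a topography-by-time image volume: interpolate the channel values onto a regular 2D grid at each time point and stack the grids along time. The sensor positions, grid size, and data here are simulated assumptions (points outside the sensor array come out as NaN and would be masked).

```python
import numpy as np
from scipy.interpolate import griddata

# Assumed: 2D sensor positions in the flattened topography plane, and
# single-trial data of shape (n_channels, n_times)
rng = np.random.default_rng(4)
n_channels, n_times, grid_size = 70, 200, 32
pos = rng.uniform(-1, 1, size=(n_channels, 2))
data = rng.normal(size=(n_channels, n_times))

# Regular x-y grid for the topographic projection
gx, gy = np.meshgrid(np.linspace(-1, 1, grid_size), np.linspace(-1, 1, grid_size))

# Interpolate channel values onto the grid at every time point -> 3D (x, y, time) volume
volume = np.stack(
    [griddata(pos, data[:, t], (gx, gy), method='linear') for t in range(n_times)],
    axis=-1)   # shape (grid_size, grid_size, n_times): one image volume per trial
```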
19
Where is an effect in sensor-time space? Analysis over subjects (2nd level) | Neuromag Lexical Decision | Taylor & Henson (2008) Biomag | NOTE for MEG: complicated by variability in head position. SOLUTION(?): virtual transformation to the same position (across sessions, subjects) using Signal Space Separation (SSS; Taulu et al, 2005; Neuromag systems). Stats over 18 subjects, RMS of planar gradiometers. Results improved (i.e., more blobs, larger T values) with the head-position transform [panels: without vs. with transformation to Device Space]. Other evidence: ICA 'splitting' with movement.
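A minimal sketch of the planar-gradiometer RMS used for the group statistics on this slide (pairing and array shapes are assumptions). The SSS head-position transform itself is performed by the acquisition-system software and is not shown.

```python
import numpy as np

# Assumed: paired planar gradiometer data, shape (n_pairs, 2, n_times)
rng = np.random.default_rng(5)
grads = rng.normal(size=(102, 2, 300))

# Root-mean-square across the two orthogonal gradiometers in each pair,
# giving an orientation-insensitive amplitude per location and time point
rms = np.sqrt((grads ** 2).mean(axis=1))   # shape (102, n_times)
```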
20
Where is an effect in sensor-time space? Analysis over subjects (2nd level) | Neuromag Lexical Decision | Magnetometers: Pseudowords vs Words | PPM (P > .95 that effect > 0) and effect size [colour scale: -18 to +18 fT] | Taylor & Henson (submitted)
21
Source-space analyses
22
Where is an effect in source space (3D)? STEPS: (1) Estimate evoked/induced energy (RMS) at each dipole for a certain time-frequency window of interest (e.g., 100-220 ms, 8-18 Hz), for each condition (Faces, Scrambled) and for each sensor type OR fused modalities. (2) Write the data to a 3D image in MNI space, smoothing along the 2D cortical surface. (3) Smooth by a 3D Gaussian. (4) Submit to GLM. Henson et al., 2007, NeuroImage
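A minimal sketch of steps 1-3 of this pipeline: RMS energy per dipole over a time window, written into a 3D volume and smoothed with a 3D Gaussian before entering the GLM. The dipole-to-voxel mapping, grid size, and data are simulated assumptions; MNI-space interpolation and smoothing along the cortical surface are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Assumed: source time courses (n_dipoles, n_times) for one condition, already
# band-limited to the frequency window of interest, plus each dipole's voxel index
rng = np.random.default_rng(6)
n_dipoles, n_times = 8196, 150                  # 150 samples spanning the window of interest
src = rng.normal(size=(n_dipoles, n_times))
vox = rng.integers(0, 32, size=(n_dipoles, 3))  # dipole -> voxel mapping in a 32^3 grid

# (1) RMS energy per dipole over the time window of interest
energy = np.sqrt((src ** 2).mean(axis=1))

# (2) Write to a 3D image (here simply accumulating dipoles per voxel) ...
img = np.zeros((32, 32, 32))
np.add.at(img, (vox[:, 0], vox[:, 1], vox[:, 2]), energy)

# (3) ... and smooth with a 3D Gaussian before the GLM
img_smoothed = gaussian_filter(img, sigma=2)
```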
23
Where is an effect in source space (3D)? Neuromag Faces | RESULTS: Faces > Scrambled [panels: EEG, MEG, sensor fusion (E+MEG), fMRI] | Henson et al, 2011
24
Where is an effect in source space (3D)? Neuromag Faces | RESULTS: Faces > Scrambled [panels: fMRI; EEG+MEG+fMRI with fMRI priors; EEG+MEG+fMRI with group-optimised fMRI priors; mean and stats] | Henson et al, 2011
25
Where and When do effects emerge/disappear in source space (4-ish-D: time factorised)? Neuromag Lexical Decision | Taylor & Henson, submitted | Condition x Time-window Interactions: factorising time allows you to infer (rather than simply describe) when effects emerge or disappear. We've used a data-driven method (hierarchical cluster analysis) in sensor space to define time-windows of interest for source-space analyses: * high-dimensional distance between topography vectors * cut the cluster tree where the solution is stable over the longest range of distances
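A minimal sketch of the clustering idea: hierarchical clustering on the distances between per-time-point topography vectors, with changes in cluster label along time marking candidate time-window boundaries. The cut criterion here is a simple fixed fraction of the maximum linkage distance, purely for illustration; the slide's "most stable solution over the longest range of distances" criterion is not implemented, and the data are simulated.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Assumed: grand-average sensor topographies over time, shape (n_times, n_channels);
# each row is the topography vector at one time point
rng = np.random.default_rng(7)
topos = rng.normal(size=(300, 102))

# Hierarchical clustering on the (high-dimensional) distances between topographies
Z = linkage(topos, method='ward')

# Cut the tree at a chosen distance; contiguous runs of the same label
# suggest candidate time windows for the source-space GLM
labels = fcluster(Z, t=0.5 * Z[:, 2].max(), criterion='distance')
boundaries = np.flatnonzero(np.diff(labels)) + 1   # time indices where the label changes
```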
26
Where and When do effects emerge/disappear in source space (4-ish-D)? Neuromag Lexical Decision | Taylor & Henson, submitted | Condition x Time-window Interactions: factorising time allows you to infer (rather than simply describe) when effects emerge or disappear. We've used a data-driven method (hierarchical cluster analysis) in sensor space to define time-windows of interest for source-space analyses: * estimate sources in each sub-time-window * submit to GLM with conditions & time-windows as factors (here: Condition effects per time-window) [figure: source timecourses]
27
Where and When do effects emerge/disappear in source space (4-ish-D)? Neuromag Lexical Decision | Taylor & Henson, submitted | Condition x Time-window Interactions: factorising time allows you to infer (rather than simply describe) when effects emerge or disappear. We've used a data-driven method (hierarchical cluster analysis) in sensor space to define time-windows of interest for source-space analyses: * estimate sources in each sub-time-window * submit to GLM with conditions & time-windows as factors: Condition x Time-window interactions
28
Alternative Approaches
29
Alternative Approaches | Classical SPM approach: Caveats: (1) The inverse operator induces long-range error correlations (e.g., similar gain vectors from non-adjacent dipoles with similar orientation), making RFT correction conservative. (2) Distributions over subjects/voxels may not be Gaussian (e.g., if sparse, as in MSP). (Taylor & Henson, submitted; Biomag 2010)
30
Alternative Approaches | Non-Parametric Approach (SnPM) | CTF Multimodal Faces [map thresholded at p < .05 FWE] | Robust to non-Gaussian distributions; less conservative than RFT when dfs < 20. Caveats: no estimate of effect size (e.g., for power calculations, future experiments); exchangeability is difficult for more complex designs. (Taylor & Henson, Biomag 2010) | SnPM Toolbox by Holmes & Nichols: http://go.warwick.ac.uk/tenichols/software/snpm/
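A minimal sketch of the non-parametric (SnPM-style) logic for a one-sample, second-level design: sign-flip the per-subject difference images, recompute the t statistic, and use the maximum over voxels to build an FWE-controlling null distribution. The data and sizes are simulated assumptions; this is not the SnPM toolbox itself.

```python
import numpy as np

# Assumed: per-subject difference images (e.g. Faces minus Scrambled),
# shape (n_subjects, n_voxels); one-sample design, so sign-flipping is exchangeable
rng = np.random.default_rng(8)
diffs = rng.normal(size=(12, 5000))

def t_stat(x):
    """One-sample t statistic across subjects at every voxel."""
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

t_obs = t_stat(diffs)

# Null distribution of the maximum statistic over voxels (controls FWE)
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(diffs.shape[0], 1))
    max_null[i] = t_stat(diffs * signs).max()

threshold = np.quantile(max_null, 0.95)   # FWE-corrected p < .05 threshold
significant = t_obs > threshold
```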
31
Alternative Approaches | Posterior Probability Maps (PPMs): Bayesian Inference | CTF Multimodal Faces [map thresholded at p > .95 (γ > 1 SD)] | No need for RFT (no multiple comparison problem); threshold on the posterior probability of an effect greater than some size; can show effect size after thresholding. Caveats: assumes a Gaussian distribution (e.g., of the mean over voxels).
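A minimal sketch of PPM-style thresholding, assuming a Gaussian posterior at each voxel with the posterior mean and SD taken as given (the Bayesian estimation itself is not shown); the effect-size threshold γ is set to one SD of the posterior means purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Assumed: Gaussian posterior for the effect at each voxel, with posterior
# mean and standard deviation estimated elsewhere (e.g. by a Bayesian GLM)
rng = np.random.default_rng(9)
post_mean = rng.normal(size=5000)
post_sd = np.full(5000, 0.5)

gamma = post_mean.std()                                       # effect-size threshold (e.g. 1 SD)
prob = 1.0 - norm.cdf(gamma, loc=post_mean, scale=post_sd)    # P(effect > gamma) per voxel
ppm_mask = prob > 0.95                                        # voxels shown in the PPM
```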
32
Future Directions Extend RFT to 2D cortical surfaces (+1D time)? (e.g., Pantazis et al., 2005, NeuroImage) Novel approaches to controlling for multiple comparisons (e.g., 'unique extrema' in sensors: Barnes et al., 2011, NeuroImage) Go multivariate? To identify linear combinations of spatial (sensor or source) effects in time and space (e.g., Carbonnell, 2004, NeuroImage; but see Barnes et al., 2011) To detect spatiotemporal patterns in 3D images (e.g., Duzel, 2004, NeuroImage; Kherif, 2003, NeuroImage)
33
-- The end -- Thanks for listening Acknowledgements: Rik Henson (MRC CBU) Dan Wakeman (MRC CBU / now at MGH) Vladimir, Karl, and the FIL Methods Group More info: http://imaging.mrc-cbu.cam.ac.uk/meg (wiki) jason.taylor@mrc-cbu.cam.ac.uk