Hierarchical statistical analysis of fMRI data across runs/sessions/subjects/studies using BRAINSTAT/FMRISTAT
Jonathan Taylor, Stanford; Keith Worsley, McGill
What is BRAINSTAT / FMRISTAT?
FMRISTAT is a Matlab fMRI statistical analysis package; BRAINSTAT is a Python version.
Main components:
- FMRILM: linear model with AR(p) errors, bias correction, and smoothing of the autocorrelation to boost degrees of freedom*
- MULTISTAT: mixed effects linear model, ReML estimation by the EM algorithm, and smoothing of the random/fixed effects sd to boost degrees of freedom*. Key idea: IN: effect, sd, df, fwhm; OUT: effect, sd, df, fwhm
- STAT_SUMMARY: best of Bonferroni, non-isotropic random field theory, and DLM (Discrete Local Maxima)*
Treats magnitudes and delays in the same way.
*new theoretical results
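Below is a toy, per-voxel sketch of the IN/OUT idea: because each level returns the same four quantities it takes in (effect, sd, df, fwhm), the same combining step can be applied recursively across runs, sessions, subjects, and studies. The class, the pooling rule, and all names are illustrative only, not the BRAINSTAT/FMRISTAT interface.

from dataclasses import dataclass

@dataclass
class Level:
    effect: float   # contrast estimate (a single voxel, for simplicity)
    sd: float       # its standard error
    df: float       # degrees of freedom
    fwhm: float     # estimated spatial FWHM of the noise

def combine(inputs):
    """Toy fixed-effects pooling: same four fields in and out."""
    w = [1.0 / x.sd ** 2 for x in inputs]
    effect = sum(wi * x.effect for wi, x in zip(w, inputs)) / sum(w)
    return Level(effect=effect,
                 sd=(1.0 / sum(w)) ** 0.5,
                 df=sum(x.df for x in inputs),
                 fwhm=max(x.fwhm for x in inputs))   # placeholder rule

runs = [Level(1.2, 0.4, 110, 8.8), Level(0.9, 0.5, 110, 8.8)]
subject = combine(runs)          # combine(subjects) then works the same way
print(subject)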
FMRILM: smoothing of temporal autocorrelation
Variability in acor lowers the df, and the df depends on the contrast. Smoothing acor brings the df back up:

df_acor = df_residual (2 FWHM_acor^2 / FWHM_data^2 + 1)^(3/2)

1/df_eff = 1/df_residual + 2 acor(contrast of data)^2 / df_acor

[Figure: df_eff versus FWHM_acor for a single run with FWHM_data = 8.79 mm, residual df = 110, and a target of 100 df. Hot stimulus: acor(contrast of data) = 0.61, target reached at FWHM_acor ≈ 10.3 mm. Hot-warm stimulus: acor(contrast of data) = 0.79, target reached at FWHM_acor ≈ 12.4 mm.]
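A minimal numerical sketch of the two formulas above (the function name and interface are illustrative, not FMRILM's):

def effective_df(df_residual, acor_contrast, fwhm_acor, fwhm_data):
    """Effective df after spatially smoothing the autocorrelation.

    acor_contrast: autocorrelation of the contrast of the data,
    fwhm_acor:     FWHM (mm) of the kernel used to smooth the autocorrelation,
    fwhm_data:     spatial FWHM (mm) of the data.
    """
    df_acor = df_residual * (2.0 * (fwhm_acor / fwhm_data) ** 2 + 1.0) ** 1.5
    return 1.0 / (1.0 / df_residual + 2.0 * acor_contrast ** 2 / df_acor)

# Hot-warm example from the figure: residual df = 110, acor = 0.79,
# FWHM_data = 8.79 mm; FWHM_acor = 12.4 mm brings df_eff up to about 100.
print(effective_df(110, 0.79, 12.4, 8.79))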
MULTISTAT: smoothing of random/fixed FX sd

df_ratio = df_random (2 FWHM_ratio^2 / FWHM_data^2 + 1)^(3/2)

1/df_eff = 1/df_ratio + 1/df_fixed

e.g. df_random = 3, df_fixed = 4 × 110 = 440, FWHM_data = 8 mm:

[Figure: df_eff versus FWHM_ratio. A random effects analysis gives df_eff = 3 and a fixed effects analysis gives df_eff = 440; smoothing the random/fixed sd ratio to FWHM_ratio ≈ 19 mm reaches the target of 100 df.]
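The same calculation for MULTISTAT, as a sketch (illustrative names, not the MULTISTAT interface):

def multistat_df(df_random, df_fixed, fwhm_ratio, fwhm_data):
    """Effective df after smoothing the random/fixed effects sd ratio."""
    df_ratio = df_random * (2.0 * (fwhm_ratio / fwhm_data) ** 2 + 1.0) ** 1.5
    return 1.0 / (1.0 / df_ratio + 1.0 / df_fixed)

# Example from the slide: FWHM_ratio = 19 mm gives df_eff close to the 100 df target.
print(multistat_df(3, 440, 19.0, 8.0))    # ~100
# FWHM_ratio = 0 recovers the random effects analysis (df_eff ~ 3);
# FWHM_ratio -> infinity approaches the fixed effects analysis (df_eff -> 440).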
STAT_SUMMARY
High FWHM: use Random Field Theory; low FWHM: use Bonferroni; in between: use Discrete Local Maxima (DLM).
DLM can halve the P-value when FWHM ~3 voxels.

[Figure: P-value of the local maximum versus FWHM of the smoothing kernel (voxels), for Gaussian, T (20 df), and T (10 df) statistics. Curves: Bonferroni, Random Field Theory, Discrete Local Maxima, the true P-value, and Bonferroni with N = Resels.]
STAT_SUMMARY (continued)

[Figure: Gaussianized threshold versus FWHM of the smoothing kernel (voxels), for Gaussian, T (20 df), and T (10 df) statistics. Curves: Bonferroni, Random Field Theory, Discrete Local Maxima (DLM), the true threshold, and Bonferroni with N = Resels.]
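A simplified sketch of the STAT_SUMMARY idea of reporting the best (smallest) of several valid corrected P-values. Only Bonferroni and the leading 3D term of Gaussian random field theory are included; the DLM correction is omitted, and the function names and inputs (n_voxels, resels_3) are illustrative, not the stat_summary interface.

import math

def bonferroni_p(t, n_voxels):
    """Bonferroni-corrected P-value for a Gaussian peak over n_voxels tests."""
    return min(1.0, n_voxels * 0.5 * math.erfc(t / math.sqrt(2.0)))

def rft_p_3d(t, resels_3):
    """Approximate RFT P-value: expected Euler characteristic, 3D term only."""
    rho3 = (4 * math.log(2)) ** 1.5 / (2 * math.pi) ** 2 * (t ** 2 - 1) * math.exp(-t ** 2 / 2)
    return min(1.0, resels_3 * rho3)

def best_p(t, n_voxels, resels_3):
    # Each bound controls the family-wise error, so the smallest is the sharpest valid P-value.
    return min(bonferroni_p(t, n_voxels), rft_p_3d(t, resels_3))

# Low FWHM (many resels): Bonferroni wins; high FWHM (few resels): RFT wins.
print(best_p(4.5, n_voxels=50000, resels_3=1000.0))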
STAT_SUMMARY example: single run, hot-warm

[Figure: activation map; one region is detected by BON and DLM but not by RFT, another by DLM but not by BON or RFT.]
Estimating the delay of the response
Delay, or latency to the peak of the HRF, is approximated by a linear combination of two optimally chosen basis functions:

HRF(t + shift) ~ basis1(t) w1(shift) + basis2(t) w2(shift)

Convolve the bases with the stimulus, then add them to the linear model.

[Figure: the HRF and its shifted version HRF(t + shift), together with basis1 and basis2, plotted against t (seconds).]
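A small numerical sketch of this approximation. The true FMRILM bases are chosen optimally; here basis1 is a toy difference-of-gammas HRF and basis2 its temporal derivative, used only as stand-ins, with all names illustrative.

import math
import numpy as np

t = np.arange(0.0, 25.0, 0.1)

def hrf(t):
    """Toy difference-of-gammas HRF (peak near 5 s, undershoot near 15 s)."""
    return t ** 5 * np.exp(-t) / 120.0 - 0.1 * t ** 15 * np.exp(-t) / math.factorial(15)

basis1 = hrf(t)                      # stand-in for basis1
basis2 = np.gradient(basis1, t)      # temporal derivative as a stand-in for basis2

def weights(shift):
    """Least-squares w1, w2 so that basis1*w1 + basis2*w2 ~ HRF(t + shift)."""
    X = np.column_stack([basis1, basis2])
    w, *_ = np.linalg.lstsq(X, hrf(t + shift), rcond=None)
    return w

w1, w2 = weights(0.5)
# For small shifts HRF(t + shift) ~ HRF(t) + shift * HRF'(t), so w1 ~ 1 and
# w2/w1 ~ shift: the delay can be read off the two fitted coefficients.
print(w1, w2, w2 / w1)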
Example: FIAC data
16 subjects, 4 runs per subject (2 runs: event design; 2 runs: block design); 3 T, 200 frames, TR = 2.5 s.
4 conditions:
Same sentence, same speaker
Same sentence, different speaker
Different sentence, same speaker
Different sentence, different speaker
[Figure: modelled response to events, to blocks, and to the beginning of block/run.]
Design matrix for the block experiment
B1, B2 are basis functions for magnitude and delay:

[Figure: design matrix.]
1st level analysis
Motion and slice timing correction (using FSL).
5 conditions: beginning of block/run; same sentence, same speaker; same sentence, different speaker; different sentence, same speaker; different sentence, different speaker.
Smoothing of temporal autocorrelation to control the effective df (new!).
3 contrasts over these conditions: Sentence (different vs same sentence, weights ±0.5), Speaker (different vs same speaker), Interaction (weights ±1).
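An illustrative sketch of how such a design matrix can be built: each condition is convolved with two bases, B1 for magnitude and B2 for delay, giving two columns per condition. The onsets, durations, and bases below are hypothetical, and this is not the fmridesign/fmrilm interface.

import numpy as np

TR, n_frames, dt = 2.5, 200, 0.1
frame_times = np.arange(n_frames) * TR
t_hi = np.arange(0.0, n_frames * TR, dt)        # fine time grid

th = np.arange(0.0, 25.0, dt)
B1 = th ** 5 * np.exp(-th) / 120.0              # toy gamma HRF (magnitude basis)
B2 = np.gradient(B1, dt)                        # its derivative (delay basis stand-in)

def regressor(onsets, durations, basis):
    """Boxcar stimulus convolved with one basis, sampled at the frame times."""
    stim = np.zeros_like(t_hi)
    for on, dur in zip(onsets, durations):
        stim[(t_hi >= on) & (t_hi < on + dur)] = 1.0
    conv = np.convolve(stim, basis)[: t_hi.size] * dt
    return np.interp(frame_times, t_hi, conv)

onsets, durations = [20.0, 120.0, 220.0], [20.0] * 3    # one hypothetical block condition
X_cond = np.column_stack([regressor(onsets, durations, B1),
                          regressor(onsets, durations, B2)])
# Stacking such pairs of columns for all 5 conditions (plus drift terms) gives
# the full design matrix; the Sentence, Speaker and Interaction contrasts are
# then weight vectors over these columns.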
Efficiency
Sd of the contrasts (lower is better) for a single run, assuming additivity of responses.
For the magnitudes, the event and block designs have similar efficiency; for the delays, the event design is much better.
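A sketch of how such an efficiency comparison can be computed: the sd of a contrast c of the fitted coefficients is sigma * sqrt(c' (X'X)^-1 c) (white-noise formula, ignoring temporal autocorrelation). The function name is illustrative.

import numpy as np

def contrast_sd(X, c, sigma=1.0):
    """Standard deviation of the estimated contrast c'beta for design matrix X."""
    xtx_inv = np.linalg.pinv(X.T @ X)
    return sigma * np.sqrt(c @ xtx_inv @ c)

# Usage: with X_event and X_block built as in the previous sketch and c a
# contrast over their columns, compare contrast_sd(X_event, c) with
# contrast_sd(X_block, c); the smaller sd is the more efficient design.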
2nd level analysis
Analyse events and blocks separately.
Register contrasts to Talairach (using FSL); bad registration on 2 subjects - dropped.
Combine the 2 runs using fixed FX.

3rd level analysis
Combine the remaining 14 subjects using random FX.
3 contrasts × event/block × magnitude/delay = 12 analyses.
Threshold using the best of Bonferroni, random field theory, and discrete local maxima (new!).
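A simplified per-voxel sketch of the two combination steps. MULTISTAT actually fits the mixed model by ReML/EM and smooths the random/fixed sd ratio spatially (see the earlier slide); the names and the scalar inputs here are illustrative only.

import numpy as np

def fixed_effects(effects, sds):
    """Combine runs within a subject: inverse-variance weighted fixed effects."""
    effects, w = np.asarray(effects), 1.0 / np.asarray(sds) ** 2
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def random_effects(subject_effects):
    """Combine subjects: one-sample random effects analysis (df = n - 1)."""
    e = np.asarray(subject_effects)
    return e.mean(), e.std(ddof=1) / np.sqrt(e.size), e.size - 1

# e.g. combine two runs for one subject, then 14 subjects:
eff, sd = fixed_effects([1.2, 0.9], [0.4, 0.5])
print(random_effects(np.random.default_rng(0).normal(1.0, 0.3, size=14)))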
[Figure: part of slice z = -2 mm, showing magnitude and delay results for the event and block analyses.]
Events vs blocks for delays in different – same sentence
Events: 0.14 ± 0.04 s; blocks: 1.19 ± 0.23 s. Both significant, P < 0.05 (corrected) (!?!)
Answer: take a look at the blocks: the different sentence condition (sustained interest) shows both greater magnitude and greater delay than the same sentence condition (lose interest).

[Figure: responses within a block, with the best fitting block, for the different sentence and same sentence conditions.]
[Figure: SPM vs BRAINSTAT comparison.]
Magnitude increase for Sentence (event), Sentence (block), Sentence (combined), and Speaker (combined) at (-54, -14, -2)
Magnitude decrease for Sentence (block) and Sentence (combined) at (-54, -54, 40)
Delay increase for Sentence (event) at (58, -18, 2), inside the region where all conditions are activated
Conclusions
Greater %BOLD response for different – same sentences (1.08 ± 0.16%) and for different – same speakers (0.47 ± 0.08%).
Greater latency for different – same sentences (0.148 ± 0.035 s).
The main effects of sentence repetition (in red) and of speaker repetition (in blue). 1: Meriaux et al., Madic; 2: Goebel et al., Brain Voyager; 3: Beckman et al., FSL; 4: Dehaene-Lambertz et al., SPM2. Brainstat: combined block and event, thresholded at T > 5.67, P < 0.05.

[Figure: slices at z = -12, z = 2, and z = 5, with the peaks reported by groups 1-4 marked.]