Introduction: The lesion-centered view on MS (RRI/TUD/StanU – HH Kitzler)


Introduction: The lesion-centered view on MS (RRI/TUD/StanU – HH Kitzler)

Specific Aims:
 To derive myelin water fraction (MWF) maps using a new multi-component relaxometric imaging method (mcDESPOT) in a cohort of MS patients, and
 To test the hypothesis that MWF in normal-appearing white matter (NAWM) correlates with disability in MS.

Material – MS Patients & Healthy Controls (RRI/TUD/StanU – HH Kitzler)

 Case-controlled study design
 Explorative whole-brain mcDESPOT in clinically relevant time: clinically definite MS subtypes and Clinically Isolated Syndrome (CIS)
 MS/CIS patients (n=26) vs. healthy controls (n=26)
 Expanded Disability Status Scale (EDSS) registered

MS patient subgroups:
◦ low-risk CIS (n=5)
◦ high-risk CIS (n=5)
◦ RRMS: Relapsing-Remitting MS (n=5)
◦ SPMS: Secondary Progressive MS (n=6)
◦ PPMS: Primary Progressive MS (n=5)

                 MS patients     Healthy controls
mean age ± SD    47 ± 13 years   42 ± 13 years
gender F:M       2.3 : 1         1.6 : 1
EDSS             4.0 ± 2.0       –

Methods - mcDESPOT (RRI/TUD/StanU – HH Kitzler)

Multi-component Driven Equilibrium Single Pulse Observation of T1/T2 (mcDESPOT)*

MR Data Acquisition*
 1.5T (GE Signa HDx), 8-ch. RF coil
 mcDESPOT: 2 mm³ isotropic, covering the whole brain, TA: ~15 min
◦ SPGR: TE/TR = 2.1/6.7 ms, α = {3,4,5,6,7,8,11,13,18}°
◦ bSSFP: TE/TR = 1.8/3.6 ms, α = {11,14,20,24,28,34,41,51,67}°
 FLAIR at 0.86 mm² in-plane and 3 mm slice resolution
 MPRAGE pre/post Gd contrast at 1 mm³
 Non-linear co-registration to MNI standard brain space (2 mm MNI152 T1 template)

* Deoni SC, Rutt BK, et al. MRM. 60:

Postprocessing – Compartment-specific demyelination (RRI/TUD/StanU – HH Kitzler)

Conventional MR data + whole-brain isotropic MWF maps → MNI standard space
 Z-score based WM tissue segmentation → probabilistic WM map → WM compartments
 MWF map → demyelination map → compartment-specific demyelination map
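The z-score based segmentation step can be sketched in a few lines. This is a minimal illustrative example, not the study's pipeline: the MWF values, control statistics, and the -4 cutoff below are hypothetical choices for demonstration.

```python
# Illustrative sketch of a z-score demyelination map: each patient MWF value
# is standardised against the healthy-control distribution at the same voxel.
# All numbers, and the -4 cutoff, are hypothetical.

def z_score(value, control_mean, control_std):
    # Standard z-score against the control population
    return (value - control_mean) / control_std

def demyelination_map(mwf, ctrl_mean, ctrl_std, threshold=-4.0):
    # Flag voxels whose MWF falls far below the control mean
    return [z_score(v, m, s) < threshold
            for v, m, s in zip(mwf, ctrl_mean, ctrl_std)]

# Toy "image" of three voxels: only the first is strongly demyelinated.
flags = demyelination_map([0.05, 0.20, 0.18],
                          [0.21, 0.22, 0.19],
                          [0.03, 0.03, 0.03])
```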

Jason Su

 Present (almost) final results
 Judge figures: how to improve their readability and presentation for publication
 Discussion of further analysis

 Testing at the p < 0.05 level is typical
 Patients vs. Normals
◦ DV brain: p <<
◦ PVF: p = 0.01
 Low-Risk CIS vs. Normals
◦ DV brain: p =
◦ PVF: p = 0.37 X
 High-Risk CIS vs. Normals
◦ DV brain: p =
◦ PVF: p = 0.81 X
 CIS vs. Normals
◦ DV brain: p <<
◦ PVF: p = 0.68 X

 RRMS vs. Normals
◦ DV brain: p =
◦ PVF: p = 0.76 X
 SPMS vs. Normals
◦ DV brain: p =
◦ PVF: p =
 PPMS vs. Normals
◦ DV brain: p =
◦ PVF: p =
 RRMS vs. SPMS
◦ DV brain: p = X
◦ DV nawm: p = 0.03
◦ PVF: p = 0.004

 Y = X*a
◦ a = pinv(X)*Y, the least-squares solution, where pinv(X) = inv(X'X)*X'
◦ X is a matrix whose columns are the predictors
 The outcome is linear in each predictor after accounting for all the others
 Same assumptions as simple linear regression
◦ Independent, normally distributed residuals with constant variance
 Adding even random noise to X improves R^2
◦ Adjusted R^2: instead of the sum of squared errors, use the mean squared error; this favors simpler models
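The least-squares solution a = pinv(X)*Y can be made concrete with a toy example. This is a minimal sketch in pure Python with hypothetical data: for a two-column design matrix, it solves the normal equations (X'X)a = X'Y directly rather than forming a general pseudoinverse.

```python
# Least-squares fit a = inv(X'X) X'Y for a tiny 2-column design matrix.
# Illustrative only; a real pipeline would use a pseudoinverse routine.

def lstsq_2col(X, Y):
    # Accumulate the entries of X'X and X'Y, then solve the 2x2 system.
    s00 = sum(r[0] * r[0] for r in X)
    s01 = sum(r[0] * r[1] for r in X)
    s11 = sum(r[1] * r[1] for r in X)
    t0 = sum(r[0] * y for r, y in zip(X, Y))
    t1 = sum(r[1] * y for r, y in zip(X, Y))
    det = s00 * s11 - s01 * s01
    return ((s11 * t0 - s01 * t1) / det,
            (s00 * t1 - s01 * t0) / det)

# Intercept column plus one predictor; the data lie exactly on y = 1 + 2x,
# so the fit recovers the coefficients (1.0, 2.0).
X = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]
Y = [1.0, 3.0, 5.0]
a = lstsq_2col(X, Y)  # -> (1.0, 2.0)
```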

 As suggested by adjusted R^2, what we really want is a parsimonious model
◦ One that predicts the outcome well with only a few predictors
 This is a combinatorially hard problem
 Models are evaluated with a criterion:
◦ Adjusted R^2
◦ Mallows' Cp: estimated predictive power of the model
◦ Akaike information criterion (AIC): related to Cp
◦ Bayesian information criterion (BIC)
◦ Cross-validation with MSE
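The adjusted-R^2 penalty can be illustrated directly. The R^2 values below are hypothetical, chosen so that a second predictor nudges raw R^2 up while the adjusted value drops, i.e. the extra term is not worth its cost.

```python
def adjusted_r2(r2, n, p):
    # Adjusted R^2: replace sums of squares with mean squares,
    # penalising the p predictors against the n samples.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Hypothetical numbers with n = 26 (the cohort size here): adding a 2nd
# predictor raises raw R^2 from 0.50 to 0.51, yet adjusted R^2 falls.
r2_one = adjusted_r2(0.50, n=26, p=1)
r2_two = adjusted_r2(0.51, n=26, p=2)
```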

 If the model space is small enough, we can search it exhaustively
◦ In MSmcDESPOT this is probably feasible; our predictors are: age, PVF, log(DV), gender, PP, SP, RR, High-Risk CIS
◦ 127 possibilities
 Stepwise selection
◦ A popular search method: the algorithm is given a starting point, then adds or removes predictors one at a time until the criterion stops improving
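The count of candidate models in an exhaustive search follows from counting non-empty subsets: with k predictors there are 2^k - 1 of them, and the slide's 127 corresponds to k = 7 free include/exclude choices. A sketch with placeholder predictor names:

```python
from itertools import combinations

# Enumerate every candidate model (non-empty predictor subset) for an
# exhaustive search. Names are placeholders, not the study's variables.
predictors = ["x1", "x2", "x3", "x4", "x5", "x6", "x7"]

def all_subsets(names):
    # Yield each non-empty subset of the candidate predictors.
    for r in range(1, len(names) + 1):
        yield from combinations(names, r)

n_models = sum(1 for _ in all_subsets(predictors))  # 2**7 - 1 = 127
```

In practice each subset would be fitted and scored with a criterion such as Mallows' Cp, which is what an exhaustive-search routine does internally.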

 Exhaustive search with Mallows' Cp criterion
◦ leaps() in R
◦ Chooses a model with Age+SPMS+PPMS; fitted terms: (Intercept), Age, PPMS1, SPMS
◦ Consolation prize: models with DV rather than PVF generally had an improved Cp, but still not the best
 F-test of Age+PVF+DV vs. Age+PVF
◦ Works on nested models; used in ANOVA
◦ Tests whether the coefficient for DV is non-zero, i.e. whether the fit is significantly better with DV
◦ p = 0.004, so DV should be included
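The nested-model F-test compares residual sums of squares between the reduced and full fits. A minimal sketch; the RSS values below are hypothetical, not the study's numbers.

```python
# Partial F-test for nested models: does adding a predictor (e.g. DV to
# Age+PVF) reduce the residual sum of squares by more than chance would?
# The RSS values used here are hypothetical.

def partial_f(rss_reduced, rss_full, extra_params, df_resid_full):
    # F = (drop in RSS per extra parameter) / (mean squared error of full model)
    return ((rss_reduced - rss_full) / extra_params) / (rss_full / df_resid_full)

# n = 26 subjects, full model with 3 predictors -> 26 - 3 - 1 = 22 residual df
f_stat = partial_f(rss_reduced=120.0, rss_full=90.0,
                   extra_params=1, df_resid_full=22)
# f_stat would then be compared against the F(1, 22) distribution for a
# p-value (e.g. with scipy.stats.f.sf in a real analysis).
```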