Multiple Regression (sec VIII)
Multiple Regression - Overview
Multiple regression in statistics is the science and art of building an equation that relates an outcome Y to one or more predictors X1, X2, X3, …, Xk.

Linear regression:
Y = a + b1X1 + b2X2 + b3X3 + … + bkXk + e = Ŷ + e
where e is the residual error between the observed Y and the prediction Ŷ.

In linear regression, bi is the average change in Y for a one-unit change in Xi, holding the other predictors constant.
"Performance" stats: R², SDe
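A minimal sketch of fitting such an equation by least squares and computing the two performance stats named above. The data here are simulated for illustration only; the variable names and true coefficients are made up.

```python
# Fit Y = a + b1*X1 + b2*X2 by least squares and compute R^2 and SDe.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X1 = rng.normal(10, 2, n)                 # hypothetical predictor 1
X2 = rng.normal(50, 10, n)                # hypothetical predictor 2
Y = 1.0 + 0.5 * X1 + 0.1 * X2 + rng.normal(0, 1, n)   # outcome with noise

X = np.column_stack([np.ones(n), X1, X2]) # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
a, b1, b2 = coef

Yhat = X @ coef                           # predictions
e = Y - Yhat                              # residual errors
R2 = 1 - np.sum(e**2) / np.sum((Y - Y.mean())**2)
SDe = np.sqrt(np.sum(e**2) / (n - 3))     # residual SD, df = n - (k+1)
print(f"a={a:.3f} b1={b1:.3f} b2={b2:.3f} R2={R2:.3f} SDe={SDe:.3f}")
```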
Logistic Regression & Poisson Regression
In multiple logistic regression, Y is 0 or 1 with mean P, the "risk". The logit of P, not P itself, is assumed to be a linear function of the Xs:
logit(P) = ln(P/(1−P)) = a + b1X1 + b2X2 + b3X3 + … + bkXk
("Logit" = log of the odds, since P/(1−P) is the odds.)
odds = exp(a + b1X1 + b2X2 + b3X3 + … + bkXk)
risk = P = odds/(odds + 1)
"Performance" stats: sensitivity, specificity, accuracy, concordance (C), mean deviance

Poisson regression: when Y is a non-negative integer count (0, 1, 2, 3, …), we model the log of the mean of Y so the mean can never be negative. This is the multiple Poisson regression model.
ln(mean Y) = a + b1X1 + b2X2 + b3X3 + … + bkXk
mean Y = exp(a + b1X1 + b2X2 + b3X3 + … + bkXk) > 0
"Performance" stats: R², mean deviance
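A minimal sketch of the logistic-model algebra above, going from a fitted logit to odds and then risk. The coefficients and predictor values are made up for illustration.

```python
# From a logistic model's linear predictor to odds and risk.
import math

a, b1, b2 = -2.0, 0.8, 0.03        # hypothetical logistic coefficients
X1, X2 = 1.0, 40.0                 # hypothetical predictor values

logit = a + b1 * X1 + b2 * X2      # ln(P / (1 - P))
odds = math.exp(logit)             # P / (1 - P)
risk = odds / (odds + 1)           # back to a probability P
print(f"logit={logit:.3f} odds={odds:.3f} risk={risk:.3f}")
```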
Logit function: P vs ln(P/(1-P))
Logistic regression: predictors of in-hospital infection
Characteristic              Odds Ratio (95% CI)   p value
Increasing APACHE score*    ( )                   <.001
Transfusion (y/n)           ( )                   <.001
Increasing age (yr)         ( )                   <.001
Malignancy                  ( )                   <.001
Max temperature             ( )                   <.001
Adm to treat > 7 d          ( )
Female (y/n)                ( )
*APACHE = Acute Physiology & Chronic Health Evaluation score
Multiple Proportional Hazards Regression (Cox model)
For time-dependent outcomes (e.g., time to death), we model the hazard rate h, the event rate per unit time (for death, it is the mortality rate). Since h > 0, we model the log of the hazard as a linear function of the Xs so h is always positive (similar to Poisson regression):
ln(h) = a + b1X1 + b2X2 + b3X3 + … + bkXk
so h = exp(a + b1X1 + b2X2 + b3X3 + … + bkXk) > 0
If h0 = exp(a) is the "baseline" hazard (that is, a = ln(h0)), the hazard ratio is
HR = h/h0 = exp(b1X1 + b2X2 + b3X3 + … + bkXk), with no intercept a.
If S0(t) is the "baseline" survival curve corresponding to the baseline hazard, then the survival curve for a given combination of X1, X2, …, Xk is
S(t) = S0(t)^HR
where HR is computed with the equation above. exp(bi) is the hazard rate ratio for a one-unit change in Xi.
"Performance" stat: Harrell's concordance (C), 0.5 < C < 1.0
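A minimal sketch of the algebra above: the hazard ratio from the coefficients (no intercept) and the adjusted survival curve S(t) = S0(t)^HR. The coefficients, covariate values, and baseline survival curve are all made up for illustration.

```python
# Hazard ratio and adjusted survival curve for one covariate pattern.
import numpy as np

b = np.array([0.5, 0.03])          # hypothetical Cox coefficients
x = np.array([1.0, 10.0])          # hypothetical covariate values X1, X2
HR = np.exp(b @ x)                 # hazard ratio vs baseline, no intercept

t = np.array([0, 1, 2, 3, 4])                  # years
S0 = np.array([1.0, 0.95, 0.90, 0.86, 0.82])   # hypothetical baseline S0(t)
S = S0 ** HR                       # survival curve for this patient
print(f"HR={HR:.2f}", S.round(3))
```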
Cox regression: HRs for patient mortality (Busuttil et al. 2005)
Cox HRs for donor age (Busuttil 2005)
Donor age (yr)   HR           95% CI   p value
1-18             1.00 (ref)   --       --
18-32            1.23                  0.20
32-48            1.40                  0.03
48-55            1.51                  0.04
55-60            2.29                  <0.001
60+              1.61                  0.01
Harrell C = 0.70
Regression coefficient interpretation
Outcome (Y)                           Regression   Interpretation
Continuous                            Linear       b is the average change in Y per one-unit increase in X (the rate of change)
Binary (P = proportion)               Logistic     exp(b) = e^b = odds ratio (OR) for a one-unit increase in X
Low positive integers (0, 1, 2, 3…)   Poisson      exp(b) = mean ratio (MR) for a one-unit increase in X
Hazard rate (h = events/time)         Cox          exp(b) = hazard rate ratio (HR) for a one-unit increase in X; S(t) = S0(t)^HR
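A minimal sketch of the common back-transformation in the table above: for logistic, Poisson, and Cox models, exp(b) gives the OR, MR, or HR per one-unit increase in X. The coefficient value is made up.

```python
# exp(b) converts a log-scale coefficient to a ratio measure (OR/MR/HR).
import math

b = 0.35                           # hypothetical coefficient for one X
ratio = math.exp(b)                # OR, MR, or HR per one-unit increase in X
print(f"exp({b}) = {ratio:.2f}")   # ~1.42, i.e. a ~42% increase per unit X
```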
Multiple Linear Regression Example
Consider predictors of Y = bilirubin (mg/dl) in liver transplant candidates. Two predictors are X1 = prothrombin time (PT) in seconds and X2 = ALT (alanine aminotransferase, U/L). A multiple regression equation (on the log scale) is
Ŷ = predicted log bilirubin = -3.96 + 3.47 log PT + 0.211 log ALT
Regression output - equation
(Equation) Parameter estimates
Term        Estimate   SE   t ratio   p value
Intercept   -3.96                     <.001
log PT       3.47                     <.001
log ALT      0.211
Equation: log Bili = -3.96 + 3.47 log PT + 0.211 log ALT
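A minimal sketch of using the fitted equation for prediction, plugging in hypothetical PT and ALT values and assuming base-10 logs (the slide does not state the base of the logarithm).

```python
# Predict bilirubin from the fitted equation, then back-transform.
import math

PT, ALT = 14.0, 80.0               # hypothetical patient values (sec, U/L)
log_bili = -3.96 + 3.47 * math.log10(PT) + 0.211 * math.log10(ALT)
bili = 10 ** log_bili              # back-transform to mg/dl
print(f"predicted bilirubin = {bili:.2f} mg/dl")
```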
Regression: analysis of variance table
Source   df    Sum of squares   Mean square      F
Model      2   37.76            18.88            147.2
Error    363   46.56             0.1283 = SDe²   --
Total    365   84.32
F = 18.88/0.1283 = 147.2, the screening F test that none of the predictors are related to Y.
R² = model sum of squares / total sum of squares = 37.76/84.32 = 0.448 = 44.8%
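A minimal sketch of the ANOVA arithmetic above, using the slide's sums of squares (model 37.76, total 84.32) with n = 366 observations and k = 2 predictors.

```python
# Reproduce the ANOVA-table arithmetic from the sums of squares.
model_ss, total_ss = 37.76, 84.32
n, k = 366, 2
error_ss = total_ss - model_ss           # 46.56
model_ms = model_ss / k                  # 18.88, as on the slide
error_ms = error_ss / (n - k - 1)        # = SDe**2
F = model_ms / error_ms                  # screening F test
R2 = model_ss / total_ss                 # 0.448 = 44.8%
print(f"F={F:.1f} R2={R2:.3f}")
```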
Residual error plot: residual log bilirubin by predicted value
When the model is valid, this plot should look like a circular cloud if the errors have constant variance. The example above is a “good” result.
Example of a "good" residual error histogram: errors have a Gaussian distribution about zero
Quantiles of the errors: maximum, 75% quartile, median, 25% quartile, minimum. Moments: mean, Std Dev (SDe), Std Err Mean; n = 366.
Normal quantile plot of the residual error (e): should be approximately a straight line if the residual errors are Gaussian.
Interpreting multiple regression coefficients (cont.)
The multiple regression coefficients will not, in general, be the same as the individual regression coefficients for each variable taken one at a time, even though the same Y is being modeled.
Simple regression (one Y, one X):
log bilirubin = … log PT, R² = 0.425
log bilirubin = … log ALT, R² = 0.049
Multiple (simultaneous) regression (b1X1 + b2X2):
log bilirubin = -3.96 + 3.47 log PT + 0.211 log ALT, R² = 0.448
The log PT coefficients don't match.
Orthogonality (vs. collinearity)
In general, regression coefficients from simple and multiple regression are not the same; controlling for covariates does not give the same answer as ignoring covariates. Only when all the X variables have zero correlation with each other (orthogonality) will the simple and multiple regression coefficients be the same. (Collinearity is when the Xs are strongly correlated; it is the "opposite" of orthogonality.)
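A minimal sketch demonstrating this with simulated (not the lecture's) data: with a collinear X2, the simple and multiple coefficients for X1 disagree; replacing X2 with an independent draw (orthogonal to X1) makes them agree.

```python
# Simple vs multiple regression coefficients under collinearity.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
X1 = rng.normal(size=n)
X2 = X1 + 0.3 * rng.normal(size=n)     # collinear with X1 (r ~ 0.96)
Y = 1 + 2 * X1 + 1 * X2 + rng.normal(size=n)

def coefs(*xs):
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, Y, rcond=None)[0][1:]   # drop intercept

print("simple b1:", coefs(X1).round(2))           # ~3: absorbs X2's effect
print("multiple b1, b2:", coefs(X1, X2).round(2)) # ~2 and ~1
```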
Log PT (X1) vs log ALT (X2): since the correlation is low (r12 = 0.111, R² = 0.111² ≈ 0.012), unadjusted and adjusted regression results are similar.
Interaction Effects & subgroups
The model Y = β0 + β1X1 + β2X2 + e implies that the change in Y due to a one-unit change in X1 is the same (constant, = β1) for all values of X2: an ADDITIVE model.
In the model Y = β0 + β1X1 + β2X2 + β3X1X2 + e, the β3 term is an interaction term. The change in Y for a one-unit change in X1 is (β1 + β3X2) and is therefore not constant.
A positive β3 is often termed a "synergism"; a negative β3 is often termed an "antagonism". The model is additive only if β3 = 0.
How to implement? Make a new variable W = X1X2, as sketched below.
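A minimal sketch of that implementation on simulated data: build W = X1·X2 as a new column and fit it like any other predictor.

```python
# Implement an interaction by adding W = X1*X2 to the design matrix.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
W = X1 * X2                              # the interaction variable
Y = 1 + 0.5 * X1 + 0.2 * X2 + 0.8 * W + rng.normal(size=n)

X = np.column_stack([np.ones(n), X1, X2, W])
b0, b1, b2, b3 = np.linalg.lstsq(X, Y, rcond=None)[0]
# change in Y per unit X1 is (b1 + b3*X2): not constant, so not additive
print(f"b3={b3:.2f}; X1 slope at X2=0: {b1:.2f}, at X2=2: {b1 + 2*b3:.2f}")
```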
Interaction example. Response: Y = log HOMA IR (MESA study)
R² = 0.280, Root Mean Square Error = SDe = 0.623
Mean response = 0.395, n = 6782
Parameter estimates
Term         Estimate   Std Error   t Ratio   p value
Intercept    -1.39                            <.0001
Gender                                        <.0001
BMI                                           <.0001
Gender*BMI                                    <.0001
Predicted log HOMA IR = -1.39 − …·gender + …·BMI + …·gender*BMI
(gender is coded 0 for female and 1 for male)
Gender × BMI interaction: non-additivity
In females (gender = 0): log HOMA IR = intercept + (BMI slope)·BMI
In males (gender = 1): log HOMA IR = (intercept + gender coefficient) + (BMI slope + interaction coefficient)·BMI
Gender × BMI interaction: the relation is different in males vs. females
Hierarchically well formulated (HWF) regression models
HWF rule: to correctly evaluate the X1*X2 interaction, the model must also include X1 and X2. In general, one must include the lower-order terms in order to correctly evaluate the higher-order terms.
HWF example: a NON-HWF model
Model: chol = a0 + a1·smokeage, where smokeage = smoke × age
0, 1 (dummy) coding: smoke = 0 or 1
Variable    DF   Estimate   Std error   t   p value
INTERCEPT
SMOKEAGE
-1, 1 (effect) coding: smoke = -1 or 1
Variable    DF   Estimate   Std error   t   p value
INTERCEPT
SMOKEAGE
The p value changes just because the coding changes!
HWF example (cont.): an HWF model
HWF model: chol = b0 + b1·smoke + b2·age + b3·smoke × age
For HWF models, significance is the same regardless of coding.
0, 1 (dummy) coding: smoke = 0 or 1, smokeage = smoke × age
Variable    DF   Estimate   Std error   t   p value
INTERCEPT
SMOKE
AGE
SMOKEAGE
-1, 1 (effect) coding: smoke = -1 or 1, smokeage = smoke × age
Variable    DF   Estimate   Std error   t   p value
INTERCEPT
SMOKE
AGE
SMOKEAGE
Regression assumptions
Regression can simultaneously evaluate all factors and thus reduce confounding, but two critical assumptions must be checked:
1. If X is continuous/interval, check whether the relation of X and Y is linear on some scale (otherwise polychotomize X).
2. Check whether the effects of X1, X2, X3, … are additive by adding interaction terms (e.g., X4 = X1 × X2). The effects are not additive if the interactions are significant.
3. Also, in linear regression, we prefer the residual errors (e) to have a Gaussian distribution with a constant variance that is independent of Y. But additivity and linearity are more important, since lack of additivity or linearity leads to bias and is more misleading.
Nonlinear regression
Log(bilirubin) = -3.96 + 3.47 log(PT) + 0.211 log(ALT) is a nonlinear model in terms of PT and ALT, but it is a linear model in terms of log PT, log ALT, and the regression coefficients b0 = -3.96, b1 = 3.47, and b2 = 0.211.
Consider a model of the form: Ŷ = drug concentration = b1·10^(b2·X). This is nonlinear in b2 but can be made linear with a transformation: log10(concentration) = log10(b1) + b2·X.
What about: drug concentration = b0 + b1·10^(b2·X)? This model is nonlinear in b2 and cannot be transformed to a linear one. Nonlinear regression software is needed to estimate b0, b1, and b2.
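A minimal sketch of fitting the non-transformable model above with nonlinear least squares via scipy. The data points and starting guess p0 are made up; curve_fit requires a reasonable starting guess for models like this.

```python
# Nonlinear least-squares fit of concentration = b0 + b1*10**(b2*x).
import numpy as np
from scipy.optimize import curve_fit

def conc(x, b0, b1, b2):
    return b0 + b1 * 10.0 ** (b2 * x)

x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # hypothetical times
y = np.array([10.0, 7.7, 6.0, 4.0, 2.5])       # hypothetical concentrations
(b0, b1, b2), _ = curve_fit(conc, x, y, p0=(1.0, 9.0, -0.1))
print(f"b0={b0:.2f} b1={b1:.2f} b2={b2:.3f}")
```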
Nonlinear example: compartmental drug models
A model of how a drug (or any chemical) is metabolized by an organism. Y1 = concentration in serum, Y2 = concentration in the organ, x = time.
d(Y1)/dx = -b1·Y1
d(Y2)/dx = b1·Y1 − b2·Y2, with b1 > b2 > 0
Solutions:
Y1 = constant·e^(-b1·x)   (serum)
Y2 = (b1/(b1−b2))·[e^(-b2·x) − e^(-b1·x)]   (organ; this is the model that is fit)
Y2 takes on a maximum value when x = ln(b1/b2)/(b1−b2); Y2 is zero when x = 0 or when x is very large. The constants b1 and b2 are rates, in units of 1/x (i.e., 1/time).
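A minimal sketch evaluating the organ-compartment solution above and its peak time x = ln(b1/b2)/(b1 − b2). The rate constants are made up for illustration.

```python
# Evaluate Y2(x) for the two-compartment model and locate its peak.
import numpy as np

b1, b2 = 0.9, 0.2                        # hypothetical rates, b1 > b2 > 0
x = np.linspace(0, 24, 200)              # time grid
Y2 = (b1 / (b1 - b2)) * (np.exp(-b2 * x) - np.exp(-b1 * x))  # organ conc.

x_peak = np.log(b1 / b2) / (b1 - b2)     # where dY2/dx = 0
print(f"peak at x={x_peak:.2f}, max Y2={Y2.max():.3f}")
```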
Nonlinear equation: Y = (b1/(b1−b2))·[e^(-b2·x) − e^(-b1·x)]
Fitted: Y = [0.0967/( … )]·[exp(-…·t) − exp(-…·t)]; at the peak, t = 14, Ŷ = 0.49
Residual diagnostics & “model criticism”
Assumptions of linear regression:
1. Linear relation between Y and each X, except for random "noise" (but one can transform X).
2. The effect of each X is additive (but one can make interaction terms).
3. Errors (e) have constant variance and come from a Gaussian distribution.
4. All observations are from the same population.
5. All observations are independent (usually OK).
A plot of Ŷ versus e, called a residual error (diagnostic) plot, can help verify whether these assumptions are met, as sketched below.
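A minimal sketch of producing such a diagnostic plot on simulated data: predicted values (Ŷ) on the x-axis, residuals (e) on the y-axis. With a valid model this should look like a patternless cloud around zero.

```python
# Residual diagnostic plot: predicted values vs residual errors.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)   # simulated outcome

coef = np.linalg.lstsq(X, Y, rcond=None)[0]
Yhat = X @ coef                        # predictions
e = Y - Yhat                           # residual errors

plt.scatter(Yhat, e, s=10)
plt.axhline(0, color="gray")
plt.xlabel("predicted (Yhat)")
plt.ylabel("residual error (e)")
plt.show()
```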
“good” residual plot
Residual plots - diagnostics: "bad" residual plots
Regression diagnostics
Problem: outliers. Solution: find them on the residual plot & eliminate.
Problem: curvilinear (nonlinear) relation. Solution: try a nonlinear transformation (x², 1/x, log(x), e^x).
Problem: errors not Gaussian. Solution: robust regression (future class).
Problem: non-constant SDe or Var(e). Solution: weights (future class).
Adjusted means
Ex: Meditation & change in pct body fat
Overweight persons chose a meditation program or a "sham" (lectures) as part of a weight loss effort. They were NOT randomized. Change in percent body fat by treatment group (meditation or sham) over three months:
Unadjusted means
Level         n   Mean pct body fat change   SEM   Mean dietary fat (g, before study start)
1 - meditate        …%                        …%    … g
2 - sham            …%                        …%    … g
Unadjusted mean difference (sham − meditation) = 8.85%
SE of the difference = SEd = √(SEM1² + SEM2²) = 0.59%
t = mean diff/SEd = 8.85%/0.59% = 15.1, p < 0.001
Overall unweighted mean dietary fat = 49.9 g
Result via “regression”
Y = change in body fat, X = 1 if sham, 0 if meditation
Y = a + b·X + error = a + b·sham + error
Term   Estimate   SE     t      p value
a         …                     < 0.001
b        8.85     0.59   15.1   < 0.001
Ŷ = a + 8.85·sham (b equals the unadjusted mean difference)
Regression: control for dietary fat
pct body fat change = Y = a + b1·X1 + b2·X2 + error
X1 = 1 if sham, 0 if meditation; X2 = dietary fat (g)
Term            Estimate   SE   t   p value
a (intercept)   -14.47              < 0.001
b1 (sham)         1.51
b2 (diet fat)     0.213             < 0.001
body fat chg = -14.47 + 1.51·sham + 0.213·diet fat
Adjusted means: plug into the equation with X1 = 1 (sham) or X1 = 0 (meditation), holding X2 = diet fat = 49.9 g, the overall mean of X2.
Meditation: -14.47 + 1.51(0) + 0.213(49.9) = -3.84%
Sham: -14.47 + 1.51(1) + 0.213(49.9) = -2.33%
Adjusted mean difference = -2.33 − (-3.84) = 1.51% (unadjusted was 8.85%)
General procedure: adjusted means for X1, controlling for (confounders) X2, X3, X4, …
1. Estimate the model regression coefficients.
2. Plug in different values for X1, with the values of all other Xs held constant at their overall means.
This gives the adjusted means and their SEs. Assumes ???? (See the sketch below.)
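A minimal sketch of the procedure using this example's fitted equation with b1 = 1.51 and b2 = 0.213 from the slides. Note the intercept of -14.47 is back-calculated here from the slide's adjusted means; it is not reported on the slide itself.

```python
# Adjusted means: plug X1 = 0 or 1 into the fitted equation while holding
# the confounder (diet fat) at its overall mean of 49.9 g.
a, b1, b2 = -14.47, 1.51, 0.213          # intercept back-calculated (assumption)
diet_mean = 49.9                         # overall mean of the confounder

adj_meditate = a + b1 * 0 + b2 * diet_mean   # X1 = 0 (meditation)
adj_sham = a + b1 * 1 + b2 * diet_mean       # X1 = 1 (sham)
print(f"meditate={adj_meditate:.2f}%  sham={adj_sham:.2f}%  "
      f"diff={adj_sham - adj_meditate:.2f}%")   # reproduces -3.84, -2.33, 1.51
```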