1
Logistic regression: A quick intro
2
Why Logistic Regression?
Big idea: the dependent variable is a dichotomy (though the approach extends to more than 2 categories, i.e. multinomial logistic regression)
Why would we use it? It is one thing to use a t-test (or its multivariate counterpart) to say groups are different; however, the research goal may be to predict group membership
Clinical/Medical context: schizophrenic or not; clinical depression or not; cancer or not
Social/Cognitive context: vote yes or no; preference for A over B; graduate or not
3
Basic Model (Same as MR)
[Path diagram: predictors X1, X2, X3, X4 each pointing to a categorical Y]
4
Questions
Can the cases be accurately classified given a set of predictors?
Can the solution generalize to predicting new cases?
Comparison of the equation with predictors plus intercept to a model with just the intercept
What is the relative importance of each predictor? How does each variable affect the outcome? Does a predictor make the solution better or worse, or have no effect?
Are there interactions among predictors? Does adding interactions among predictors (continuous or categorical) significantly improve the model?
Can parameters be accurately estimated?
What is the strength of association between the outcome variable and a set of predictors?
5
Why logistic regression? Why not?
Goal: to assess the likelihood of falling into one of the DV categories, given a set of predictors
Does not require the assumptions of linearity, homoscedasticity and normality that we had in MR, though the outcome categories must be exclusive and exhaustive, and there are LR counterparts to those assumptions
Differs from DFA in that DFA focuses on loadings, whereas logistic regression focuses on odds ratios: how much more likely an individual is to fall into the higher outcome category, given a 1-unit change in a predictor
While logistic regression is more flexible in terms of assumptions, it usually requires larger samples because it uses maximum likelihood estimation; if your DFA meets its assumptions, it might be the better (more statistically powerful) alternative
Furthermore, one can assess different linear combinations on which groups may be classified if there are more than two groups in the DV
6
Multiple regression approach
With MR, we used a method that minimizes the squared deviations from our predicted values
That can't really be pulled off with a dichotomous outcome:
Only two outcome values to produce residuals
Can't meet the normality or homoscedasticity assumptions
While it could produce what are essentially predicted probabilities of belonging to a particular group, those probabilities are not bounded by 0 and 1
Logistic regression will allow us to go about the prediction/explanation process in a similar manner, but without these problems (see the sketch below)
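A quick illustration of the boundedness issue, as a minimal R sketch on simulated data (the variable names here are made up, not from the slides): OLS fitted values can stray outside [0, 1], while logistic regression's predicted probabilities cannot.

set.seed(123)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + 2*x))      # simulate a 0/1 outcome
ols  <- lm(y ~ x)                            # the MR (linear probability) approach
logi <- glm(y ~ x, family = binomial)        # the logistic regression approach
range(fitted(ols))                           # can fall below 0 or above 1
range(fitted(logi))                          # always stays within (0, 1)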
7
Assumptions
The only "real" limitation with logistic regression is that the outcome must be discrete
If its distributional assumptions are met, discriminant function analysis may be more powerful, although it has been shown to overestimate the association when using discrete predictors
If the outcome is continuous, then multiple regression is more powerful, given that its assumptions are met
8
Assumptions
Ratio of cases to variables: using discrete predictors requires enough responses in every category to allow reasonable estimation of the parameters/predictive power
Due to the maximum likelihood approach, some suggest as many as 50 cases per predictor as a rule of thumb
Linearity in the logit: the IVs should have a linear relationship with the logit form of the DV (a rough check is sketched below)
There is no assumption that the predictors are linearly related to each other
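One rough check of linearity in the logit is a Box-Tidwell style test: add a predictor-times-its-own-log term and see whether it is significant. A minimal sketch, assuming a strictly positive continuous predictor; 'outcome' and 'age' are hypothetical names, not taken from the example data.

fit_bt <- glm(outcome ~ age + I(age*log(age)), family = binomial, data = Dataset)
summary(fit_bt)    # a significant age*log(age) term suggests non-linearity in the logit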
9
Assumptions
Absence of collinearity among predictors
No outliers
Independence of errors
Assumes categories are mutually exclusive
10
Model fit
Significance test: a log-likelihood (LL) χ² test between the model (M) with predictors plus intercept vs. the intercept-only (I) model; if the likelihood-ratio χ² test is significant, the model with predictors is the better one (see the sketch below)
Goodness-of-fit statistics help you determine whether the model adequately describes the data
Here statistical significance is not desired
More like a badness-of-fit test really, and problematic since one can't accept the null due to non-significance
Best used descriptively, perhaps
Pseudo r-squared statistics
In this dichotomous situation we will have trouble devising an R²
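In R, the model-vs-intercept-only comparison is just the difference in deviances between two fitted models. A minimal sketch with hypothetical variable names ('outcome', 'pred1', 'pred2'):

fit0 <- glm(outcome ~ 1,             family = binomial, data = Dataset)   # intercept only
fit1 <- glm(outcome ~ pred1 + pred2, family = binomial, data = Dataset)   # with predictors
anova(fit0, fit1, test = "Chisq")    # likelihood-ratio chi-square test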
11
Coefficients
In interpreting coefficients we're now thinking about a particular case's tendency toward some outcome
The problem with probabilities is that they are non-linear
Going from .10 to .20 doubles the probability, but going from .80 to .90 only increases it somewhat (by a factor of 1.125)
With logistic regression we start to think about the odds
Odds are just an alternative way of expressing the likelihood (probability) of an event
Probability is the expected number of occurrences of the event divided by the total number of possible outcomes
Odds are the expected number of occurrences of the event divided by the expected number of non-occurrences: they express the likelihood of occurrence relative to the likelihood of non-occurrence
12
Odds
Let's begin with probability. Let's say that the probability of success is .8, thus p = .8
Then the probability of failure is q = 1 - p = .2
The odds of success are defined as odds(success) = p/q = .8/.2 = 4, that is, the odds of success are 4 to 1
We can also define the odds of failure: odds(failure) = q/p = .2/.8 = .25, that is, the odds of failure are 1 to 4
13
Odds Ratio
Next, let's compute the odds ratio: OR = odds(success)/odds(failure) = 4/.25 = 16
The interpretation of this odds ratio is that the odds of success are 16 times greater than the odds of failure
If we had formed the odds ratio the other way around, with the odds of failure in the numerator, we would have gotten OR = odds(failure)/odds(success) = .25/4 = .0625
Here the interpretation is that the odds of failure are one-sixteenth the odds of success
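The same arithmetic as a quick R check (nothing here beyond the numbers above):

p <- 0.8                        # probability of success
q <- 1 - p                      # probability of failure
odds_success <- p / q           # 4
odds_failure <- q / p           # 0.25
odds_success / odds_failure     # odds ratio = 16
odds_failure / odds_success     # reversed: 0.0625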
14
Logit
Logit: the natural log (base e) of an odds
Often called a log odds
The logit scale is linear
Logits are continuous and are centered on zero (kind of like z-scores)
p = 0.50, odds = 1, then logit = 0
p = 0.70, odds = 2.33, then logit = 0.85
p = 0.30, odds = 0.43, then logit = -0.85
15
Logit
So, conceptually, putting things in our standard regression form:
Log odds: ln(odds) = b0 + b1X
Now a one-unit change in X leads to a b1 change in the log odds
In terms of odds: odds = e^(b0 + b1X)
In terms of probability: p = e^(b0 + b1X) / (1 + e^(b0 + b1X))
Thus the logit, odds and probability are different ways of expressing the same thing (see the R check below)
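R has these conversions built in: qlogis() is the logit (log odds) and plogis() is its inverse. A quick check that reproduces the numbers from the previous slide:

p <- c(0.30, 0.50, 0.70)
p / (1 - p)          # odds: 0.43, 1.00, 2.33
qlogis(p)            # logits: -0.85, 0.00, 0.85
plogis(qlogis(p))    # back to the probabilities: 0.30, 0.50, 0.70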
16
Coefficients
The raw coefficients for our predictor variables in the output are the amount of increase in the log odds given a one-unit increase in that predictor
The coefficients are determined through an iterative process that finds the values that best match the data at hand
Maximum likelihood: starts with a set of coefficients (e.g., ordinary least squares estimates) and then alters them until there is almost no change in fit (see the sketch below)
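In base R the iterative fitting happens inside glm(); trace = TRUE shows the fit improving at each iteration until it stops changing. A minimal sketch with hypothetical variable names:

fit <- glm(outcome ~ pred1 + pred2, family = binomial, data = Dataset,
           control = glm.control(trace = TRUE))   # prints the deviance at each iteration
coef(fit)                                         # the log-odds coefficients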
17
Coefficients
We also receive a different type of coefficient, expressed in odds (the exponentiated coefficient)
Anything above 1 suggests an increase in the odds of the event; less than 1, a decrease in the odds
For example, if it is 1.14, moving 1 unit on the predictor variable increases the odds of the event by a factor of 1.14
Essentially it is the odds ratio for one value of X vs. the next value of X
More intuitively, it can be converted to the percentage increase (or decrease) in the odds of being in the target group for a one-unit increase in the predictor variable
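Converting the log-odds coefficients to odds ratios (and to percent change in the odds) is just exponentiation. A sketch, assuming 'fit' is the binomial glm from the previous sketch:

exp(coef(fit))               # odds ratios: > 1 raises the odds, < 1 lowers them
100 * (exp(coef(fit)) - 1)   # percent change in the odds per one-unit increase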
18
Example
Example: predicting art museum visitation from education, age, income, and political views
GSS93 dataset
Key things to look for:
Model fit: pseudo-R²
Coefficients
Classification accuracy
Performing a logistic regression is no different from multiple regression: once the appropriate function/menu is selected, one selects variables in the same fashion and may do sequential, stepwise, etc.
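A hedged sketch of what that analysis would look like with glm(); the file name and variable names (visited, educ, age, income, polviews) are assumptions, not the actual GSS93 column names:

gss <- read.csv("gss93.csv")                 # hypothetical file
fit <- glm(visited ~ educ + age + income + polviews,
           family = binomial, data = gss)
summary(fit)       # coefficients on the log-odds scale
exp(coef(fit))     # odds ratios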
19
Model fit
Cox & Snell's value would not reach 1.0 even for a perfect fit
Nagelkerke's is a version of C&S that can reach 1.0
Probably preferred, but may be a little optimistic (just like our regular R-square)
The Hosmer and Lemeshow goodness-of-fit test suggests we're OK too
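Both pseudo-R² values can be computed by hand from the log-likelihoods of the fitted and intercept-only models. A sketch, reusing 'fit' and 'fit0' from the earlier sketches:

n     <- nobs(fit)
ll1   <- as.numeric(logLik(fit))             # full model
ll0   <- as.numeric(logLik(fit0))            # intercept-only model
r2_cs <- 1 - exp(2 * (ll0 - ll1) / n)        # Cox & Snell (cannot reach 1)
r2_n  <- r2_cs / (1 - exp(2 * ll0 / n))      # Nagelkerke (rescaled so its maximum is 1)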
20
Coefficients
It would appear age is the only predictor that doesn't contribute significantly; note its odds ratio of 1.00
Polview (1 = extremely liberal, 7 = extremely conservative) isn't perhaps doing much either: more conservative, less likely to go to a museum
Education: more education, more likely to visit
Income: higher income, more likely to visit
21
Classification
Classification table: here we get a good sense of how well we're able to predict the outcome
69% correct overall, compared to 58.7% if we just guessed the more prevalent class, 'no' (see the sketch below)
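A classification table like this can be built from the predicted probabilities with a 0.50 cutoff. A sketch, assuming 'fit' is the fitted binomial glm and 'outcome' is the observed 0/1 variable:

pred_class <- ifelse(fitted(fit) >= 0.5, 1, 0)
tab <- table(Predicted = pred_class, Actual = Dataset$outcome)
tab
sum(diag(tab)) / sum(tab)    # overall correct classification rate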
22
Other measures regarding classification
The 2 x 2 classification table is laid out with cells a, b, c, d:
Predicted + : a (Actual +), b (Actual -)
Predicted - : c (Actual +), d (Actual -)
Measure and calculation:
Prevalence = (a + c)/N
Overall Diagnostic Power = (b + d)/N
Correct Classification Rate = (a + d)/N
Sensitivity = a/(a + c)
Specificity = d/(b + d)
False Positive Rate = b/(b + d)
False Negative Rate = c/(a + c)
Positive Predictive Power = a/(a + b)
Negative Predictive Power = d/(c + d)
Misclassification Rate = (b + c)/N
Odds ratio = (ad)/(cb)
Kappa = [(a + d) - (((a + c)(a + b) + (b + d)(c + d))/N)] / [N - (((a + c)(a + b) + (b + d)(c + d))/N)]
NMI = 1 - [-a·ln(a) - b·ln(b) - c·ln(c) - d·ln(d) + (a + b)·ln(a + b) + (c + d)·ln(c + d)] / [N·ln(N) - ((a + c)·ln(a + c) + (b + d)·ln(b + d))]
The classification stats from DFA would apply here as well
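Given the four cell counts, the simpler measures above are one-liners in R. A sketch with made-up counts; substitute the cells from your own table:

a <- 35; b <- 15; c <- 20; d <- 80    # hypothetical counts
N <- a + b + c + d
sensitivity <- a / (a + c)
specificity <- d / (b + d)
ppp <- a / (a + b)                    # positive predictive power
npp <- d / (c + d)                    # negative predictive power
ccr <- (a + d) / N                    # correct classification rate
or  <- (a * d) / (c * b)              # odds ratio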
23
Doing a much better logreg in R

attach(Dataset)

# more output using the Design library; the x and y arguments will allow us to validate later
# by freeing up the predictors and outcome for bootstrapping
library(Design)   # Design's successor on CRAN is the rms package; lrm, datadist, and validate work the same way
GLM.2 <- lrm(outcome ~ pred1 + pred2 + pred3, x=TRUE, y=TRUE, data=Dataset)   # placeholder formula: substitute your own outcome and predictors
GLM.2

# this part is required for the Design library to do effects summaries for your predictors;
# the 'options' line isn't necessary unless you put this code before fitting the model
ddist <- datadist(pred1, pred2, pred3)
options(datadist='ddist')

# the actual summaries, including odds ratios and CIs for them
summary(GLM.2)
plot(GLM.2)

# validate the model so as to get a bias-corrected R² and other metrics
validate(GLM.2, method="boot", B=100)