Logistic Regression: EPP 245 Statistical Analysis of Laboratory Data (November 2, 2006)

Presentation transcript:

Slide 1: Logistic Regression. EPP 245 Statistical Analysis of Laboratory Data

Slide 2: Generalized Linear Models
The type of predictive model one uses depends on a number of issues; one is the type of response. Measured values such as the quantity of a protein, age, or weight can usually be handled in an ordinary linear regression model. Patient survival, which may be censored, calls for a different method (next quarter).

Slide 3:
If the response is binary, then we can use logistic regression models. If the response is a count, we can use Poisson regression. Other forms of response can generate other types of generalized linear models.

Slide 4: Generalized Linear Models
– We need a linear predictor of the same form as in linear regression, βx.
– In theory, such a linear predictor can generate any type of number as a prediction: positive, negative, or zero.
– We choose a suitable distribution for the type of data we are predicting (normal for any number, gamma for positive numbers, binomial for binary responses, Poisson for counts).
– We create a link function which maps the mean of the distribution onto the set of all possible linear prediction results, which is the whole real line (-∞, ∞). The inverse of the link function takes the linear predictor to the actual prediction.
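In symbols (notation added here for clarity, not taken from the slide), the three ingredients are a linear predictor, a link function g, and its inverse:

    \eta_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}, \qquad g(\mu_i) = \eta_i, \qquad \mu_i = g^{-1}(\eta_i)

where the response y_i follows the chosen distribution (normal, gamma, binomial, Poisson) with mean \mu_i.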

Slide 5:
Ordinary linear regression has the identity link (no transformation by the link function) and uses the normal distribution. If one is predicting an inherently positive quantity, one may want to use the log link, since e^x is always positive. An alternative to using a generalized linear model with a log link is to transform the data using the log, or perhaps the glog. This is a device that works well with measurement data but may not be usable in other cases.

Slide 6: [Diagram: the log link maps possible means in (0, ∞) to linear predictors in (-∞, ∞).]

Slide 7: [Diagram: the inverse link e^x maps linear predictors in (-∞, ∞) back to possible means in (0, ∞).]

Slide 8: Logistic Regression
Suppose we are trying to predict a binary variable (patient has ovarian cancer or not, patient is responding to therapy or not). We can describe this by a 0/1 variable in which the value 1 is used for one response (patient has ovarian cancer) and 0 for the other (patient does not have ovarian cancer). We can then try to predict this response.

Slide 9:
For a given patient, a prediction can be thought of as a kind of probability that the patient does have ovarian cancer. As such, the prediction should be between 0 and 1, so ordinary linear regression is not suitable. The logit transform takes a number between 0 and 1 and produces a number which can be anything, positive or negative; its inverse maps any real number back into (0, 1). Thus the logit link is useful for binary data.
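In symbols (added here, not on the slide), for a probability p and a linear predictor \eta:

    \operatorname{logit}(p) = \log\frac{p}{1-p} \in (-\infty, \infty), \qquad \operatorname{logit}^{-1}(\eta) = \frac{e^{\eta}}{1+e^{\eta}} = \frac{1}{1+e^{-\eta}} \in (0, 1)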

Slide 10: [Diagram: the logit link maps possible means in (0, 1) to linear predictors in (-∞, ∞).]

Slide 11: [Diagram: the inverse logit maps linear predictors in (-∞, ∞) back to possible means in (0, 1).]

Slides 12-14: [No text was captured in the transcript for these slides.]

Slide 15: Analyzing Tabular Data with Logistic Regression
– Response is hypertensive (y/n)
– Predictors are smoking (y/n), obesity (y/n), snoring (y/n) [coded as 0/1 for Stata; R does not care]
– How well can these 3 factors explain/predict the presence of hypertension? Which are important?

Slide 16: Stata data entry for the hypertension table (the cell counts did not survive the transcript):
input smoking obesity snoring hyp fw
[frequency counts lost in extraction]
end

Slide 17: Log of running the do-file:
. do hypertension-in
. input smoking obesity snoring hyp fw
[echoed data lines lost in extraction]
. end
end of do-file

Slide 18: Output of
. logistic hyp smoking obesity snoring [fweight=fw]
followed by a replay in the coefficient metric via
. logit
Both report Number of obs = 433, with an LR chi2(3) test, log likelihood, pseudo R2, and rows for smoking, obesity, and snoring (odds ratios in the first table, coefficients plus _cons in the second); the numeric values were lost in extraction.
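A minimal, runnable sketch of this kind of analysis is given below. The cell counts are invented purely for illustration, since the real frequencies on the slides did not survive the transcript; only the variable names and the modeling command follow the slides.

* Toy frequency-weighted 0/1 data (counts are made up for illustration only)
clear
input smoking obesity snoring hyp fw
0 0 0 0 55
0 0 0 1  5
1 0 0 0 15
1 0 0 1  2
0 0 1 0 70
0 0 1 1 17
1 0 1 0 30
1 0 1 1 10
0 1 1 0 12
0 1 1 1  8
1 1 1 0  9
1 1 1 1  5
end
* Each row is one covariate pattern and outcome; fw is how many subjects it represents
logistic hyp smoking obesity snoring [fweight=fw]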

Slide 19: Juul's IGF data
Description: The 'juul' data frame has 1339 rows and 6 columns. It contains a reference sample of the distribution of insulin-like growth factor (IGF-I), one observation per subject at various ages, with the bulk of the data collected in connection with school physical examinations.
Variables:
– age: a numeric vector (years).
– menarche: a numeric vector. Has menarche occurred (code 1: no, 2: yes)?
– sex: a numeric vector (1: boy, 2: girl).
– igf1: a numeric vector. Insulin-like growth factor (µg/l).
– tanner: a numeric vector. Codes 1-5: stages of puberty according to Tanner.
– testvol: a numeric vector. Testicular volume (ml).

Slide 20:
. clear
. insheet using "C:\TD\CLASS\K \juul.csv"
. summarize age menarch sex igf1 tanner testvol
[summary table (Obs, Mean, Std. Dev., Min, Max) lost in extraction]
. keep if age > 8
(237 observations deleted)
. keep if age < 20
(153 observations deleted)

Slide 21:
. summarize age menarch sex igf1 tanner testvol
[summary table lost in extraction]
. keep if menarche < 100
(430 observations deleted)
. summarize age menarch sex igf1 tanner testvol
[summary table lost in extraction]

Slide 22:
. generate men1 = menarche - 1
. logistic men1 age
[Logistic regression, Number of obs = 519; odds ratio for age; remaining output lost in extraction]
. logistic men1 age tanner
[Logistic regression, Number of obs = 436; odds ratios for age and tanner; remaining output lost in extraction]

Slide 23:
. generate tan1 = tanner == 1
. generate tan2 = tanner == 2
. generate tan3 = tanner == 3
. generate tan4 = tanner == 4
. generate tan5 = tanner == 5
. logistic men1 age tan2 tan3 tan4 tan5
[Logistic regression, Number of obs = 519; odds ratios for age and tan2-tan5; remaining output lost in extraction]
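As an aside (not on the original slides), current versions of Stata can create these indicator variables automatically with factor-variable notation; a sketch:

* Equivalent model using factor variables; tanner level 1 is the omitted baseline.
* Note: unlike the hand-made dummies above (which code missing tanner as all zeros),
* i.tanner drops observations with a missing tanner stage.
logistic men1 age i.tanner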

Slide 24: Class prediction from expression arrays
One common use of omics data is to try to develop predictions for classes of patients, such as:
– cancer/normal
– type of tumor
– grading or staging of tumors
– many other disease/healthy distinctions or diagnoses of disease type

Slide 25: Two-class prediction
– Linear regression
– Logistic regression
– Linear or quadratic discriminant analysis
– Partial least squares
– Fuzzy neural nets estimated by genetic algorithms, and other buzzwords
Many such methods require fewer variables than cases, so dimension reduction is needed.

Slide 26: Dimension Reduction
Suppose we have 20,000 variables and wish to predict whether a patient has ovarian cancer or not, and suppose we have 50 cases and 50 controls. We can only use a number of predictors much smaller than 50. How do we do this?

Slide 27:
Two distinct ways are selection of genes and selection of “supergenes” as linear combinations. We can choose the genes with the most significant t-tests or other individual-gene criteria. We can use forward stepwise logistic regression, which adds the most significant gene, then the most significant addition, and so on, or other ways of picking the best subset of genes.
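A rough sketch of the first approach (ranking genes by individual two-sample t-tests) in Stata; the gene variables g1-g100 and the 0/1 class variable case are hypothetical placeholders, not from the slides:

* Rank genes by two-sample t-test p-value and list the top ten.
postfile pvals str32 gene double p using gene_pvals, replace
foreach g of varlist g1-g100 {
    quietly ttest `g', by(case)       // two-sample t-test of gene `g' between classes
    post pvals ("`g'") (r(p))         // store gene name and two-sided p-value
}
postclose pvals
preserve
use gene_pvals, clear
sort p
list in 1/10                          // the ten smallest p-values
restore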

Slide 28:
Supergenes are linear combinations of genes. If g_1, g_2, g_3, ..., g_p are the expression measurements for the p genes in an array, and a_1, a_2, a_3, ..., a_p are a set of coefficients, then g_1 a_1 + g_2 a_2 + g_3 a_3 + ... + g_p a_p is a supergene. Methods for construction of supergenes include PCA and PLS.
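A minimal sketch of the PCA route in Stata, again with hypothetical names (g1-g100 for the gene expression variables, cancer for the 0/1 outcome):

* Construct five "supergenes" as principal components of the gene variables,
* then use the component scores as predictors in a logistic regression.
pca g1-g100, components(5)
predict pc1 pc2 pc3 pc4 pc5, score
logistic cancer pc1-pc5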

Slide 29: Choosing Subsets of Supergenes
Suppose we have 50 cases and 50 controls and an array of 20,000 gene expression values for each of the 100 observations. In general, any arbitrary set of 100 genes will be able to predict perfectly in the data if a logistic regression is fit to the 100 genes. Most of these will predict poorly in future samples.

Slide 30:
This is a mathematical fact. A statistical fact is that even if there is no association at all between any gene and the disease, often a few genes will produce apparently excellent results that will not generalize at all. We must somehow account for this, and cross-validation is the usual way.

Slide 31: Consequences of many variables
If there is no effect of any variable on the classification, it is still the case that the number of cases correctly classified in the sample used to derive the classifier increases as the number of variables increases. But the statistical significance is usually not there.

Slide 32:
If the variables used are selected from many, the apparent statistical significance and the apparent success in classification are greatly inflated, causing end-stage delusionary behavior in the investigator. This problem can be reduced using cross-validation or other resampling methods.

Slide 33: Overfitting
When we fit a statistical model to data, we adjust the parameters so that the fit is as good as possible and the errors are as small as possible. Once we have done so, the model may fit well, but we do not have an unbiased estimate of how well it fits if we use the same data to assess the fit that we used to fit the model.

Slide 34: Training and Test Data
One way to approach this problem is to fit the model on one dataset (say half the data) and assess the fit on another. This avoids bias but is inefficient, since we can only use perhaps half the data for fitting. We can get more by doing this twice, so that each half serves as the training set once and the test set once. This is two-fold cross-validation.

Slide 35:
It may be more efficient to use 10- or 20-fold cross-validation, depending on the size of the data set. Leave-one-out cross-validation is also popular, especially with small data sets. With 10-fold CV, one can divide the set into 10 parts, pick random subsets of size 1/10, or repeatedly divide the data.
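A rough sketch of 10-fold cross-validation for a logistic classifier in Stata; the outcome y, predictors x1-x3, random seed, and 0.5 cutoff are placeholders, not from the slides:

* 10-fold cross-validation: fit on 9 folds, predict the held-out fold, repeat.
set seed 12345
generate double u = runiform()
sort u
generate fold = 1 + mod(_n - 1, 10)            // balanced assignment to folds 1..10
generate double phat = .
forvalues k = 1/10 {
    quietly logistic y x1 x2 x3 if fold != `k'
    quietly predict double tmp if fold == `k'  // predicted probability, held-out fold only
    quietly replace phat = tmp if fold == `k'
    drop tmp
}
generate byte yhat = phat >= 0.5 if !missing(phat)   // classify at 0.5
tabulate y yhat                                      // cross-validated classification table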

Slide 36: Stepwise Logistic Regression
Another way to select variables is stepwise. This can be better than individual variable selection, which may choose many highly correlated predictors that are redundant. A generic stepwise prefix command can be used with many kinds of estimation commands in Stata.
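A minimal sketch of forward stepwise selection with Stata's stepwise prefix; the 0.05 entry threshold and the variable names are placeholders:

* Forward selection: keep adding the most significant remaining predictor
* while its p-value is below the pe() threshold.
stepwise, pe(0.05): logistic y x1 x2 x3 x4 x5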