ASSUMPTION CHECKING In regression analysis with Stata

ASSUMPTION CHECKING
- In regression analysis with Stata
- In multi-level analysis with Stata (not much extra)
- In logistic regression analysis with Stata
NOTE: THIS WILL BE EASIER IN STATA THAN IT WAS IN SPSS

Assumption checking in “normal” multiple regression with Stata

Assumptions in regression analysis
- No multi-collinearity
- All relevant predictor variables included
- Homoscedasticity: all residuals are from a distribution with the same variance
- Linearity: the "true" model should be linear
- Independent errors: having information about the value of one residual should not give you information about the value of other residuals
- Errors are distributed normally

FIRST THE ONE THAT LEADS TO NOTHING NEW IN STATA (NOTE: SLIDE TAKEN LITERALLY FROM MMBR)
Independent errors: having information about the value of one residual should not give you information about the value of other residuals.
Detect: ask yourself whether it is likely that knowledge about one residual would tell you something about the value of another residual. Typical cases:
- repeated measures
- clustered observations (people within firms / pupils within schools)
Consequences: as for heteroscedasticity: usually your confidence intervals are estimated too small (think about why that is!).
Cure: use multi-level analysis.
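The cure above can be sketched in a few lines; the variables y, x, and the grouping variable firm are hypothetical, and xtmixed was renamed mixed in later Stata versions:

```stata
* Hypothetical clustered data: observations (people) nested within firms.
* OLS treats all residuals as independent; a multi-level model does not.
xtmixed y x || firm:        // random intercept per firm (Stata 12 syntax)
* In Stata 13 and later the same model is fit with:  mixed y x || firm:
```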

In Stata (example: the built-in "auto.dta" data set):
sysuse auto (load the example data)
corr (correlation matrix)
vif (variance inflation factors)
ovtest (omitted-variable test)
hettest (heteroscedasticity test)
predict e, resid (store the residuals as e)
swilk e (test for normality of the residuals)
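Put together as a do-file, the whole sequence might look like this; the model of price on mpg and weight is an illustrative choice, not part of the course material:

```stata
sysuse auto, clear         // built-in example data set
regress price mpg weight   // the regression to be checked (illustrative model)
corr mpg weight            // correlation matrix of the predictors
vif                        // variance inflation factors (run after regress)
ovtest                     // Ramsey RESET / omitted-variable test
hettest                    // Breusch-Pagan test for heteroscedasticity
predict e, resid           // store the residuals in a new variable e
swilk e                    // Shapiro-Wilk test for normality of the residuals
```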

Finding the commands: type "help regress", follow the link to "regress postestimation", and you will find most of them (and more) there.

Multi-collinearity
A strong correlation between two or more of your predictor variables. You don't want it, because:
- It is more difficult to get a high R²
- The importance of predictors can be difficult to establish (the b-hats tend to go to zero)
- The estimates of the b-hats are unstable under slightly different regression attempts ("bouncing betas")
Detect:
- Look at the correlation matrix of the predictor variables
- Calculate the VIFs after running the regression
Cure:
- Delete variables so that the multi-collinearity disappears, for instance by combining them into a single variable

Stata: calculating the correlation matrix (“corr”) and VIF statistics (“vif”)

Misspecification tests (these replace the check "all relevant predictor variables included"); in Stata: ovtest.

Homoscedasticity: all residuals are from a distribution with the same variance.
Consequences: heteroscedasticity does not necessarily lead to biases in your estimated coefficients (the b-hats), but it does lead to biases in the estimated width of the confidence intervals, and the estimation procedure itself is not efficient.

Testing for heteroscedasticity in Stata
- Your residuals should have the same variance for all predicted values of Y: hettest
- Your residuals should have the same variance for all values of the X's: hettest, rhs
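Both variants run after the regression; the model below is again an illustrative choice on the auto data:

```stata
sysuse auto, clear
regress price mpg weight   // illustrative model
hettest                    // residual variance vs. the fitted values of Y
hettest, rhs               // residual variance vs. the right-hand-side X's
* In both cases, a small p-value means you reject homoscedasticity.
```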

Errors distributed normally
Errors are distributed normally (just the errors, not the variables themselves!).
Detect: look at the residual plots, test for normality.
Consequences: rule of thumb: if n > 600, no problem. Otherwise the confidence intervals are wrong.
Cure: try to fit a better model, or use more advanced ways of modeling instead (ask an expert).

Errors distributed normally
First calculate the errors: predict e, resid
Then test for normality: swilk e
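In context, with the formal test and the residual plots mentioned on the previous slide; the model is once more an illustrative choice:

```stata
sysuse auto, clear
regress price mpg weight   // illustrative model
predict e, resid           // first calculate the errors
swilk e                    // Shapiro-Wilk: small p-value -> reject normality
* Common visual companions to the formal test:
qnorm e                    // quantile-normal plot of the residuals
kdensity e, normal         // residual density with a normal curve overlaid
```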

Assumption checking in multi-level multiple regression with Stata

In multi-level
- Test all that you would test for multiple regression; poor man's test: do this using multiple regression! (e.g. "hettest")
- Add: xttest0 (see last week)
- Add (extra): test visually whether the normality assumption holds, but do this for the random effects
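A minimal sketch of the xttest0 step; the variables y, x, and the grouping variable school are hypothetical:

```stata
* Hypothetical multi-level data: pupils nested within schools.
xtset school               // declare the grouping structure
xtreg y x, re              // random-effects regression
xttest0                    // Breusch-Pagan LM test: is the school-level
                           // variance zero (i.e., would plain OLS do)?
```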

Note: extra material (= not on the exam, bonus points if you know how to use it)
tab school, gen(sch_)
reg y sch_2 - sch_28
gen coefs = .
forvalues i = 2/28 {
    replace coefs = _b[sch_`i'] if _n == `i'
}
swilk coefs

Assumption checking in logistic regression with Stata

Assumptions
- Y is 0/1
- The ratio of cases to variables should be "reasonable"
- No cases where you have complete separation (Stata will remove these cases automatically)
- Linearity in the logit (comparable to "the true model should be linear" in multiple regression)
- Independence of errors (as in multiple regression)

Further things to do:
- Check goodness of fit and prediction for different groups (as done in the do-file you have)
- Check the correlation matrix for strong correlations between predictors (corr)
- Check for outliers using regress and its diagnostics (but don't tell anyone I suggested this)
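A sketch of these checks on the auto data; the 0/1 outcome foreign and the predictors mpg and weight are an illustrative choice:

```stata
sysuse auto, clear
logit foreign mpg weight   // illustrative model with a 0/1 outcome
estat gof                  // Pearson goodness-of-fit test
estat classification       // predicted vs. observed classification table
corr mpg weight            // any strong correlations between predictors?
```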