Stat 112: Lecture 9 Notes Homework 3: Due next Thursday

Stat 112: Lecture 9 Notes Homework 3: Due next Thursday Prediction Intervals for Multiple Regression (Chapter 4.5) Multicollinearity (Chapter 4.6).

Summary of F tests Partial F tests are used to test whether a subset of the slopes in a multiple regression are zero. The whole-model F test (the test of the usefulness of the model) tests whether the slopes on all variables in the multiple regression are zero, i.e., it tests whether the multiple regression is more useful for prediction than simply ignoring the X's and using the sample mean of Y to predict Y. For testing whether a single slope in multiple regression is zero, we can use the t-test; in fact, the partial F test for one slope being zero is equivalent to the t-test (it gives the same p-values and the same decisions). Why use the F test to test whether two or more slopes are all zero rather than separate t-tests? The F test is more powerful. This will be illustrated later in the lecture.
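For reference, here is a sketch of the standard form of the partial F statistic (generic textbook notation, not transcribed from the slide). With a full model containing K explanatory variables and a reduced model that sets q of the slopes to zero,

$$F = \frac{(\mathrm{SSE}_{\text{reduced}} - \mathrm{SSE}_{\text{full}})/q}{\mathrm{SSE}_{\text{full}}/(n-K-1)},$$

which is compared to an F distribution with q and n-K-1 degrees of freedom. For q = 1, this F statistic is the square of the usual t statistic, which is why the two tests agree.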

Prediction in Automobile Example The design team is planning a new car with the following characteristics: horsepower = 200, weight = 4000 lb, cargo = 18 ft3, seating = 5 adults. What is a 95% prediction interval for the GPM1000 of this car?

Prediction with the Multiple Regression Equation Prediction interval for an individual with explanatory variable values x1, …, xK:
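The formula itself did not survive transcription; a standard form of the interval, written as a sketch in generic notation rather than the exact expression from Chapter 4.5, is

$$\hat{y} \pm t_{\alpha/2,\,n-K-1}\,\mathrm{SE}(\mathrm{pred}), \qquad \hat{y} = b_0 + b_1 x_1 + \cdots + b_K x_K,$$

where SE(pred) combines the root mean square error with the standard error of the estimated mean at x1, …, xK. For a 95% interval with a large sample and x values near the center of the data, this is roughly $\hat{y} \pm 2\,\mathrm{RMSE}$.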

Finding a Prediction Interval in JMP Enter a row with the independent variables x1, …, xK for the new individual, but do not enter a y for the new individual. Fit the model; because the new individual does not have a y, JMP will not include it when calculating the least squares fit. Click the red triangle next to the response and click Save Columns. To find the predicted value ŷ, click Predicted Values; this creates a column containing ŷ. To find the 95% prediction interval, click Indiv Confid Interval; this creates columns with the lower and upper endpoints of the 95% PI.
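For readers working outside JMP, a minimal sketch of the same calculation in Python with statsmodels; the CSV file name and column names are assumptions, since the course data are distributed as a JMP file.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file/column names standing in for the cars data used in the course.
cars = pd.read_csv("cars.csv")
X = sm.add_constant(cars[["horsepower", "weight", "cargo", "seating"]])
model = sm.OLS(cars["GPM1000"], X).fit()

# New car from the design team: horsepower 200, weight 4000 lb,
# cargo 18 cubic feet, seating 5 adults.
new_car = pd.DataFrame({"const": [1.0], "horsepower": [200.0], "weight": [4000.0],
                        "cargo": [18.0], "seating": [5.0]})
pred = model.get_prediction(new_car)

# obs_ci_lower / obs_ci_upper are the endpoints of the 95% prediction interval
# for an individual observation (what JMP calls the Indiv Confid Interval).
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
```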

Prediction in Automobile Example The design team is planning a new car with the following characteristics: horsepower = 200, weight = 4000 lb, cargo = 18 ft3, seating = 5 adults. From JMP, 95% prediction interval: (37.86, 52.31)

Multicollinearity DATA: A real estate agent wants to develop a model to predict the selling price of a home. The agent takes a random sample of 100 homes that were recently sold and records the selling price (y), the number of bedrooms (x1), the house size in square feet (x2), and the lot size in square feet (x3). The data are in houseprice.JMP.
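As a sketch of how this model could be fit outside JMP (the CSV file name and column names below are assumptions, not the actual contents of houseprice.JMP):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical column names for the house price data.
homes = pd.read_csv("houseprice.csv")
X = sm.add_constant(homes[["bedrooms", "house_size", "lot_size"]])
model = sm.OLS(homes["price"], X).fit()

# The summary reports the whole-model F statistic alongside the individual
# t statistics for each slope, which is the comparison discussed next.
print(model.summary())
```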

Note: The regression results for the house price data illustrate how the F test is more powerful than individual t tests for testing whether a group of slopes in a multiple regression are all zero.

Multicollinearity Multicollinearity: explanatory variables are highly correlated with each other. When this is the case, it is often hard to determine their individual regression coefficients: there is very little information in the data set about what would happen if we fixed house size and changed lot size.

Since house size and lot size are highly correlated, for a fixed house size, lot size does not vary much. The standard error for estimating the coefficient of lot size is therefore large, and consequently the coefficient may not be significant; the same holds for the coefficient of house size. So, while it appears that at least one of the coefficients is significant (see the ANOVA F test), you cannot tell which one is the useful one.

Consequences of Multicollinearity Standard errors of the regression coefficients are large; as a result, the t statistics for testing the population regression coefficients are small. Regression coefficient estimates are unstable: signs of coefficients may be the opposite of what is intuitively reasonable (e.g., a negative sign on lot size), and dropping or adding one variable in the regression causes large changes in the estimates of the coefficients of the other variables.

Detecting Multicollinearity Pairwise correlations between explanatory variables are high. The overall F-statistic for testing the usefulness of the predictors is large, but the individual t statistics are small. Variance inflation factors (VIFs) are large.
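For reference, the standard definition of the variance inflation factor (standard notation, not transcribed from the slide):

$$\mathrm{VIF}_j = \frac{1}{1 - R_j^2},$$

where $R_j^2$ is the R-squared from regressing the explanatory variable $x_j$ on all of the other explanatory variables. A VIF of 1 means $x_j$ is uncorrelated with the other explanatory variables; a large VIF means the standard error of its coefficient is greatly inflated by multicollinearity.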

Using VIFs To obtain VIFs, after Fit Model, go to Parameter Estimates, right click, click Columns and click VIFs. Detecting multicollinearity with VIFs: Any individual VIF greater than 10 indicates multicollinearity. Average of all VIFs considerably greater than 1 also indicates multicollinearity.
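Outside JMP, a minimal sketch of the same calculation using statsmodels (the file and column names are assumptions, as before):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical column names for the house price data.
homes = pd.read_csv("houseprice.csv")
X = sm.add_constant(homes[["bedrooms", "house_size", "lot_size"]])

# Compute the VIF for each explanatory variable (skipping the intercept column).
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
```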

Problems caused by multicollinearity If interest is in predicting y, then as long as the pattern of multicollinearity continues for those observations where forecasts are desired (e.g., house size and lot size are either both high, both medium, or both small), multicollinearity is not particularly problematic. If interest is in obtaining individual regression coefficients, there is no good solution in the face of multicollinearity. If interest is in predicting y for observations where the pattern of multicollinearity is different from that in the sample (e.g., large house size, small lot size), there is no good solution (this would be extrapolation).

Dealing with Multicollinearity Suffer: if prediction within the range of the data is the only goal, and not the interpretation of the coefficients, then leave the multicollinearity alone. Combine: in some cases, it may be possible to combine variables to reduce multicollinearity (see next slide). Omit a variable: multicollinearity can be reduced by removing one of the highly correlated variables. However, if one wants to estimate the partial slope of one variable holding the other variables fixed, omitting a variable is not an option, as it changes the interpretation of the slope.

Combining horsepower and weight in cars data
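The content of this slide did not transcribe. As one hypothetical illustration of what combining two highly correlated predictors can look like (this particular composite, horsepower per pound, is an assumption rather than the combination actually used in the lecture):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file/column names for the cars data.
cars = pd.read_csv("cars.csv")

# Replace the two collinear predictors with a single composite variable.
cars["hp_per_lb"] = cars["horsepower"] / cars["weight"]
X = sm.add_constant(cars[["hp_per_lb", "cargo", "seating"]])
model = sm.OLS(cars["GPM1000"], X).fit()
print(model.summary())
```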

Multiple Regression Example: California Test Score Data The California Standardized Testing and Reporting (STAR) data set californiastar.JMP contains data on test performance, school characteristics and student demographic backgrounds from 1998-1999. Average Test Score is the average of the reading and math scores for a standardized test administered to 5th grade students. One interesting question: What would be the causal effect of decreasing the student-teacher ratio by one student per teacher?

Multiple Regression and Causal Inference Goal: figure out what the causal effect on average test score would be of decreasing the student-teacher ratio while keeping everything else in the world fixed. Lurking variable: a variable that is associated with both average test score and student-teacher ratio. In order to figure out whether a drop in the student-teacher ratio causes higher test scores, we want to compare mean test scores among schools with different student-teacher ratios but the same values of the lurking variables, i.e., we want to hold the values of the lurking variables fixed. If we include all of the lurking variables in the multiple regression model, the coefficient on student-teacher ratio represents the change in the mean of test scores that is caused by a one-unit increase in the student-teacher ratio.

Omitted Variables Bias Schools with many English learners tend to have worse resources. The multiple regression that shows how mean test score changes when the student-teacher ratio changes while the percent of English learners is held fixed gives a better idea of the causal effect of the student-teacher ratio than the simple linear regression that does not hold the percent of English learners fixed. Omitted variables bias: bias in estimating the causal effect of a variable from omitting a lurking variable from the multiple regression. The omitted variables bias from omitting the percentage of English learners is -2.28 - (-1.10) = -1.18.
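Written out with the coefficients quoted above (the slope on the student-teacher ratio from the simple regression versus from the multiple regression that also holds the percent of English learners fixed):

$$\text{omitted variables bias} = \hat{\beta}_{\mathrm{STR}}^{\,\mathrm{simple}} - \hat{\beta}_{\mathrm{STR}}^{\,\mathrm{multiple}} = -2.28 - (-1.10) = -1.18.$$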

Key Warning About Multiple Regression Even if we have included many lurking variables in the multiple regression, we may have failed to include one or not have enough data to include one. There will then be omitted variables bias. The best way to study causal effects is to do a randomized experiment.