Chapter 17: Understanding Residuals
17.1 Examining Residuals for Groups
Consider the following study of the Sugar content vs. the Calorie content of breakfast cereals: there is no obvious departure from the linearity assumption.
17.1 Examining Residuals for Groups
The histogram of residuals looks fairly normal…
17.1 Examining Residuals for Groups
…but the distribution shows signs of being a composite of three groups of cereal types. The mean Calorie content may depend on some factor besides Sugar content.
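Below is a minimal Python sketch of this check, using made-up sugar and calorie values (the real cereal data are not reproduced here): fit the line, compute the residuals, and look for multiple modes in their histogram.

```python
# Sketch with hypothetical data: fit Calories on Sugar, then inspect
# the residual histogram for signs of several distinct groups.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(17)
sugar = rng.uniform(0, 15, 75)                       # grams per serving
calories = 90 + 2.5 * sugar + rng.normal(0, 8, 75)   # hypothetical cereals

slope, intercept = np.polyfit(sugar, calories, 1)
residuals = calories - (intercept + slope * sugar)

plt.hist(residuals, bins=15)
plt.xlabel("Residual (Calories)")
plt.title("Look for multiple modes suggesting groups")
plt.show()
```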
17.1 Examining Residuals for Groups
Examining the residuals of three groups (puffed cereals, which have high air content per serving; cereals with fruits and/or nuts, which have high fat/oil content per serving; and all others) suggests factors other than Sugar content that may be important in determining Calorie content:
- Puffing: replacing cereal with “air” lowers the Calorie content, even for high-sugar cereals.
- Fat/oil: fats add to the Calorie content, even for low-sugar cereals.
17.1 Examining Residuals for Groups
Conclusion: it may be better to report three regressions: one for puffed cereals, one for high-fat cereals, and one for all others.
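A sketch of that advice, assuming a hypothetical group label for each cereal; np.polyfit stands in for whatever regression routine you prefer.

```python
# Fit a separate least-squares line for each labeled group.
import numpy as np

def fit_by_group(sugar, calories, group):
    """Return {group_label: (slope, intercept)} from per-group fits."""
    sugar, calories, group = map(np.asarray, (sugar, calories, group))
    fits = {}
    for g in np.unique(group):
        mask = group == g
        slope, intercept = np.polyfit(sugar[mask], calories[mask], 1)
        fits[g] = (slope, intercept)
    return fits

# Hypothetical usage, with one label per cereal:
# fit_by_group(sugar, calories, ["puffed", "fruit/nut", "other", ...])
```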
17.2 Extrapolation and Prediction
Extrapolating: predicting a y value by extending the regression model to regions outside the range of the x-values of the data.
17.2 Extrapolation and Prediction
Why is extrapolation dangerous? It introduces the questionable and untested assumption that the relationship between x and y does not change.
17.2 Extrapolation and Prediction
Cautionary example: oil prices in constant dollars. Model prediction (extrapolation): on average, a barrel of oil will increase by $7.39 per year from 1983 to 1998.
17.2 Extrapolation and Prediction
Cautionary example: oil prices in constant dollars, actual price behavior. Extrapolating the 1971–1982 model into the ’80s and ’90s led to grossly erroneous forecasts.
17.2 Extrapolation and Prediction
Remember: linear models ought not to be trusted beyond the span of the x-values of the data. If you extrapolate far into the future, be prepared for the actual values to be (possibly quite) different from your predictions.
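One way to make this warning operational is to flag predictions that fall outside the observed x-range. The data below are synthetic stand-ins for the oil-price series (only the $7.39-per-year slope comes from the slide).

```python
# Guard a simple linear prediction against extrapolation.
import numpy as np

def safe_predict(x_new, x, y):
    """Predict from a linear fit; warn when x_new lies outside the data."""
    slope, intercept = np.polyfit(x, y, 1)
    x_new = np.asarray(x_new, dtype=float)
    outside = (x_new < x.min()) | (x_new > x.max())
    if outside.any():
        print("Warning: extrapolating beyond the observed x-range; "
              "predictions there are untrustworthy.")
    return intercept + slope * x_new

years = np.arange(1971, 1983)   # the span the model was fit on
prices = 10 + 7.39 * (years - 1971) + np.random.default_rng(1).normal(0, 3, 12)
print(safe_predict([1998], years, prices))   # flagged as an extrapolation
```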
17.3 Unusual and Extraordinary Observations
In regression, an outlier can stand out in two ways. It can have:
1. a large residual, or
2. a large distance from x̄: a “high-leverage point.” A high-leverage point is influential if omitting it gives a regression model with a very different slope.
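For simple regression, leverage has a closed form: hᵢ = 1/n + (xᵢ − x̄)² / Σⱼ(xⱼ − x̄)², so points far from x̄ get high leverage. A small sketch with illustrative data:

```python
# Leverage in simple regression: h_i = 1/n + (x_i - xbar)^2 / sum((x_j - xbar)^2).
import numpy as np

def leverage(x):
    x = np.asarray(x, dtype=float)
    centered = x - x.mean()
    return 1 / len(x) + centered**2 / np.sum(centered**2)

x = np.array([1, 2, 3, 4, 5, 20])   # the point at x = 20 has high leverage
print(leverage(x).round(3))
```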
17.3 Unusual and Extraordinary Observations
Exercise: tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential.
Answer: not high-leverage; large residual; not very influential.
17.3 Unusual and Extraordinary Observations
Exercise: tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential.
Answer: high-leverage; small residual; not very influential.
17.3 Unusual and Extraordinary Observations
Exercise: tell whether the point is a high-leverage point, whether it has a large residual, and whether it is influential.
Answer: high-leverage; medium residual; very influential (omitting the red point changes the slope dramatically!).
17.3 Unusual and Extraordinary Observations
What should you do with a high-leverage point? Sometimes these points are important: they can indicate that the underlying relationship is in fact nonlinear. Other times they simply do not belong with the rest of the data and ought to be omitted. When in doubt, create and report two models: one with the outlier and one without.
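A sketch of the “two models” advice: refit without the suspect point and compare slopes. The data and the flagged index here are hypothetical.

```python
# Compare the fitted slope with and without a suspected influential point.
import numpy as np

def with_and_without(x, y, idx):
    """Return (slope_all, slope_without_idx) for a simple linear fit."""
    slope_all, _ = np.polyfit(x, y, 1)
    keep = np.arange(len(x)) != idx
    slope_wo, _ = np.polyfit(x[keep], y[keep], 1)
    return slope_all, slope_wo

x = np.array([1., 2., 3., 4., 5., 15.])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 9.0])   # last point bends the line
print(with_and_without(x, y, idx=5))   # very different slopes -> influential
```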
17.3 Unusual and Extraordinary Observations
WARNING: influential points do not necessarily have high residuals! So use scatterplots rather than residual plots to identify high-leverage outliers. (Residual plots, of course, work well for identifying high-residual outliers.)
17.4 Working with Summary Values
Scatterplots of summarized (averaged) data tend to show less variability than the un-summarized data. Example: wind speeds at two locations, collected at 6 AM, noon, 6 PM, and midnight. Raw data: R² = 0.736; daily-averaged data: R² = 0.844; monthly-averaged data: R² = 0.942.
17.4 Working with Summary Values
WARNING: be suspicious of conclusions based on regressions of summary data. Such regressions may look better than they really are! In particular, the strength of the correlation will be misleading.
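A synthetic demonstration of why: two series that share a daily signal plus independent noise correlate modestly in raw form but strongly once averaged by day. (The numbers here are simulated, not the wind-speed data.)

```python
# Averaging removes within-day noise, so the summarized R^2 is inflated.
import numpy as np

rng = np.random.default_rng(42)
n_days = 120
daily_signal = rng.normal(10, 3, n_days)                  # shared weather pattern
site1 = np.repeat(daily_signal, 4) + rng.normal(0, 2, 4 * n_days)
site2 = np.repeat(daily_signal, 4) + rng.normal(0, 2, 4 * n_days)

def r_squared(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

print("raw:  ", round(r_squared(site1, site2), 3))
print("daily:", round(r_squared(site1.reshape(-1, 4).mean(axis=1),
                                site2.reshape(-1, 4).mean(axis=1)), 3))
```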
17.5 Autocorrelation
Time-series data are sometimes autocorrelated, meaning points near each other in time are related.
- First-order autocorrelation: adjacent measurements are related.
- Second-order autocorrelation: every other measurement is related.
- etc.
Autocorrelation violates the independence condition. Regression analysis of autocorrelated data can produce misleading results.
17.5 Autocorrelation
Autocorrelation can sometimes be detected by plotting residuals versus time. But don’t rely on plots to detect autocorrelation; rather, use the Durbin-Watson statistic.
17.5 Autocorrelation
Durbin-Watson statistic: estimates the first-order autocorrelation. The value of D will always be between 0 and 4, inclusive:
- D = 0: perfect positive autocorrelation (eₜ = eₜ₋₁ for all points)
- D = 2: no autocorrelation
- D = 4: perfect negative autocorrelation (eₜ = −eₜ₋₁ for all points)
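The statistic itself is computed from the time-ordered residuals as D = Σₜ(eₜ − eₜ₋₁)² / Σₜ eₜ², which is the standard definition; a minimal sketch:

```python
# Durbin-Watson statistic from time-ordered residuals.
import numpy as np

def durbin_watson(e):
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

print(durbin_watson([1, -1, 1, -1, 1, -1]))   # ~3.33: strong negative autocorrelation
print(durbin_watson([1, 1, 1, -1, -1, -1]))   # ~0.67: strong positive autocorrelation
```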
17.5 Autocorrelation
Whether the calculated Durbin-Watson statistic D indicates significant autocorrelation depends on the sample size, n, and the number of predictors in the regression model, k. Table W of Appendix C provides critical values for the Durbin-Watson statistic (d_L and d_U) based on n and k.
17.5 Autocorrelation
Testing for positive first-order autocorrelation:
- If D < d_L, there is evidence of positive autocorrelation.
- If d_L < D < d_U, the test is inconclusive.
- If D > d_U, there is no evidence of positive autocorrelation.
Testing for negative first-order autocorrelation:
- If D > 4 − d_L, there is evidence of negative autocorrelation.
- If 4 − d_U < D < 4 − d_L, the test is inconclusive.
- If D < 4 − d_U, there is no evidence of negative autocorrelation.
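These rules translate directly into code. The d_L and d_U values must come from a critical-values table (such as Table W) for your n and k; the numbers in the example call below are placeholders.

```python
# Direct translation of the Durbin-Watson decision rules above.
def positive_autocorrelation_test(D, dL, dU):
    if D < dL:
        return "evidence of positive autocorrelation"
    if D > dU:
        return "no evidence of positive autocorrelation"
    return "inconclusive"

def negative_autocorrelation_test(D, dL, dU):
    if D > 4 - dL:
        return "evidence of negative autocorrelation"
    if D < 4 - dU:
        return "no evidence of negative autocorrelation"
    return "inconclusive"

print(positive_autocorrelation_test(D=0.9, dL=1.10, dU=1.54))  # placeholder values
```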
17.5 Autocorrelation
Dealing with autocorrelation: time series methods (Chapter 20) attempt to deal with the problem by modeling the errors. Or look for a predictor variable (Chapter 19) that removes the dependence in the residuals. A simple solution: sample from the time series to minimize first-order autocorrelation (sampling may do nothing to minimize higher-order autocorrelation, though).
17.6 Linearity
Some data show departures from linearity. Example: auto Weight vs. Fuel Efficiency. The linearity condition is not satisfied.
17.6 Linearity
In cases involving upward bends of negatively correlated data, try analyzing −1/y (the negative reciprocal of y) vs. x instead. The linearity condition now appears satisfied.
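A sketch of the re-expression with synthetic weight/mileage values: compare how straight the plot is before and after, using |r| as a rough proxy for straightness.

```python
# Re-express fuel efficiency as -1/y and compare straightness via correlation.
import numpy as np

rng = np.random.default_rng(7)
weight = np.linspace(2000, 5000, 30)                 # pounds (hypothetical)
mpg = 100000 / weight + rng.normal(0, 2, 30)         # curved relationship

print("r(weight, mpg)    =", round(np.corrcoef(weight, mpg)[0, 1], 3))
print("r(weight, -1/mpg) =", round(np.corrcoef(weight, -1 / mpg)[0, 1], 3))
# The re-expressed |r| is closer to 1, indicating a straighter scatterplot.
```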
17.7 Transforming (Re-expressing) Data
The auto Weight vs. Fuel Economy example (17.6) illustrates the principle of transforming data. There is nothing sacred about the way x-values or y-values are measured. From the standpoint of measurement, all of the following may be equally reasonable:
- x vs. y
- x vs. −1/y
- x² vs. y
- x vs. log(y)
One or more of these transformations may be useful for making data more linear, more normal, etc.
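If it helps, the same idea can be automated: score a few standard re-expressions of y by |correlation| with x and start from the best one. This is only a heuristic (|r| rewards straightness but ignores the other assumptions), and it assumes y > 0 so that log and square root are defined.

```python
# Heuristic: rank candidate re-expressions of y by |r| with x.
import numpy as np

def best_transform(x, y):
    """Score standard re-expressions of y; assumes y > 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    candidates = {
        "y": y,
        "sqrt(y)": np.sqrt(y),
        "log(y)": np.log(y),
        "-1/y": -1.0 / y,
    }
    scores = {name: abs(np.corrcoef(x, t)[0, 1])
              for name, t in candidates.items()}
    return max(scores, key=scores.get), scores
```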
17.7 Transforming (Re-expressing) Data
Goals of re-expression:
- Goal 1: make the distribution of a variable more symmetric.
- Goal 2: make the spread of several groups more alike. We’ll see methods later in the book that can be applied only to groups with a common standard deviation.
- Goal 3: make the form of a scatterplot more nearly linear.
- Goal 4: make the scatter in a scatterplot or residual plot spread out evenly rather than following a fan shape.
17.8 The Ladder of Powers
Ladder of Powers: a collection of frequently useful re-expressions, ordered by power of y, such as power 2 (y²), power 1 (y, no change), power 1/2 (√y), “power 0” (log y), power −1/2 (−1/√y), and power −1 (−1/y).
17.8 The Ladder of Powers
Exercise: you want to model the relationship between prices for various items in Paris and Hong Kong. The scatterplot of Hong Kong prices vs. Paris prices shows a generally straight pattern with a small amount of scatter. What re-expression (if any) of the Hong Kong prices might you start with?
Answer: no re-expression is needed to strengthen the linearity assumption. More information is needed to decide whether re-expression might strengthen the normality assumption or the equal-variance assumption.
17.8 The Ladder of Powers
Exercise: you want to model the population growth of the United States over the past 200 years, with a percentage growth rate that’s nearly constant. The scatterplot shows a strongly upward-curving pattern. What re-expression (if any) of the population values might you start with?
Answer: try a “power 0” (logarithmic) re-expression of the population values. This should strengthen the linearity assumption.
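Concretely, constant percentage growth makes log(population) linear in year, so the growth rate can be read off the fitted slope. The values below are illustrative, not actual census figures.

```python
# Fit log(population) vs. year; exp(slope) - 1 recovers the growth rate.
import numpy as np

year = np.arange(1800, 2001, 20)
pop_millions = 5.3 * 1.02 ** (year - 1800)   # hypothetical 2%/yr growth

slope, intercept = np.polyfit(year, np.log(pop_millions), 1)
print("annual growth rate ~", round(np.exp(slope) - 1, 4))   # ~0.02
```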
What Can Go Wrong?
- Make sure the relationship is straight enough to fit a regression model.
- Be alert for extreme residuals and what they have to say about the data.
- Be on guard for data that are a composite of values from different groups. If you find data subsets that behave differently, consider fitting a different model to each group.
- Beware of extrapolating; be particularly wary of extrapolating far into the future.
- Look for unusual points: points with large residuals and high-leverage points.
What Can Go Wrong?
- Beware of high-leverage points, especially those that are influential. Consider setting aside outliers and re-running the regression.
- Treat unusual points honestly: you must not eliminate points simply to “get a good fit.”
- Be alert for autocorrelation. A Durbin-Watson test can be useful for revealing first-order autocorrelation.
What Can Go Wrong?
- Watch out when dealing with data that are summaries. These tend to inflate the impression of the strength of the correlation.
- Re-express your data when necessary.
What Have We Learned?
- Watch out for more than one group hiding in your regression analysis.
- The Linearity Condition says that the relationship should be reasonably linear to fit a regression. Satisfaction of this condition is best assessed after performing the regression and examining the residuals.
- The Outlier Condition refers to two kinds of points: those with large residuals and those with high leverage. It’s a good idea to perform the regression analysis both with and without them.