Chapter 9: Let’s Get It Straight! Re-expressing Data; Curvilinear Regression
Straight to the Point We cannot use a linear model unless the relationship between the two variables is (approximately) linear. Often re-expression is necessary to straighten curved relationships so that we can fit and use a simple linear model. Ways to re-express data involve using logarithms, powers, and reciprocals. Re-expressions can be seen in everyday life—everybody does it.
Straight to the Point (cont.) The relationship between fuel efficiency (in miles per gallon) and weight (in pounds) for late model cars looks fairly linear at first:
Straight to the Point (cont.) A look at the residuals plot shows a problem:
Straight to the Point (cont.) We can re-express fuel efficiency as gallons per hundred miles (a reciprocal) and eliminate the bend in the original scatterplot:
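The reciprocal re-expression above is a one-line computation. A minimal sketch (the mpg values below are made up for illustration, not the chapter's car data):

```python
# Re-express fuel efficiency: miles per gallon -> gallons per 100 miles.
def gallons_per_100_miles(mpg):
    """Reciprocal re-expression: 100 / mpg."""
    return 100.0 / mpg

# Hypothetical sample values; e.g. 25 mpg -> 4.0 gallons per 100 miles.
mpg_values = [12, 20, 25, 33, 50]
re_expressed = [gallons_per_100_miles(m) for m in mpg_values]
print(re_expressed)
```

Plotting weight against the re-expressed values, rather than against mpg, is what straightens the bend.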
Straight to the Point (cont.) A look at the residuals plot for the new model seems more reasonable:
Goals of Re-expression Goal 1: Make the distribution of a variable (as seen in its histogram, for example) more symmetric.
Goals of Re-expression (cont.) Goal 2: Make the spread of several groups (as seen in side-by-side boxplots) more alike, even if their centers differ.
Goals of Re-expression (cont.) Goal 3: Make the form of a scatterplot more nearly linear.
Goals of Re-expression (cont.) Goal 4: Make the scatter in a scatterplot spread out evenly rather than thickening at one end. This can be seen in the two scatterplots we just saw with Goal 3:
The Ladder of Powers There is a family of simple re-expressions that move data toward our goals in a consistent way. This collection of re-expressions is called the Ladder of Powers. The Ladder of Powers orders the effects that the re-expressions have on data.
The Ladder of Powers

  Power  Name                        Comment
  2      Square of data values       Try with unimodal distributions that are skewed to the left.
  1      Raw data                    Data with positive and negative values and no bounds are less likely to benefit from re-expression.
  1/2    Square root of data values  Counts often benefit from a square root re-expression.
  “0”    We’ll use logarithms here   Measurements that cannot be negative often benefit from a log re-expression.
  -1/2   Reciprocal square root      An uncommon re-expression, but sometimes useful.
  -1     The reciprocal of the data  Ratios of two quantities (e.g., mph) often benefit from a reciprocal.
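The ladder can be written as a single function of the data value and the chosen power, with the logarithm standing in at the “0” rung. A sketch (not from any particular statistics package):

```python
import math

def ladder(y, power):
    """Apply a Ladder of Powers re-expression to a positive value y.

    power = 2, 1, 0.5, -0.5, -1 are the usual rungs; power = 0 is the
    special "0" rung, where the logarithm takes the place of y**0.
    """
    if power == 0:              # the "0" rung: use logarithms
        return math.log10(y)
    return y ** power

data = [1, 10, 100]
print([ladder(y, 0) for y in data])    # [0.0, 1.0, 2.0]
print([ladder(y, -1) for y in data])   # reciprocals
print([ladder(y, 0.5) for y in data])  # square roots
```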
Tukey’s Rule of Thumb for Re-Expression
Example
Plan B: Attack of the Logarithms When none of the data values is zero or negative, logarithms can be a helpful ally in the search for a useful model. Try taking the logs of both the x- and y-variable. Then re-express the data using some combination of x or log(x) vs. y or log(y).
Plan B: Attack of the Logarithms (cont.)
Power Model: log(y) vs. log(x) Size matters: the sizes of mammals and their metabolic rates. The slope is less than 1, indicating that the nonlinear effect of mass on metabolic rate lessens as mass increases. The average percentage increase is about 70%.
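The slope of a log-log fit has a multiplicative interpretation: if log(y) = a + b·log(x), then multiplying x by k multiplies y by k^b. Taking b = 0.75 as an assumed slope value (chosen to match the roughly 70% figure on the slide), doubling mass gives:

```python
# In a log-log (power) model, log(y) = a + b*log(x), so scaling x by k
# scales y by k**b. With an assumed slope b = 0.75, doubling mass
# multiplies metabolic rate by 2**0.75, roughly a 68-70% increase.
b = 0.75
factor = 2 ** b
print(f"Doubling mass multiplies metabolic rate by {factor:.3f} "
      f"(about a {100 * (factor - 1):.0f}% increase)")
```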
Multiple Benefits We often choose a re-expression for one reason and then discover that it has helped other aspects of an analysis. For example, a re-expression that makes a histogram more symmetric might also straighten a scatterplot or stabilize variance.
Why Not Just a Curve? If there’s a curve in the scatterplot, why not just fit a curve to the data?
Why Not Just a Curve? (cont.) The mathematics and calculations for “curves of best fit” are considerably more difficult than “lines of best fit.” Besides, straight lines are easy to understand. We know how to think about the slope and the y-intercept.
What Can Go Wrong? Don’t expect your model to be perfect. Don’t choose a model based on R² alone:
What Can Go Wrong? (cont.) Beware of multiple modes. Re-expression cannot pull separate modes together. Watch out for scatterplots that turn around. Re-expression can straighten many bent relationships, but not those that go up and down.
What Can Go Wrong? (cont.) Watch out for negative data values. It’s impossible to re-express negative values by any power that is not a whole number on the Ladder of Powers, or to re-express values that are zero for negative powers. Watch for data far from 1. Data values that are all very far from 1 may not be much affected by re-expression unless the range is very large. If all the data values are large (e.g., years), consider subtracting a constant to bring them back near 1. Don’t stray too far from the ladder.
What have we learned? When the conditions for regression are not met, a simple re-expression of the data may help. A re-expression may make the: Distribution of a variable more symmetric. Spread across different groups more similar. Form of a scatterplot straighter. Scatter around the line in a scatterplot more consistent.
What have we learned? (cont.) Taking logs is often a good, simple starting point. To search further, the Ladder of Powers or the log-log approach can help us find a good re-expression. Our models won’t be perfect, but re-expression can lead us to a useful model.
Chapter 9 (cont.): Re-expressing Data; Curvilinear Regression (aka Polynomial Regression)
Polynomial Regression Model To model curved behavior we include additional terms that have higher powers of the explanatory variable x. Second-Order Model: y = β0 + β1x + β2x² + ε. Third-Order Model: y = β0 + β1x + β2x² + β3x³ + ε. We will not go beyond degree 3.
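A polynomial model is still fit by ordinary least squares; the higher powers of x simply become extra columns. A minimal sketch using noiseless toy data (not the restaurant data that follows):

```python
import numpy as np

# Second-order polynomial fit via least squares.
# Toy data generated from y = 1 + 2x - 0.5x^2 (an illustration).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 1 + 2 * x - 0.5 * x ** 2

# np.polyfit returns coefficients highest power first: [b2, b1, b0].
coeffs2 = np.polyfit(x, y, deg=2)
print(coeffs2)  # recovers approximately [-0.5, 2.0, 1.0]

# A third-order fit just changes deg=3 and adds a b3 coefficient.
```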
Example: Fast Food Revenue You are asked to develop a regression model for a fast food restaurant. The primary market is middle-income families and their children, particularly those between the ages of 5 and 12. Response variable —gross restaurant revenue Explanatory variable — family income (median family income in “neighborhood” of restaurant)
Simple linear regression results:
Dependent Variable: Revenue (000's)
Independent Variable: Income (000's)
Revenue (000's) = 804.18115 + 11.627225 Income (000's)
Sample size: 25
R (correlation coefficient) = 0.435466
R-sq = 0.18963064
Estimate of error standard deviation: 119.61676

Parameter estimates:
  Parameter  Estimate   Std. Err.  Alternative  DF  T-Stat     P-value
  Intercept  804.18115  123.62402  ≠ 0          23  6.5050557  <0.0001
  Slope      11.627225  5.0118656  ≠ 0          23  2.3199395  0.0296

Analysis of variance table for regression model:
  Source  DF  SS         MS         F-stat     P-value
  Model   1   77008.275  77008.275  5.3821195  0.0296
  Error   23  329087.88  14308.169
  Total   24  406096.16
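The fitted equation in the output above can be used directly for prediction (both variables in $000s):

```python
# Fitted simple linear model from the regression output:
# Revenue (000's) = 804.18115 + 11.627225 * Income (000's)
def predicted_revenue(income_thousands):
    return 804.18115 + 11.627225 * income_thousands

# Predicted gross revenue (in $000s) for a $25,000 median family income.
print(round(predicted_revenue(25), 2))
```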
Residual Plot (Uh-oh)
Scatterplot Indicates 2nd Order Term Needed
Excel Output (second-order model)

Regression Statistics
  Multiple R      0.896068433
  R Square        0.802938636
  Adjusted R Sq   0.785023967
  Standard Error  60.31201568
  Observations    25

ANOVA
  Source      df  SS           MS      F        Significance F
  Regression  2   326070.2968  163035  44.8202  1.74027E-08
  Residual    22  80025.86318  3637.5
  Total       24  406096.16

  Term            Coefficients  Standard Error  t Stat  P-value  Lower 95%     Upper 95%
  Intercept       -1454.52099   279.9927688     -5.195  3.3E-05  -2035.190454  -873.8515
  Income (000's)  209.8148127   24.08410136     8.7118  1.4E-08  159.8674435   259.76218
  Income sq       -4.17050304   0.504009277     -8.275  3.4E-08  -5.215754302  -3.125252
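Because the Income-squared coefficient is negative, the fitted quadratic opens downward and has a peak. A quadratic b0 + b1·x + b2·x² is maximized at x = -b1/(2·b2), so the fitted coefficients above locate the income level with the highest predicted revenue:

```python
# The quadratic model Revenue = -1454.52 + 209.81*Income - 4.1705*Income^2
# peaks where its derivative is zero: Income = -b1 / (2 * b2).
b1 = 209.8148127   # coefficient on Income (000's)
b2 = -4.17050304   # coefficient on Income squared

peak_income = -b1 / (2 * b2)
print(round(peak_income, 2))  # income (in $000s) with the highest predicted revenue
```

The peak lands near the middle of the target market's income range, consistent with the restaurant's middle-income focus.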
Residual Plots We improved, but can we do even better?
Scatterplots
Expanded Model y = β0 + β1x1 + β2x1² + β3x2 + β4x2² + β5x1x2 + ε, where x1 is the median family income in the “neighborhood” and x2 is the average child age in the “neighborhood”. Should we include the interaction term x1x2 in the model? When in doubt, it’s probably best to include it.
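Fitting the expanded model is again just least squares on a design matrix whose columns are the two predictors, their squares, and their product. A sketch on noiseless toy data generated from known coefficients (the restaurant data itself is not reproduced here):

```python
import numpy as np

# Expanded second-order model with an interaction term:
# y = b0 + b1*x1 + b2*x1^2 + b3*x2 + b4*x2^2 + b5*x1*x2
# Toy data (an illustration, not the chapter's data set).
rng = np.random.default_rng(0)
x1 = rng.uniform(15, 35, size=50)   # "income" stand-in
x2 = rng.uniform(5, 12, size=50)    # "age" stand-in
y = 10 + 2 * x1 - 0.1 * x1 ** 2 + 3 * x2 - 0.2 * x2 ** 2 + 0.05 * x1 * x2

# Design matrix: intercept, x1, x1^2, x2, x2^2, interaction x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x1 ** 2, x2, x2 ** 2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))  # recovers the generating coefficients
```

Note that columns like x1, x1², and x1·x2 are strongly correlated with one another, which is exactly the multicollinearity flagged in the output on the next slide.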
Excel Output (expanded model)

Regression Statistics
  Multiple R      0.95212131
  R Square        0.90653499
  Adjusted R Sq.  0.88193893
  Standard Error  44.6953328
  Observations    25

ANOVA
  Source      df  SS           MS     F         Significance F
  Regression  5   368140.3772  73628  36.85692  3.86193E-09
  Residual    19  37955.78279  1997.7
  Total       24  406096.16

  Term            Coefficients  Standard Error  t Stat  P-value   Lower 95%     Upper 95%
  Intercept       -1133.9813    320.0193142     -3.543  0.00217   -1803.789382  -464.1731
  Income (000's)  173.203169    28.20399481     6.1411  6.66E-06  114.1715291   232.23481
  Income sq       -3.7261288    0.54215586      -6.873  1.48E-06  -4.860874066  -2.591384
  Age             23.5499634    32.23447166     0.7306  0.473947  -43.91756117  91.017488
  Age sq          -3.8687072    1.179054451     -3.281  0.003928  -6.336496532  -1.400918
  (Income)(Age)   1.96726822    0.944081682     2.0838  0.050921  -0.008717454  3.9432539

Multicollinearity
Residual Plot