Use of Weighted Least Squares
In fitting models of the form y_i = f(x_i) + ε_i, i = 1, …, n, least squares is optimal under the condition that ε_1, …, ε_n are i.i.d. N(0, σ²), and it is a reasonable fitting method when this condition is at least approximately satisfied. (Most importantly, we require here that there should be no significant outliers.)
In the case where we have instead ε_1, …, ε_n independent N(0, σ_i²), it is natural to use weighted least squares: choose f from within the permitted class of functions to minimise Σ_i w_i (y_i − f(x_i))², where we take w_i proportional to 1/σ_i² (clearly only relative weights matter).
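As an illustration (a minimal sketch on simulated data with made-up names, not part of the hill races example), the weights argument of lm() implements exactly this criterion; the same coefficients can be recovered directly from the weighted normal equations:
> set.seed(1)
> x = 1:20
> y = 2 + 3*x + rnorm(20, sd = x)          # error s.d. proportional to x
> w = 1/x^2                                # weights proportional to 1/sigma_i^2
> fit.w = lm(y ~ x, weights = w)
> X = cbind(1, x)
> beta = solve(t(X) %*% (w*X), t(X) %*% (w*y))   # (X'WX)^{-1} X'Wy
> cbind(coef(fit.w), beta)                 # the two sets of coefficients agree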
For the hill races data, it is natural to assume greater variability in the times for the longer races, with the standard deviation perhaps proportional to the distance. We therefore try refitting the quadratic model with weights proportional to 1/distance²:
> model2w = lm(time ~ -1 + dist + I(dist^2) + climb + I(climb^2), data = hills[-18,], weights = 1/dist^2)
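An informal check of the assumption that variability grows with distance (an optional extra step, not part of the original analysis) is to plot the absolute residuals from the earlier unweighted fit, model2, against distance:
> plot(abs(resid(model2)) ~ dist, data = hills[-18,])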
The fitted model is now time = 4.94 × distance + … × (distance)² + … × climb + … × (climb)² + ε, with the remaining coefficients as given in the summary output. Note that the residual summary in that output is on a “reweighted” scale, and cannot be directly compared with the earlier residual summaries. While the coefficients here appear to have changed somewhat from those in the earlier, unweighted, fit of Model 2, the fitted model is not really very different.
This is confirmed by the plot of the residuals from the weighted fit against those from the unweighted fit, produced by
> plot(resid(model2w) ~ resid(model2))
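If the two fits really are essentially equivalent, the points in this plot should lie close to the line of equality, which can be added (an optional extra step) with
> abline(0, 1)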
Resistant Regression
As already observed, least squares fitting is very sensitive to outlying observations. However, there are also a large number of resistant fitting techniques available. One such is least trimmed squares: choose f from within the permitted class of functions to minimise the sum of the k smallest of the squared residuals (y_i − f(x_i))², i = 1, …, n, where k is roughly half of n.
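To make the criterion concrete, here is a minimal sketch (simulated data and a made-up helper function, not the algorithm R actually uses) of the least trimmed squares objective for a straight-line fit:
> lts.objective = function(beta, x, y, k) {
+   r2 = sort((y - beta[1] - beta[2]*x)^2)   # squared residuals, smallest first
+   sum(r2[1:k])                             # keep only the k smallest
+ }
> set.seed(1)
> x = 1:20
> y = 2 + 3*x + rnorm(20)
> y[20] = 100                                # one gross outlier
> k = floor(length(x)/2) + 1
> lts.objective(c(2, 3), x, y, k)            # small: the true line fits the bulk of the data
> lts.objective(coef(lm(y ~ x)), x, y, k)    # larger: the least squares line is dragged by the outlier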
Example: phones data. The R dataset phones in the package MASS gives the annual number of phone calls (millions) in Belgium over the period 1950–1973. Consider the model calls = a + b × year. The following two graphs plot the data and show the result of fitting the model by least squares and then fitting the same model by least trimmed squares.
These graphs are produced by the following code:
> plot(calls ~ year)
> phonesls = lm(calls ~ year)
> abline(phonesls)
> plot(calls ~ year)
> library(lqs)
> phoneslts = lqs(calls ~ year)
> abline(phoneslts)
The explanation for the outlying observations is that, for a period of time, the total length of all phone calls in each year was accidentally recorded instead of the number of calls.
Nonparametric Regression
Sometimes we simply wish to fit a smooth model without specifying any particular functional form for f. Again there are very many techniques here. One such is called loess. This constructs the fitted value f(x_i) for each observation i by performing a local regression using only those observations with x values in the neighbourhood of x_i (and attaching most weight to the closest observations).
Example: cars data. The R data frame cars (in the base package) records 50 observations of speed (mph) and stopping distance (ft). These observations were collected in the 1920s! We treat stopping distance as the response variable and seek to model its dependence on speed.
We try to fit a model using loess. Possible R code is
> data(cars)
> attach(cars)
> plot(cars)
> library(modreg)
> carslo = loess(dist ~ speed)
> lines(fitted(carslo) ~ speed)
An optional argument span can be increased from its default value of 0.75 to give more smoothing:
> plot(cars)
> carslo2 = loess(dist ~ speed, span = 1)
> lines(fitted(carslo2) ~ speed)
More robust and resistant fits can be obtained by specifying the further optional argument family = "symmetric".
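For example (one possible call, following the same pattern as above; carslo3 is just a made-up name):
> carslo3 = loess(dist ~ speed, family = "symmetric")
> lines(fitted(carslo3) ~ speed, lty = 2)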
Models with Qualitative Explanatory Variables (Factors)
Data: n = 22 pairs (x_i, y_i), where y is the response; the data arise under two different sets of conditions (type = 1 or 2) and are presented below sorted by x within type.
Row   y   x   type
(the 22 rows of data values)
A plot of y against x, distinguishing the two types, is informative here; an appropriate R command will do this (one possibility is shown below).
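One such command, assuming the variables y, x and the factor type (set up below) are available, uses a different plotting symbol for each type:
> plot(y ~ x, pch = as.numeric(type))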
We first model the responses ignoring the variable type.
> mod1 = lm(y ~ x)
> abline(mod1)
> summary(mod1)
Call:
lm(formula = y ~ x)
Residuals:
     Min       1Q   Median       3Q      Max
     ...      ...      ...      ...      ...
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)      ...        ...     ...      ... ***
x                ...        ...     ...  ...e-08 ***
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Residual standard error: ... on 20 degrees of freedom
Multiple R-Squared: ..., Adjusted R-squared: ...
F-statistic: 69.4 on 1 and 20 DF,  p-value: 6.201e-08
> summary.aov(mod1)
            Df  Sum Sq Mean Sq F value   Pr(>F)
x            1     ...     ...     ...  ...e-08 ***
Residuals   20     ...     ...
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
We now model the responses using a model which includes the qualitative variable type, which was declared as a factor when the data frame was set up.
> type = factor(c(rep(1,14), rep(2,8)))
> mod2 = lm(y ~ x + type)
> summary(mod2)
Call:
lm(formula = y ~ x + type)
Residuals:
     Min       1Q   Median       3Q      Max
     ...      ...      ...      ...      ...
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)      ...        ...     ...  ...e-05 ***
x                ...        ...     ...  ...e-11 ***
type2            ...        ...     ...  ...e-06 ***
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Residual standard error: ... on 19 degrees of freedom
Multiple R-Squared: ..., Adjusted R-squared: ...
F-statistic: ... on 2 and 19 DF,  p-value: 2.001e-11
Interpreting the output: the fit gives one straight line in x for each type, the two lines having the same slope but different intercepts (the type coefficient is the shift in intercept for type 2). So, e.g., for observation 1 (x = 2.4, type = 1) the fitted value is the intercept plus the x coefficient times 2.4, while for observation 20 (x = 9.1, type = 2) it is the intercept plus the x coefficient times 9.1 plus the type coefficient.
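A sketch of how these two fitted values are built from the estimated coefficients (type2 is the coefficient name R produces for a factor with levels 1 and 2; the numerical estimates are those in the output above):
> b = coef(mod2)
> b["(Intercept)"] + b["x"]*2.4                 # observation 1:  x = 2.4, type = 1
> b["(Intercept)"] + b["x"]*9.1 + b["type2"]    # observation 20: x = 9.1, type = 2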
> summary.aov(mod2)
            Df  Sum Sq Mean Sq F value   Pr(>F)
x            1     ...     ...     ...  ...e-11 ***
type         1     ...     ...     ...  ...e-06 ***
Residuals   19     ...     ...
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
The fitted values for Model 2 can be obtained in R by
> fitted.values(mod2)
The total variation in the responses is S_yy, the total sum of squares in the ANOVA table above; variable x explains 77.6% of this total, and the coefficient associated with it (0.6090) is highly significant (significantly different from 0): it has a negligible P-value.
In the presence of x, type explains a further 14.9% of the total variation, and its coefficient is also highly significant. Together the two variables explain 92.5% of the total variation. In the presence of x, we gain much by including type.
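These percentages can be recovered from the ANOVA table (each sequential sum of squares divided by the total), and the contribution of type can be tested formally by comparing the two nested fits; a small illustrative calculation, not part of the original output:
> ss = anova(mod2)[["Sum Sq"]]
> ss/sum(ss)              # proportions of S_yy explained by x, by type, and residual
> anova(mod1, mod2)       # F-test that type adds to the model already containing x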
Finally we extend the previous model (mod2) by allowing for an interaction between the explanatory variables x and type. An interaction exists between two explanatory variables when the effect of one on a response variable is different at different values/levels of the other.
For example, consider the effect of a policyholder's age and gender on a response variable, claim rate. If the effect of age on claim rate is different for males and females, then there is an interaction between age and gender.
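In an R model formula the interaction model is requested with the * operator; the two calls below are equivalent (shown only to make the shorthand explicit):
> lm(y ~ x * type)               # shorthand
> lm(y ~ x + type + x:type)      # the same model written out in full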
> mod3 = lm(y ~ x * type)
> summary(mod3)
Call:
lm(formula = y ~ x * type)
Residuals:
     Min       1Q   Median       3Q      Max
     ...      ...      ...      ...      ...
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)      ...        ...     ...  ...e-05 ***
x                ...        ...     ...  ...e-10 ***
type2            ...        ...     ...      ...
x:type2          ...        ...     ...      ...
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Residual standard error: ... on 18 degrees of freedom
Multiple R-Squared: ..., Adjusted R-squared: ...
F-statistic: 74.6 on 3 and 18 DF,  p-value: 2.388e-10
> summary.aov(mod3)
            Df  Sum Sq Mean Sq F value   Pr(>F)
x            1     ...     ...     ...  ...e-11 ***
type         1     ...     ...     ...  ...e-05 ***
x:type       1     ...     ...     ...      ...
Residuals   18     ...     ...
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
The interaction appears to have added nothing: the coefficient of determination is effectively unchanged compared to the previous model, and the extra parameter estimate is small and not significant. In this particular case an interaction term is not helpful; including it has simply confused the issue.
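The same conclusion can be reached with a direct comparison of the two nested fits (one possible extra check, not in the original output):
> anova(mod2, mod3)       # F-test for adding the interaction term to mod2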
In a case where an interaction term does improve the fit and its coefficient is significant, both variables and the interaction between them should be included in the model.