COMPARING LINEAR AND LOGARITHMIC SPECIFICATIONS

When alternative specifications of a regression model have the same dependent variable, R² can be used to compare their goodness of fit. However, when the dependent variable is different, this is not legitimate.

In the case of the linear model, R² measures the proportion of the variance in Y explained by the model. In the case of the semilogarithmic model, it measures the proportion of the variance of the logarithm of Y explained by the model. Clearly these are related, but they are not the same, and direct comparisons are not valid.
However, the goodness of fit of models with linear and logarithmic versions of the same dependent variable can be compared indirectly by subjecting the dependent variable to the Box–Cox transformation and fitting the model

    (Y^λ − 1)/λ = β1 + β2X + u

This is a family of specifications that depends on the parameter λ. The determination of λ is an empirical matter, like the determination of the other parameters. The model is nonlinear in parameters, so a nonlinear regression method should be used. In practice, maximum likelihood estimation is used.
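As a sketch of how the maximum likelihood approach works, λ can be estimated by maximizing the concentrated log-likelihood of the Box–Cox model over a grid of candidate values, running an ordinary least squares regression of the transformed dependent variable at each one. The data, variable names, and grid below are illustrative assumptions, not the text's data set.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform (y**lam - 1)/lam, with the log limit at lam = 0."""
    if abs(lam) < 1e-8:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def concentrated_loglik(y, X, lam):
    """Log-likelihood of the Box-Cox model with beta and sigma^2 concentrated
    out: -(n/2) log(RSS/n) plus the Jacobian term (lam - 1) * sum(log y)."""
    z = boxcox(y, lam)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = float(np.sum((z - X @ beta) ** 2))
    n = len(y)
    return -0.5 * n * np.log(rss / n) + (lam - 1.0) * np.sum(np.log(y))

# Simulated earnings-style data: the true model is log-linear, so the
# maximizing lambda should be near 0 (hypothetical data, not the EAEF set).
rng = np.random.default_rng(0)
n = 500
S = rng.uniform(8, 20, n)
y = np.exp(1.0 + 0.1 * S + rng.normal(0.0, 0.3, n))
X = np.column_stack([np.ones(n), S])

grid = np.linspace(-1.0, 2.0, 301)
lam_hat = max(grid, key=lambda lam: concentrated_loglik(y, X, lam))
```

The grid search stands in for what Stata's boxcox command does internally; a full implementation would also report a standard error and confidence interval for λ.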
The reason that this transformation is of interest in the present context is that specifications with linear and logarithmic dependent variables are special cases.

Putting λ = 1 gives the linear model. The dependent variable is then Y − 1, rather than Y, but subtracting a constant from the dependent variable does not affect the regression results, except for the estimate of the intercept.

Putting λ = 0 gives the (semi-)logarithmic model. Of course, one cannot put λ exactly equal to 0, because the dependent variable would then be zero divided by zero. Strictly, the logarithmic model is the limiting form as λ tends to zero, obtained using L'Hôpital's rule.
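The limiting argument can be written out explicitly: applying L'Hôpital's rule, differentiate numerator and denominator with respect to λ,

```latex
\lim_{\lambda \to 0}\frac{Y^{\lambda}-1}{\lambda}
= \lim_{\lambda \to 0}\frac{\dfrac{d}{d\lambda}\bigl(Y^{\lambda}-1\bigr)}{\dfrac{d}{d\lambda}\,\lambda}
= \lim_{\lambda \to 0}\frac{Y^{\lambda}\ln Y}{1}
= \ln Y
```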
So one could fit the general model and see whether the estimate of λ is close to 0 or close to 1. Of course, 'close' has no precise meaning in econometrics. To approach the issue technically, one should test the hypotheses λ = 0 and λ = 1. The outcome might be that one is rejected and the other not, but it is also possible that neither is rejected, or that both are rejected, given your chosen significance level.
If you are interested only in comparing the fits of the linear and logarithmic specifications, there is a short-cut procedure that involves only standard least squares regressions.

The first step is to divide the observations on the dependent variable by their geometric mean. We will call the transformed variable Y*.

You now regress Y* and log_e Y* on the explanatory variables, leaving the right side of the equation unchanged. (The parameters are given prime marks to emphasize that the coefficients will not be estimates of the original β1 and β2.)

The residual sums of squares are now directly comparable: the specification with the smaller RSS provides the better fit.
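The whole short-cut can be sketched numerically. The code below uses simulated, hypothetical data (the names S and EARNINGS are illustrative, not the EAEF sample) in which earnings are genuinely log-linear in schooling, so the semilogarithmic specification should come out ahead.

```python
import numpy as np

def rss(y, X):
    """Residual sum of squares from an OLS regression of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

# Simulated data with a log-linear truth (hypothetical, not the EAEF set).
rng = np.random.default_rng(42)
n = 500
S = rng.uniform(8, 20, n)
EARNINGS = np.exp(1.0 + 0.1 * S + rng.normal(0.0, 0.3, n))
X = np.column_stack([np.ones(n), S])

# Step 1: divide the dependent variable by its geometric mean, exp(mean(log Y)).
geo_mean = np.exp(np.mean(np.log(EARNINGS)))
EARNSTAR = EARNINGS / geo_mean

# Step 2: run both regressions with the transformed dependent variable,
# leaving the right side unchanged.
rss_linear = rss(EARNSTAR, X)
rss_semilog = rss(np.log(EARNSTAR), X)

# Step 3: the specification with the smaller RSS provides the better fit.
better = "semilogarithmic" if rss_semilog < rss_linear else "linear"
```

A convenient by-product of the transformation is that the geometric mean of Y* is exactly 1, which is what makes the two residual sums of squares comparable.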
We will use the transformation to compare the fits of the linear and semilogarithmic versions of a simple earnings function, using an EAEF data set.

The first step is to calculate the geometric mean of the dependent variable. The easiest way to do this is to take the exponential of the mean of the logarithm of the dependent variable.
The sum of the logarithms of Y is equal to the logarithm of the product of the Y values. Now we use the rule that a log X is the same as log X^a: the mean of the logarithms is the logarithm of the n-th root of the product. Finally, the exponential of the logarithm of X reduces to X, so we are left with the n-th root of the product of the Y values, which is the geometric mean.
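A quick numerical check of this chain of identities (the sample values are made up for illustration):

```python
import numpy as np

y = np.array([3.0, 12.0, 48.0])          # illustrative sample; product = 1728

via_logs = np.exp(np.mean(np.log(y)))    # exp of the mean of the logs
direct = np.prod(y) ** (1.0 / len(y))    # n-th root of the product
# Both routes give the geometric mean, 12, up to floating-point precision.
```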
LGEARN has already been defined as the logarithm of EARNINGS. We find its mean, using the 'sum' command in Stata:

. sum LGEARN
We then define EARNSTAR, dividing EARNINGS by the exponential of the mean of LGEARN:

. gen EARNSTAR = EARNINGS/exp(2.79)
We also define LGEARNST, the logarithm of EARNSTAR:

. gen LGEARNST = ln(EARNSTAR)
Here is the regression of EARNSTAR on S and EXP:

. reg EARNSTAR S EXP

We note the residual sum of squares, given in the Residual row of the output.
We run the parallel regression for LGEARNST:

. reg LGEARNST S EXP

Its residual sum of squares is smaller, and thus we conclude that the semilogarithmic version gives the better fit.
Here is the output for the full Box–Cox regression, with 540 observations:

. boxcox EARNINGS S EXP

The parameter that we have denoted λ (lambda) is called theta by Stata. It is estimated at −0.13. Since this is closer to 0 than to 1, it indicates that the dependent variable should be logarithmic rather than linear.
However, even the value 0 does not lie in the 95 percent confidence interval. (The log-likelihood tests in the output will be explained in Chapter 10.)
Copyright Christopher Dougherty. These slideshows may be downloaded by anyone, anywhere, for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author.

The content of this slideshow comes from Section 4.2 of C. Dougherty, Introduction to Econometrics, fourth edition, 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre.

Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics or the University of London International Programmes distance learning course EC2020 Elements of Econometrics.