Chapter 8 Dummy Variables and Truncated Variables
What is in this Chapter?
This chapter relaxes the assumption made in Chapter 4 that the variables in the regression are observed as continuous variables. It covers:
– Differences in intercepts and/or slope coefficients
– The linear probability model and the logit and probit models
– Truncated variables and Tobit models
8.1 Introduction
The variables we will be considering are:
1. Dummy variables.
2. Truncated variables.
They can be used to:
1. Allow for differences in intercept terms.
2. Allow for differences in slopes.
3. Estimate equations with cross-equation restrictions.
4. Test for stability of regression coefficients.
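As a minimal sketch of the first two uses (intercept and slope differences), here is an illustrative Python example with entirely hypothetical data; it is not taken from the text, but shows how a 0/1 dummy D and its interaction with x shift the intercept and the slope.

```python
# Minimal sketch (hypothetical data): a 0/1 dummy D shifting both the
# intercept and the slope of y = a + b*x.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
D = (rng.uniform(size=n) > 0.5).astype(float)     # group indicator (dummy)
y = 1.0 + 0.5 * x + 2.0 * D + 0.3 * D * x + rng.normal(0, 1, n)

# Regressors: constant, x, D (intercept shift), D*x (slope shift)
X = sm.add_constant(np.column_stack([x, D, D * x]))
res = sm.OLS(y, X).fit()
print(res.params)   # [common intercept, slope of x, intercept shift, slope shift]
```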
8.2 Dummy Variables for Changes in the Intercept Term
Two More Illustrative Examples
We will discuss two more examples using dummy variables. They are meant to illustrate two points worth noting:
1. In some studies with a large number of dummy variables it becomes somewhat difficult to interpret the signs of the coefficients because they seem to have the wrong signs. The first example illustrates this problem.
2. Sometimes the introduction of dummy variables produces a drastic change in the slope coefficient. The second example illustrates this point.
8.2 Dummy Variables for Changes in the Intercept Term
The first example is a study of the determinants of automobile prices. Griliches regressed the logarithm of new passenger car prices on various quality specifications. The results are shown in Table 8.1. Since the dependent variable is the logarithm of price, the regression coefficients can be interpreted as the estimated percentage change in the price for a unit change in a particular quality, holding other qualities constant. For example, the coefficient of H indicates that an increase of 10 units of horsepower, ceteris paribus, results in a 1.2% increase in price.
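For clarity, the reasoning behind this reading is the usual semilog approximation (the numerical coefficient below is only inferred from the 1.2%-per-10-hp statement, since Table 8.1 is not reproduced on these slides):

$$\log P = \alpha + \beta_H H + \cdots \quad\Rightarrow\quad \frac{\Delta P}{P} \approx \beta_H \,\Delta H,$$

so a coefficient of roughly β_H ≈ 0.0012 per unit of horsepower gives 100 × 0.0012 × 10 ≈ 1.2% for a 10-unit increase in H.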
8.2 Dummy Variables for Changes in the Intercept Term
However, some of the coefficients have to be interpreted with caution. For example, the coefficient of P in the equation for 1960 says that the presence of power steering as "standard equipment" led to a 22.5% higher price in 1960. In this case the variable P is obviously not measuring the effect of power steering alone but is measuring the effect of the "luxuriousness" of the car. It is also picking up the effects of A and B. This explains why the coefficient of A is so low in 1960. In fact, A, P, and B together can perhaps be replaced by a single dummy that measures "luxuriousness." These variables appear to be highly intercorrelated.
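A standard refinement worth noting (not stated on the slides): for a dummy variable in a semilogarithmic regression, 100β is only a first-order approximation to the percentage effect; the exact effect is 100(e^β − 1), so a coefficient of 0.225 corresponds to

$$100\,(e^{0.225} - 1) \approx 25.2\%,$$

slightly above the 22.5% figure quoted above.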
8.2 Dummy Variables for Changes in the Intercept Term
Another coefficient, at first sight puzzling, is the coefficient of V, which, though not significant, is consistently negative. Though a V-8 costs more than a six-cylinder engine on a "comparable" car, what this coefficient says is that, holding horsepower and other variables constant, a V-8 is cheaper by about 4%. Since V-8's have higher horsepower, what this coefficient is saying is that higher horsepower can be achieved more cheaply by shifting to a V-8 than by using a six-cylinder engine.
8.2 Dummy Variables for Changes in the Intercept Term
It measures the decline in price per horsepower as one shifts to V-8's, even though the total expenditure on horsepower goes up. This example illustrates the use of dummy variables and the interpretation of seemingly wrong coefficients.
8.2 Dummy Variables for Changes in the Intercept Term
As another example, consider the estimates of liquid-asset demand by manufacturing corporations. Vogel and Maddala computed regressions of the form log C = α + β log S, where C is cash holdings and S is sales, on the basis of data from the Internal Revenue Service, "Statistics of Income," for the year 1960-1961. The data consisted of 16 industry subgroups and 14 size classes, size being measured by total assets. When the regression
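A minimal sketch of this kind of regression, with hypothetical data and column names (the IRS data are not reproduced here), estimating log C = α + β log S first pooled and then with industry intercept dummies so that any change in the slope on log S can be compared:

```python
# Sketch (hypothetical data): log C = alpha + beta * log S,
# estimated pooled and then with industry intercept dummies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 224  # e.g. 16 industry subgroups x 14 size classes
df = pd.DataFrame({
    "industry": rng.integers(0, 16, n),
    "logS": rng.uniform(4, 12, n),
})
df["logC"] = 0.5 + 0.9 * df["logS"] + 0.1 * df["industry"] + rng.normal(0, 0.3, n)

pooled = smf.ols("logC ~ logS", data=df).fit()
dummies = smf.ols("logC ~ logS + C(industry)", data=df).fit()
print(pooled.params["logS"], dummies.params["logS"])  # compare the two slope estimates
```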
8.3 Dummy Variables for Changes in Slope Coefficients
8.4 Dummy Variables for Cross- Equation Constraints
8.5 Dummy Variables for Testing Stability of Regression Coefficients
8.6 Dummy Variables Under Heteroskedasticity and Autocorrelation
8.7 Dummy Dependent Variables
Until now we have been considering models where the explanatory variables are dummy variables. We now discuss models where the explained variable is a dummy variable. This dummy variable can take on two or more values, but we consider here only the case where it takes on two values, 0 or 1; the other cases are beyond the scope of this book. Since the dummy variable takes on two values, it is called a dichotomous variable. There are numerous examples of dichotomous explained variables.
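As a minimal sketch of what such models look like in practice (synthetic data, not from the text), the following fits a linear probability model (OLS on the 0/1 outcome, Section 8.8) and a logit model (Section 8.9) to the same dichotomous variable:

```python
# Sketch (synthetic data): a 0/1 dependent variable modeled two ways,
# as a linear probability model (OLS) and as a logit model (MLE).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))   # true success probability
y = (rng.uniform(size=n) < p).astype(float)    # dichotomous explained variable

X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit()            # linear probability model
logit = sm.Logit(y, X).fit(disp=0)  # logit model
print(lpm.params, logit.params)
```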
8.8 The Linear Probability Model and the Linear Discriminant Function
The Linear Probability Model
The Linear Discriminant Function
8.9 The Probit and Logit Models
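For reference, the standard specifications of the two models (these formulas are added here, since the slide content itself is not reproduced) are

$$\text{probit: } P(y_i = 1 \mid x_i) = \Phi(x_i'\beta), \qquad \text{logit: } P(y_i = 1 \mid x_i) = \frac{e^{x_i'\beta}}{1 + e^{x_i'\beta}},$$

where Φ(·) is the standard normal distribution function.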
8.11 Truncated Variables: The Tobit Model
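For reference, the standard Tobit specification (added here, since the slide content is not reproduced):

$$y_i^* = \beta' x_i + u_i, \qquad y_i = \begin{cases} y_i^* & \text{if } y_i^* > 0 \\ 0 & \text{otherwise,} \end{cases}$$

with u_i normally distributed; only y_i and x_i are observed, and OLS on the observed y_i gives biased estimates of β.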