
1 Statistical Inference and Regression Analysis: GB.3302.30
Professor William Greene Stern School of Business IOMS Department Department of Economics

2 Inference and Regression
Perfect Collinearity

3 Perfect Multicollinearity
If X does not have full rank, then at least one column can be written as a linear combination of the other columns. X′X then does not have full rank and cannot be inverted, so b cannot be computed.
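A minimal numerical sketch (not from the slides), using NumPy, showing that a perfectly collinear column makes X′X rank deficient:

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
x3 = 2.0 * x1 - x2                       # exact linear combination of x1 and x2
X = np.column_stack([np.ones(50), x1, x2, x3])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))        # 3, not 4: X'X is rank deficient
print(np.linalg.cond(XtX))               # astronomical condition number; inversion is meaningless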

4 Multicollinearity Enhanced Monet Area Effect Model: Height and Width Effects Log(Price) = β1 + β2 ln Area + β3 ln Aspect Ratio + β4 ln Height + β5 Signature + ε (Aspect Ratio = Width/Height)

5 Multicollinearity
Enhanced Monet Area Effect Model (Height and Width Effects):
Log(Price) = β1 + β2 ln Area + β3 ln AspectRatio + β4 ln Height + β5 Signature + ε   (AspectRatio = Width/Height)
Columns of X: x1 = 1, x2 = lnArea = lnH + lnW, x3 = lnAspect = lnW − lnH, x4 = lnH, x5 = Signature.
Then x2 − x3 − 2x4 = (lnH + lnW) − (lnW − lnH) − 2 lnH = 0, i.e., x4 = ½x2 − ½x3.
The columns are exactly collinear: Xc = 0 for c = [0, 1, −1, −2, 0].

6 Inference and Regression
Least Squares Fit

7 Minimizing the sum of squares
b minimizes iei2 = ee = (y - Xb)(y - Xb). Any other coefficient vector has a larger sum of squares. (Least squares is least squares.) A quick proof: d = the vector, not b u = y - Xd. Then, uu = (y - Xd)(y-Xd) = [y - Xb - X(d - b)][y - Xb - X(d - b)] = [e - X(d - b)] [e - X(d - b)] Expand to find uu = ee + (d-b)XX(d-b) > ee

8 Dropping a Variable
An important special case: comparing the results we get with and without a variable z in the equation, in addition to the other variables in X. Results we can show using the previous result:
Dropping a variable (or variables) cannot improve the fit, that is, cannot reduce the sum of squares. The relevant d is (*, *, *, …, 0), i.e., some vector that has a zero in a particular place.
Adding a variable (or variables) cannot degrade the fit, that is, cannot increase the sum of squares. Compare the sum of squares when there is a zero in that location to the sum of squares when there is not; it is just the previous case in reverse.

9 The Fit of the Regression
“Variation:” In the context of the “model” we speak of variation of a variable as movement of the variable, usually associated with (not necessarily caused by) movement of another variable.

10 Decomposing the Variation of y
Total sum of squares = Regression Sum of Squares (SSR) + Residual Sum of Squares (SSE)

11 Decomposing the Variation

12 A Fit Measure
R² = Regression Sum of Squares / Total Sum of Squares = 1 − e′e / Σi (yi − ȳ)². (Very important result.) R² is bounded by zero and one if and only if: (a) there is a constant term in X and (b) the line is computed by linear least squares.

13 Understanding R2 R2 = squared correlation between y and the prediction of y given by the regression
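A minimal numerical sketch (not from the slides): when a constant is included, the sum-of-squares decomposition and the squared correlation between y and the fitted values give the same R².

import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([2.0, 1.5, -0.7]) + rng.normal(size=n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ b
e = y - yhat

sst = np.sum((y - y.mean()) ** 2)            # total sum of squares
ssr = np.sum((yhat - y.mean()) ** 2)         # regression sum of squares
sse = np.sum(e ** 2)                         # residual sum of squares

r2_decomp = 1.0 - sse / sst
r2_corr = np.corrcoef(y, yhat)[0, 1] ** 2    # squared correlation of y and yhat
print(np.isclose(sst, ssr + sse), np.isclose(r2_decomp, r2_corr))   # True True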

14 Regression Results
Ordinary least squares regression, LHS = BOX. Regressors: Constant, CNTWAIT3, BUDGET; model test F[2, 59]. (Numerical output omitted.)

15 Adding Variables R2 never falls when a new variable, z, is added to the regression. A useful general result

16 Adding Variables to a Model
What is the effect of adding PN, PD, PS, YEAR to the model (one at a time)?
Base model: ordinary least squares regression, LHS = G, regressors Constant, PG, Y; model test F[2, 33]. For each added variable (PD, PN, PS, YEAR), the slide reports its coefficient, the new R², the change in R², the partial R², and the partial F. (Numerical output omitted.)

17 Adjusted R Squared
Adjusted R² (adjusted for degrees of freedom) includes a penalty for variables that don't add much fit. It can fall when a variable is added to the equation.

18 Regression Results
The same OLS regression of BOX on Constant, CNTWAIT3, and BUDGET shown on slide 14. (Numerical output omitted.)

19 Adjusted R-Squared
We will discover, when we study regression with more than one variable, that a researcher can increase R² just by adding variables to a model, even if those variables do not really explain y or have any real relationship to it at all. To have a fit measure that accounts for this, "Adjusted R²" is a number that increases with the correlation but decreases with the number of variables.
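A minimal sketch (not from the slides) of the usual degrees-of-freedom adjustment, adjusted R² = 1 − (1 − R²)(N − 1)/(N − K), where K counts all estimated coefficients including the constant:

def adjusted_r_squared(r2: float, n: int, k: int) -> float:
    """Penalize R-squared for the number of estimated coefficients."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k)

# Hypothetical values: adding a variable raises R-squared slightly,
# but adjusted R-squared falls.
print(adjusted_r_squared(0.500, n=62, k=3))   # about 0.483
print(adjusted_r_squared(0.503, n=62, k=4))   # about 0.477 -- lower despite higher R-squared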

20 Notes About Adjusted R2

21 Inference and Regression
Transformed Data

22 Linear Transformations of Data
Change units of measurement by dividing every observation, e.g., $ to millions of $ (in the internet buzz regression, Box is divided by 1,000,000).
Change the meaning of variables: x = (x1 = nominal interest = i, x2 = inflation = dp, x3 = GDP) becomes z = (x1 − x2 = real interest = i − dp, x2 = inflation = dp, x3 = GDP).
Change the theory of art appreciation: x = (x1 = logHeight, x2 = logWidth, x3 = signature) becomes z = (x1 − x2 = logAspectRatio, x2 = logHeight, x3 = signature).
Coefficients will change. R-squared and the sum of squared residuals do not change.
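A minimal sketch (not from the slides): replacing X by Z = XA for any nonsingular A changes the coefficients (c = A⁻¹b) but leaves the fitted values, residuals, and R² unchanged.

import numpy as np

rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=n)

# Replace (1, x2, x3) by (1, x2 - x3, x3): each column of A is the recipe for a new column
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
Z = X @ A

b, *_ = np.linalg.lstsq(X, y, rcond=None)
c, *_ = np.linalg.lstsq(Z, y, rcond=None)

print(b, c)                                              # different coefficient vectors
print(np.isclose(np.sum((y - X @ b) ** 2),
                 np.sum((y - Z @ c) ** 2)))              # True: same sum of squared residuals
print(np.allclose(c, np.linalg.solve(A, b)))             # True: c = A^{-1} b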

23 Principal Components
Z = XC, with fewer columns than X; Z includes as much of the 'variation' of X as possible, and the columns of Z are orthogonal.
Why do we do this? To deal with collinearity, and to combine variables of ambiguous identity, such as test scores as measures of 'ability'.
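A minimal sketch (not from the slides): one standard way to form the components is through the singular value decomposition of the centered data, Z = XcC, where C holds the leading right singular vectors.

import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)          # nearly a copy of x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

Xc = X - X.mean(axis=0)                     # center the data before extracting components
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
C = Vt.T[:, :2]                             # keep the first two principal directions
Z = Xc @ C                                  # Z has fewer columns than X

print(np.allclose(Z.T @ Z, np.diag(np.sum(Z ** 2, axis=0))))   # columns of Z are orthogonal
print(np.sum((s ** 2)[:2]) / np.sum(s ** 2))                   # share of variation Z retains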

24 Ordinary least squares regression, LHS = LOGBOX. Regressors: Constant, LOGBUDGT, STARPOWR, SEQUEL, MPRATING, ACTION, COMEDY, ANIMATED, HORROR, plus 4 internet buzz variables: LOGADCT, LOGCMSON, LOGFNDGO, CNTWAIT3. (Numerical output omitted.)

25 Ordinary least squares regression, LHS = LOGBOX. Regressors: Constant, LOGBUDGT, STARPOWR, SEQUEL, MPRATING, ACTION, COMEDY, ANIMATED, HORROR, PCBUZZ. (Numerical output omitted.)

26

27

28 Inference and Regression
Model Building and Functional Form

29 Using Logs

30 Time Trends in Regression
y = α + β1x + β2t + ε: β2 is the period-to-period increase not explained by anything else.
log y = α + β1 log x + β2t + ε (not log t, just t): β2 is the period-to-period % increase not explained by anything else.

31 U.S. Gasoline Market: Price and Income Elasticities Downward Trend in Gasoline Usage

32 Application: Health Care Data
German Health Care Usage Data. There are altogether 27,326 observations on German households.
DOCTOR = 1(number of doctor visits > 0)
HOSPITAL = 1(number of hospital visits > 0)
HSAT = health satisfaction, coded 0 (low) – 10 (high)
DOCVIS = number of doctor visits in last three months
HOSPVIS = number of hospital visits in last calendar year
PUBLIC = 1 if insured in public health insurance; 0 otherwise
ADDON = 1 if insured by add-on insurance; 0 otherwise
INCOME = household nominal monthly net income in German marks
HHKIDS = 1 if children under age 16 in the household; 0 otherwise
EDUC = years of schooling
FEMALE = 1(female-headed household)
AGE = age in years
MARRIED = marital status

33 Dummy Variable D = 0 in one case and 1 in the other
Y = a + bX + cD + e When D = 0, E[Y|X] = a + bX When D = 1, E[Y|X] = a + c + bX
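A minimal sketch (not from the slides): a dummy variable shifts the intercept from a (when D = 0) to a + c (when D = 1).

import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
d = (rng.random(n) < 0.5).astype(float)            # 0 in one case, 1 in the other
y = 1.0 + 2.0 * x + 3.0 * d + rng.normal(size=n)   # a = 1, b = 2, c = 3

X = np.column_stack([np.ones(n), x, d])
a, b, c = np.linalg.lstsq(X, y, rcond=None)[0]

print(a, b, c)       # roughly 1, 2, 3
print(a, a + c)      # E[Y|X] intercept when D = 0 vs. when D = 1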

34

35

36

37 A Conspiracy Theory for Art Sales at Auction
Sotheby’s and Christie’s conspired on commission rates from 1995 to about 2000.

38 If the Theory is Correct…
Two groups of sales: sold from 1995 to 2000; sold before 1995 or after 2000.

39 Evidence: Two Dummy Variables Signature and Conspiracy Effects
The statistical evidence seems to be consistent with the theory.

40 Set of Dummy Variables
Usually, Z = Type = 1, 2, …, K
Y = a + bX + d1 if Type = 1, + d2 if Type = 2, …, + dK if Type = K

41 A Set of Dummy Variables
Complete set of dummy variables divides the sample into groups. Fit the regression with “group” effects. Need to drop one (any one) of the variables to compute the regression. (Avoid the “dummy variable trap.”)
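A minimal sketch (not from the slides): with a constant term, a complete set of group dummies is perfectly collinear (the "dummy variable trap"); dropping any one of them restores full rank.

import numpy as np

rng = np.random.default_rng(6)
n = 120
group = rng.integers(0, 4, size=n)                     # 4 groups, labelled 0..3
D = (group[:, None] == np.arange(4)).astype(float)     # complete set of 4 dummies

X_trap = np.column_stack([np.ones(n), D])              # constant + all 4 dummies
X_ok = np.column_stack([np.ones(n), D[:, 1:]])         # drop one dummy (group 0 is the base)

print(np.linalg.matrix_rank(X_trap))   # 4 < 5 columns: perfectly collinear
print(np.linalg.matrix_rank(X_ok))     # 4 = full column rank: regression can be computed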

42 Group Effects in Teacher Ratings

43 Rankings of 132 U.S. Liberal Arts Colleges
Nancy Burnett: Journal of Economic Education, 1998
Reputation = α + β1 Religious + β2 GenderEcon + β3 EconFac + β4 North + β5 South + β6 Midwest + β7 West + ε

44 Minitab does not like this model.

45 Too many dummy variables cause perfect multicollinearity
If we use all four region dummies:
Reputation = a + bn + … if north
Reputation = a + bm + … if midwest
Reputation = a + bs + … if south
Reputation = a + bw + … if west
Only three are needed, so Minitab dropped west:
Reputation = a + … if west

46 Unordered Categorical Variables
House price data (fictitious): Type 1 = Split level, Type 2 = Ranch, Type 3 = Colonial, Type 4 = Tudor. Use 3 dummy variables for this kind of data (not all 4). Using the variable STYLE itself in the model makes no sense: you could change the numbering scale any way you like; 1, 2, 3, 4 are just labels.

47 Transform Style to Types

48

49 Hedonic House Price Regression
Each of these is relative to a Split Level, since that is the omitted category. E.g., the price of a Ranch house is $74,369 less than a Split Level of the same size with the same number of bedrooms.

50 We used McDonald’s Per Capita

51 More Movie Madness McDonald’s and Movies (Craig, Douglas, Greene: International Journal of Marketing) Log Foreign Box Office(movie,country,year) = α + β1*LogBox(movie,US,year) + β2*LogPCIncome + β4LogMacsPC + GenreEffect + CountryEffect + ε.

52 Movie Madness Data (n=2198)

53 Macs and Movies
Genres (MPAA): 1 = Drama, 2 = Romance, 3 = Comedy, 4 = Action, 5 = Fantasy, 6 = Adventure, 7 = Family, 8 = Animated, 9 = Thriller, 10 = Mystery, 11 = Science Fiction, 12 = Horror, 13 = Crime
Countries (code, language): 1 Argentina (Spanish), 2 Chile (Spanish), 3 Spain (Spanish), 4 Mexico (Spanish), 5 Germany (German), 6 Austria (German), 7 Australia (English), 8 UK (English). (Population, per capita income, and number of McDonald's for each country omitted.)

54

55 CRIME is the left out GENRE.
AUSTRIA is the left out country. Australia and UK were left out for other reasons (algebraic problem with only 8 countries).

56 Functional Form: Quadratic
Y = a + b1X + b2X² + e
dE[Y|X]/dX = b1 + 2b2X

57

58

59

60 Interaction Effect Y = a + b1X + b2Z + b3X*Z + e
E.g., the benefit of a year of education depends on how old one is:
log(Income) = a + b1*Ed + b2*Ed² + b3*Ed*Age + e
dlog(Income)/dEd = b1 + 2b2*Ed + b3*Age
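A minimal sketch (not from the slides), with purely hypothetical coefficient values, showing how the marginal effect of education varies with age through the interaction term:

# Hypothetical coefficients, chosen only to illustrate the calculation.
b1, b2, b3 = 0.07, -0.0005, 0.0002

def d_log_income_d_ed(ed: float, age: float) -> float:
    """dlog(Income)/dEd = b1 + 2*b2*Ed + b3*Age."""
    return b1 + 2.0 * b2 * ed + b3 * age

print(d_log_income_d_ed(ed=12, age=20))   # marginal effect of a year of education at age 20
print(d_log_income_d_ed(ed=12, age=40))   # larger at age 40, since b3 > 0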

61 The effect of an additional year of education increases from about 6.8% at age 20 to 7.2% at age 40.

62 Statistics and Data Analysis
Properties of Least Squares

63 Terms of Art Estimates and estimators
Properties of an estimator - the sampling distribution “Finite sample” properties as opposed to “asymptotic” or “large sample” properties

64 Least Squares

65 Deriving the Properties of b
So, b = the parameter vector + a linear combination of the disturbances, each times a vector. Therefore, b is a vector of random variables. We analyze it as such. We do the analysis conditional on an X, then show that results do not depend on the particular X in hand, so the result must be general – i.e., independent of X.

66 b is Unbiased

67 Left Out Variable Bias
A crucial result about specification: two sets of variables in the regression, X1 and X2,
y = X1β1 + X2β2 + ε
What if the regression is computed without the second set of variables? What is the expectation of the "short" regression estimator b1 = (X1′X1)⁻¹X1′y?

68 The Left Out Variable Formula
E[b1] = 1 + (X1X1)-1X1X22 The (truly) short regression estimator is biased. Application: Quantity = 1Price + 2Income +  If you regress Quantity on Price and leave out Income. What do you get?

69 Application: Left out Variable
Leave out Income. What do you get? In time series data, β1 < 0, β2 > 0 (usually), and Cov[Price, Income] > 0. So the short regression will overestimate the price coefficient. Simple regression of G on a constant and PG: the price coefficient should be negative.

70 Estimated ‘Demand’ Equation Shouldn’t the Price Coefficient be Negative?

71 Multiple Regression of G on Y and PG. The Theory Works!
Ordinary least squares regression, LHS = G. Regressors: Constant, Y, PG; model test F[2, 33]. (Numerical output omitted.)

72 Specification Errors-1
Omitting relevant variables: Suppose the correct model is y = X1β1 + X2β2 + ε, i.e., two sets of variables. Compute least squares omitting X2. Some easily proved results: Var[b1] is smaller than Var[b1.2]. You get a smaller variance when you omit X2. (One interpretation: omitting X2 amounts to using extra information, β2 = 0. Even if the information is wrong (see the next result), it reduces the variance. This is an important result.)

73 Specification Errors-2
Including superfluous variables: Just reverse the results. Including superfluous variables increases variance. (The cost of not using information.) It does not cause a bias, because if the variables in X2 are truly superfluous, then β2 = 0, so E[b1.2] = β1.

74 Inference and Regression
Estimating Var[b|X]

75 Estimating σ²
The unbiased estimator of σ² is s² = e′e/(N − K).
N-K = “Degrees of freedom correction”

76 Var[b|X] Estimating the Covariance Matrix for b|X
The true covariance matrix is σ²(X′X)⁻¹. The natural estimator is s²(X′X)⁻¹. "Standard errors" of the individual coefficients are the square roots of the diagonal elements.
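A minimal sketch (not from the slides): compute s² = e′e/(N − K), the estimated covariance matrix s²(X′X)⁻¹, and the standard errors from its diagonal.

import numpy as np

rng = np.random.default_rng(8)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)        # OLS coefficients
e = y - X @ b                                # residuals
s2 = (e @ e) / (n - k)                       # unbiased estimator of sigma^2
cov_b = s2 * np.linalg.inv(X.T @ X)          # estimated Var[b|X]
std_errs = np.sqrt(np.diag(cov_b))           # standard errors of the coefficients

print(s2)
print(np.column_stack([b, std_errs]))        # coefficients next to their standard errors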

77 X′X, (X′X)⁻¹, and s²(X′X)⁻¹

78 Regression Results
Ordinary least squares regression, LHS = G. Regressors: Constant, PG, Y, TREND, PNC, PUC, PPT; 36 observations, 7 parameters, 29 degrees of freedom; model test F[6, 29]. Standard error of e = sqrt[e′e/(36 − 7)]. (Numerical output omitted.)
Create ; trend=year-1960$
Namelist; x=one,pg,y,trend,pnc,puc,ppt$
Regress ; lhs=g ; rhs=x$

79 Inference and Regression
Not Perfect Collinearity

80 Variance Inflation and Multicollinearity
When variables are highly but not perfectly correlated, least squares is difficult to compute accurately, and the variances of the least squares slopes become very large. Variance inflation factors: for each xk, VIF(k) = 1/[1 − R²(k)], where R²(k) is the R² in the regression of xk on all the other x variables in the data matrix.
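A minimal sketch (not from the slides): compute VIF(k) = 1/[1 − R²(k)] by regressing each non-constant column of X on all the other columns.

import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF for each non-constant column; assumes column 0 of X is the constant."""
    n, k = X.shape
    out = np.empty(k - 1)
    for j in range(1, k):
        xj = X[:, j]
        others = np.delete(X, j, axis=1)                 # all other columns, incl. constant
        b, *_ = np.linalg.lstsq(others, xj, rcond=None)
        e = xj - others @ b
        r2_k = 1.0 - (e @ e) / np.sum((xj - xj.mean()) ** 2)
        out[j - 1] = 1.0 / (1.0 - r2_k)
    return out

rng = np.random.default_rng(9)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)        # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2, x3])
print(vif(X))                               # large VIFs for x1 and x2, about 1 for x3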

81 NIST Statistical Reference Data Sets – Accuracy Tests

82 The Filipelli Problem

83 VIF for X10: R²(10) is essentially 1, and the VIF is of order 10¹⁵. (Exact values omitted.)

84

85 Other software: Minitab reports the correct answer
Stata drops X10

86 Accurate and Inaccurate Computation of Filipelli Results
Accurate computation requires not actually computing (X’X)-1. We (and others) use the QR method. See text for details.
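A minimal sketch (not from the slides) of the idea: solve least squares through the QR decomposition of X instead of forming and inverting X′X, which is far more accurate when X is badly conditioned, as in the Filipelli polynomial problem.

import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(-8.0, -3.0, 82)
X = np.vander(x, 11, increasing=True)       # 1, x, x^2, ..., x^10: badly conditioned
y = np.sin(x) + 1e-3 * rng.normal(size=x.size)

# Normal equations: forming X'X squares the condition number and loses digits
b_normal = np.linalg.solve(X.T @ X, X.T @ y)

# QR method: X = QR, then solve the triangular system R b = Q'y
Q, R = np.linalg.qr(X)
b_qr = np.linalg.solve(R, Q.T @ y)

print(np.linalg.cond(X))                    # enormous condition number
print(np.max(np.abs(b_normal - b_qr)))      # the two computed answers can differ noticeably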

87 Stata Filipelli Results

88 Even after dropping two (random) columns, results are only correct to 1 or 2 digits.

89 Inference and Regression
Testing Hypotheses

90 Testing Hypotheses

91 Hypothesis Testing: Criteria

92 The F Statistic has an F Distribution

93 Nonnormality or Large N
Denominator of F converges to 1. Numerator converges to chi-squared[J]/J. Rely on the law of large numbers for the denominator and the CLT for the numerator: JF → chi-squared[J]. Use critical values from the chi-squared distribution.

94 Significance of the Regression – Testing R² = 0 (all slope coefficients equal zero)
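A minimal sketch (not from the slides) of the standard overall-significance statistic, F = [R²/(K − 1)] / [(1 − R²)/(N − K)], with K − 1 and N − K degrees of freedom (K counts the constant):

def overall_f(r2: float, n: int, k: int) -> float:
    """F statistic for the hypothesis that all slope coefficients are zero."""
    return (r2 / (k - 1)) / ((1.0 - r2) / (n - k))

# Hypothetical values: 62 observations, 13 estimated coefficients.
print(overall_f(r2=0.60, n=62, k=13))   # compare with the F[12, 49] 95% critical value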

95 Table of 95% Critical Values for F

96

97 Ordinary least squares regression, LHS = LOGBOX. Regressors: Constant, LOGBUDGT, STARPOWR, SEQUEL, MPRATING, ACTION, COMEDY, ANIMATED, HORROR, PCBUZZ. (Numerical output omitted.)
F = [(change in sum of squares)/3] / [(sum of squared residuals)/(62 − 13)]; critical value F* = 2.84.

