Statistical Inference and Regression Analysis: GB.3302.30


Statistical Inference and Regression Analysis: GB.3302.30 Professor William Greene Stern School of Business IOMS Department Department of Economics

Inference and Regression Perfect Collinearity

Perfect Multicollinearity If X does not have full rank, then at least one column can be written as a linear combination of the other columns. X'X does not have full rank and cannot be inverted, so b cannot be computed.

Multicollinearity Enhanced Monet Area Effect Model: Height and Width Effects Log(Price) = β1 + β2 log Area + β3 log Aspect Ratio + β4 log Height + β5 Signature + ε (Aspect Ratio = Height/Width)

Short Rank X Enhanced Monet Area Effect Model: Height and Width Effects. Log(Price) = β1 + β2 log Area + β3 log AspectRatio + β4 log Height + β5 Signature + ε (AspectRatio = Height/Width). x1 = 1, x2 = logArea = logH + logW, x3 = logAspect = logH - logW, x4 = logHeight = logH, x5 = Signature. Then x4 = 1/2 x2 + 1/2 x3, and x2 + x3 - 2x4 = (logH + logW) + (logH - logW) - 2 logH = 0. The offending linear combination is c = [0, 1, 1, -2, 0].
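The linear dependence above is easy to check numerically. A minimal sketch with simulated stand-ins for the Monet variables (random logH and logW, not the actual data): Xc = 0, so X has short rank and X'X is singular.

```python
import numpy as np

# Simulated stand-ins for the Monet columns (assumption: random data,
# only the column construction matches the slide).
rng = np.random.default_rng(0)
n = 100
logH = rng.normal(size=n)
logW = rng.normal(size=n)
x2 = logH + logW                       # log Area
x3 = logH - logW                       # log Aspect Ratio
x4 = logH                              # log Height = (x2 + x3)/2
sig = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([np.ones(n), x2, x3, x4, sig])

c = np.array([0.0, 1.0, 1.0, -2.0, 0.0])
# Xc = 0, so X does not have full column rank and X'X cannot be inverted.
assert np.allclose(X @ c, 0.0)
rank = np.linalg.matrix_rank(X)        # 4, not 5
```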

Inference and Regression Least Squares Fit

Minimizing e'e b minimizes e'e = (y - Xb)'(y - Xb): any other coefficient vector has a larger sum of squares. (Least squares is least squares.) A quick proof: Let d be any coefficient vector other than b, and let u = y - Xd. Then u'u = (y - Xd)'(y - Xd) = [y - Xb - X(d - b)]'[y - Xb - X(d - b)] = [e - X(d - b)]'[e - X(d - b)]. Expanding (the cross term vanishes because X'e = 0), u'u = e'e + (d - b)'X'X(d - b) > e'e.
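The identity u'u = e'e + (d - b)'X'X(d - b) can be verified directly; a sketch with simulated data (the perturbation of b is arbitrary):

```python
import numpy as np

# "Least squares is least squares": any d other than b raises the sum of
# squares by exactly (d - b)'X'X(d - b). Simulated data, arbitrary d.
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
ee = e @ e

d = b + np.array([0.1, -0.2, 0.3])     # any other coefficient vector
u = y - X @ d
uu = u @ u
gap = (d - b) @ (X.T @ X) @ (d - b)    # the exact excess sum of squares
assert np.isclose(uu, ee + gap)
```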

Dropping a Variable An important special case: comparing the results we get with and without a variable z in the equation, in addition to the other variables in X. Results that follow from the previous result: 1. Dropping a variable cannot improve the fit, i.e., cannot reduce the sum of squares. The relevant d is (*, *, *, …, 0), a vector with a zero forced into a particular place. 2. Adding a variable cannot degrade the fit, i.e., cannot increase the sum of squares. Compare the sum of squares with a zero forced in that location to the sum when the coefficient is free; it is just case 1 reversed.

The Fit of the Regression “Variation:” In the context of the “model” we speak of variation of a variable as movement of the variable, usually associated with (not necessarily caused by) movement of another variable.

Decomposing the Variation of y Total sum of squares = Regression Sum of Squares (SSR) + Residual Sum of Squares (SSE)

Decomposing the Variation

A Fit Measure R2 = Regression Sum of Squares / Total Sum of Squares = 1 - e'e/(total sum of squares). (Very Important Result.) R2 is bounded by zero and one if and only if: (a) There is a constant term in X and (b) The line is computed by linear least squares.

Understanding R2 R2 = squared correlation between y and the prediction of y given by the regression
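This equivalence is easy to verify numerically; a sketch with simulated data (not any of the datasets used in the slides):

```python
import numpy as np

# With a constant term, R^2 = 1 - e'e/TSS equals the squared correlation
# between y and the fitted values. Simulated data (an assumption).
rng = np.random.default_rng(2)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
yhat = X @ b
e = y - yhat
r2 = 1.0 - e @ e / np.sum((y - y.mean()) ** 2)
corr_sq = np.corrcoef(y, yhat)[0, 1] ** 2
assert np.isclose(r2, corr_sq)
```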

Regression Results ----------------------------------------------------------------------------- Ordinary least squares regression ............ LHS=BOX Mean = 20.72065 Standard deviation = 17.49244 ---------- No. of observations = 62 DegFreedom Mean square Regression Sum of Squares = 9203.46 2 4601.72954 Residual Sum of Squares = 9461.66 59 160.36711 Total Sum of Squares = 18665.1 61 305.98555 ---------- Standard error of e = 12.66361 Root MSE 12.35344 Fit R-squared = .49308 R-bar squared .47590 Model test F[ 2, 59] = 28.69497 Prob F > F* .00000 --------+-------------------------------------------------------------------- | Standard Prob. 95% Confidence BOX| Coefficient Error t |t|>T* Interval Constant| -12.0721** 5.30813 -2.27 .0266 -22.4758 -1.6684 CNTWAIT3| 53.9033*** 12.29513 4.38 .0000 29.8053 78.0013 BUDGET| .12740*** .04492 2.84 .0062 .03936 .21544

Adding Variables R2 never falls when a z is added to the regression. A useful general result

Adding Variables to a Model What is the effect of adding PN, PD, PS, YEAR to the model (one at a time)? ---------------------------------------------------------------------- Ordinary least squares regression ............ LHS=G Mean = 226.09444 Standard deviation = 50.59182 Number of observs. = 36 Model size Parameters = 3 Degrees of freedom = 33 Residuals Sum of squares = 1472.79834 Fit R-squared = .98356 Adjusted R-squared = .98256 Model test F[ 2, 33] (prob) = 987.1(.0000) Effects of additional variables on the regression below: ------------- Variable Coefficient New R-sqrd Chg.R-sqrd Partial-Rsq Partial F PD -26.0499 .9867 .0031 .1880 7.411 PN -15.1726 .9878 .0043 .2594 11.209 PS -8.2171 .9890 .0055 .3320 15.904 YEAR -2.1958 .9861 .0025 .1549 5.864 --------+------------------------------------------------------------- Variable| Coefficient Standard Error t-ratio P[|T|>t] Mean of X Constant| -79.7535*** 8.67255 -9.196 .0000 PG| -15.1224*** 1.88034 -8.042 .0000 2.31661 Y| .03692*** .00132 28.022 .0000 9232.86

Adjusted R Squared Adjusted R2 (adjusted for degrees of freedom) includes a penalty for variables that don't add much fit. It can fall when a variable is added to the equation.

Regression Results ----------------------------------------------------------------------------- Ordinary least squares regression ............ LHS=BOX Mean = 20.72065 Standard deviation = 17.49244 ---------- No. of observations = 62 DegFreedom Mean square Regression Sum of Squares = 9203.46 2 4601.72954 Residual Sum of Squares = 9461.66 59 160.36711 Total Sum of Squares = 18665.1 61 305.98555 ---------- Standard error of e = 12.66361 Root MSE 12.35344 Fit R-squared = .49308 R-bar squared .47590 Model test F[ 2, 59] = 28.69497 Prob F > F* .00000 --------+-------------------------------------------------------------------- | Standard Prob. 95% Confidence BOX| Coefficient Error t |t|>T* Interval Constant| -12.0721** 5.30813 -2.27 .0266 -22.4758 -1.6684 CNTWAIT3| 53.9033*** 12.29513 4.38 .0000 29.8053 78.0013 BUDGET| .12740*** .04492 2.84 .0062 .03936 .21544

Adjusted R-Squared As we will discover when we study regression with more than one variable, a researcher can increase R2 just by adding variables to a model, even if those variables do not really explain y or have any real relationship to it at all. To have a fit measure that accounts for this, "Adjusted R2" is a number that increases with the correlation but decreases with the number of variables.
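The adjustment formula is Adjusted R2 = 1 - (1 - R2)(N - 1)/(N - K), which can be checked against the regression output shown earlier (R-squared .49308, N = 62, K = 3 estimated coefficients, R-bar squared .47590):

```python
# Degrees-of-freedom adjustment of R^2; the numbers plugged in below are
# taken from the box office regression output above.
def adjusted_r2(r2, n, k):
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k)

rbar2 = adjusted_r2(0.49308, 62, 3)   # about .4759, matching "R-bar squared"
```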

Notes About Adjusted R2

Inference and Regression Transformed Data

Linear Transformations of Data Change the units of measurement, e.g., $ to millions of $ (see the Internet buzz regression), by dividing every observation of Box by 1,000,000. Change the meaning of variables: x = (x1 = nominal interest = i, x2 = inflation = dp, x3 = GDP) becomes z = (x1 - x2 = real interest = i - dp, x2 = inflation = dp, x3 = GDP). Change the theory of art appreciation: x = (x1 = logHeight, x2 = logWidth, x3 = signature) becomes z = (x1 - x2 = logAspectRatio, x2 = logHeight, x3 = signature).

(Linearly) Transformed Data How does a linear transformation affect the results of least squares? Z = XP for KxK nonsingular P. (Each variable in Z is a combination of the variables in X.) Based on X, b = (X'X)-1X'y. You can show (just multiply it out) that the coefficients when y is regressed on Z are c = P-1b. The fitted values are Zc = XPP-1b = Xb. The same!! The residuals from using Z are y - Zc = y - Xb (we just proved this). The same!! The sum of squared residuals must be identical, as y - Xb = e = y - Zc. R2 must also be identical, as R2 = 1 - e'e/(same total SS).
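All of these invariance claims can be verified in a few lines; a sketch with simulated data and an arbitrary nonsingular P:

```python
import numpy as np

# Regress y on Z = XP: coefficients become c = P^{-1} b, but fitted values
# (and hence residuals and R^2) are unchanged. Simulated data, arbitrary P.
rng = np.random.default_rng(3)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)
P = np.array([[1.0, 0.0, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 1.0, -1.0]])      # any nonsingular KxK matrix

b = np.linalg.solve(X.T @ X, X.T @ y)
Z = X @ P
c = np.linalg.solve(Z.T @ Z, Z.T @ y)

assert np.allclose(c, np.linalg.solve(P, b))   # c = P^{-1} b
assert np.allclose(Z @ c, X @ b)               # identical fitted values
```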

Principal Components Z = XC. Why do we do this? Z has fewer columns than X, includes as much 'variation' of X as possible, and its columns are orthogonal. When do we do this? Under collinearity, or to combine variables of ambiguous identity, such as test scores as measures of 'ability'.
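A sketch of the construction, assuming C holds the eigenvectors of the (centered) X'X, with simulated, deliberately collinear data:

```python
import numpy as np

# Principal components Z = XC: orthogonal columns, ordered by the share of
# the variation of X they capture. Simulated data (an assumption), with the
# second column built to be nearly collinear with the first.
rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([x1,
                     x1 + 0.1 * rng.normal(size=n),   # nearly collinear
                     rng.normal(size=n)])
Xc = X - X.mean(axis=0)

eigval, C = np.linalg.eigh(Xc.T @ Xc)
order = np.argsort(eigval)[::-1]       # largest-variation components first
eigval, C = eigval[order], C[:, order]
Z = Xc @ C

ZtZ = Z.T @ Z                          # off-diagonals vanish: orthogonal
assert np.allclose(ZtZ, np.diag(np.diag(ZtZ)))
share = eigval[:2].sum() / eigval.sum()   # first two columns carry almost all
```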

+----------------------------------------------------+ | Ordinary least squares regression | | LHS=LOGBOX Mean = 16.47993 | | Standard deviation = .9429722 | | Number of observs. = 62 | | Residuals Sum of squares = 20.54972 | | Standard error of e = .6475971 | | Fit R-squared = .6211405 | | Adjusted R-squared = .5283586 | +--------+--------------+----------------+--------+--------+----------+ |Variable| Coefficient | Standard Error |t-ratio |P[|T|>t]| Mean of X| |Constant| 12.5388*** .98766 12.695 .0000 | |LOGBUDGT| .23193 .18346 1.264 .2122 3.71468| |STARPOWR| .00175 .01303 .135 .8935 18.0316| |SEQUEL | .43480 .29668 1.466 .1492 .14516| |MPRATING| -.26265* .14179 -1.852 .0700 2.96774| |ACTION | -.83091*** .29297 -2.836 .0066 .22581| |COMEDY | -.03344 .23626 -.142 .8880 .32258| |ANIMATED| -.82655** .38407 -2.152 .0363 .09677| |HORROR | .33094 .36318 .911 .3666 .09677| 4 INTERNET BUZZ VARIABLES |LOGADCT | .29451** .13146 2.240 .0296 8.16947| |LOGCMSON| .05950 .12633 .471 .6397 3.60648| |LOGFNDGO| .02322 .11460 .203 .8403 5.95764| |CNTWAIT3| 2.59489*** .90981 2.852 .0063 .48242| +--------+------------------------------------------------------------+

+----------------------------------------------------+ | Ordinary least squares regression | | LHS=LOGBOX Mean = 16.47993 | | Standard deviation = .9429722 | | Number of observs. = 62 | | Residuals Sum of squares = 25.36721 | | Standard error of e = .6984489 | | Fit R-squared = .5323241 | | Adjusted R-squared = .4513802 | +--------+--------------+----------------+--------+--------+----------+ |Variable| Coefficient | Standard Error |t-ratio |P[|T|>t]| Mean of X| |Constant| 11.9602*** .91818 13.026 .0000 | |LOGBUDGT| .38159** .18711 2.039 .0465 3.71468| |STARPOWR| .01303 .01315 .991 .3263 18.0316| |SEQUEL | .33147 .28492 1.163 .2500 .14516| |MPRATING| -.21185 .13975 -1.516 .1356 2.96774| |ACTION | -.81404** .30760 -2.646 .0107 .22581| |COMEDY | .04048 .25367 .160 .8738 .32258| |ANIMATED| -.80183* .40776 -1.966 .0546 .09677| |HORROR | .47454 .38629 1.228 .2248 .09677| |PCBUZZ | .39704*** .08575 4.630 .0000 9.19362| +--------+------------------------------------------------------------+

Inference and Regression Model Building and Functional Form

Using Logs

Time Trends in Regression y = α + β1x + β2t + ε β2 is the period to period increase not explained by anything else. log y = α + β1log x + β2t + ε (not log t, just t) 100β2 is the period to period % increase not explained by anything else.

U.S. Gasoline Market: Price and Income Elasticities Downward Trend in Gasoline Usage

Application: Health Care Data German Health Care Usage Data: 27,326 observations on German households, 1984-1994. DOCTOR = 1(number of doctor visits > 0) HOSPITAL = 1(number of hospital visits > 0) HSAT = health satisfaction, coded 0 (low) - 10 (high) DOCVIS = number of doctor visits in last three months HOSPVIS = number of hospital visits in last calendar year PUBLIC = insured in public health insurance = 1; otherwise = 0 ADDON = insured by add-on insurance = 1; otherwise = 0 INCOME = household nominal monthly net income in German marks / 10000 HHKIDS = children under age 16 in the household = 1; otherwise = 0 EDUC = years of schooling FEMALE = 1(female headed household) AGE = age in years MARRIED = marital status

Dummy Variable D = 0 in one case and 1 in the other Y = a + bX + cD + e When D = 0, E[Y|X] = a + bX When D = 1, E[Y|X] = a + c + bX
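A sketch of the intercept-shift interpretation, with simulated data (the true coefficients a = 2, b = 1.5, c = 3 are assumptions):

```python
import numpy as np

# A 0/1 dummy shifts the intercept: E[Y|X, D=1] - E[Y|X, D=0] = c.
# Simulated data with true a=2, b=1.5, c=3 (made-up values).
rng = np.random.default_rng(5)
n = 100
x = rng.normal(size=n)
d = (rng.random(n) > 0.5).astype(float)
y = 2.0 + 1.5 * x + 3.0 * d + 0.5 * rng.normal(size=n)

X = np.column_stack([np.ones(n), x, d])
a_hat, b_hat, c_hat = np.linalg.lstsq(X, y, rcond=None)[0]
# c_hat recovers the gap between the two fitted lines, roughly 3 here
```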

A Conspiracy Theory for Art Sales at Auction Sotheby's and Christie's conspired on commission rates from 1995 to about 2000.

If the Theory is Correct… Sold from 1995 to 2000 Sold before 1995 or after 2000

Evidence: Two Dummy Variables Signature and Conspiracy Effects The statistical evidence seems to be consistent with the theory.

Set of Dummy Variables Usually, Z = Type = 1,2,…,K Y = a + bX + d1 if Type=1 + d2 if Type=2 … + dK if Type=K

A Set of Dummy Variables Complete set of dummy variables divides the sample into groups. Fit the regression with “group” effects. Need to drop one (any one) of the variables to compute the regression. (Avoid the “dummy variable trap.”)
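The dummy variable trap can be seen directly in the rank of X; a sketch with simulated group labels (the four "regions" here are arbitrary):

```python
import numpy as np

# With a constant, a complete set of group dummies sums to the constant
# column, so X is short-ranked; dropping one dummy restores full rank.
n = 40
region = np.arange(n) % 4                    # 0=north,1=midwest,2=south,3=west
D = (region[:, None] == np.arange(4)).astype(float)
ones = np.ones((n, 1))

X_trap = np.hstack([ones, D])                # constant + all four dummies
X_ok = np.hstack([ones, D[:, :3]])           # drop one (here "west")

assert np.linalg.matrix_rank(X_trap) == 4    # 5 columns, rank only 4
assert np.linalg.matrix_rank(X_ok) == 4      # full column rank
```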

Group Effects in Teacher Ratings

Rankings of 132 U.S. Liberal Arts Colleges Nancy Burnett, Journal of Economic Education, 1998. Reputation = α + β1Religious + β2GenderEcon + β3EconFac + β4North + β5South + β6Midwest + β7West + ε

Minitab does not like this model.

Too many dummy variables cause perfect multicollinearity. If we use all four region dummies: Reputation = a + bn + … if north; Reputation = a + bm + … if midwest; Reputation = a + bs + … if south; Reputation = a + bw + … if west. Only three are needed, so Minitab dropped west: Reputation = a + … if west.

Unordered Categorical Variables House price data (fictitious): Type 1 = Split level, Type 2 = Ranch, Type 3 = Colonial, Type 4 = Tudor. Use 3 dummy variables for this kind of data (not all 4). Using the variable STYLE itself in the model makes no sense: you could change the numbering scale any way you like, since 1, 2, 3, 4 are just labels.

Transform Style to Types

Hedonic House Price Regression Each of these is relative to a Split Level, since that is the omitted category. E.g., the price of a Ranch house is $74,369 less than a Split Level of the same size with the same number of bedrooms.

We used McDonald’s Per Capita

More Movie Madness McDonald’s and Movies (Craig, Douglas, Greene: International Journal of Marketing) Log Foreign Box Office(movie,country,year) = α + β1*LogBox(movie,US,year) + β2*LogPCIncome + β4LogMacsPC + GenreEffect + CountryEffect + ε.

Movie Madness Data (n=2198)

Macs and Movies Genres (MPAA): 1=Drama 2=Romance 3=Comedy 4=Action 5=Fantasy 6=Adventure 7=Family 8=Animated 9=Thriller 10=Mystery 11=Science Fiction 12=Horror 13=Crime. Countries and some of the data:
Code  Country     Pop (mm)  Per cap income  # of McDonalds  Language
1     Argentina   37        12090           173             Spanish
2     Chile       15        9110            70              Spanish
3     Spain       39        19180           300             Spanish
4     Mexico      98        8810            270             Spanish
5     Germany     82        25010           1152            German
6     Austria     8         26310           159             German
7     Australia   19        25370           680             English
8     UK          60        23550           1152            English

CRIME is the left out GENRE. AUSTRIA is the left out country. Australia and UK were left out for other reasons (algebraic problem with only 8 countries).

Functional Form: Quadratic Y = a + b1X + b2X2 + e dE[Y|X]/dX = b1 + 2b2X

Interaction Effect Y = a + b1X + b2Z + b3X*Z + e E.g., the benefit of a year of education depends on how old one is. Log(income)=a + b1*Ed + b2*Ed2 + b3*Ed*Age + e dlogIncome/dEd=b1+2b2*Ed+b3*Age
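The marginal-effect formula can be sketched in code. The coefficient values below are made up for illustration (they are not the slide's estimates); the point is only that the derivative b1 + 2b2*Ed + b3*Age rises with Age when b3 > 0.

```python
# Marginal effect of education in log(income) = a + b1*Ed + b2*Ed^2 + b3*Ed*Age.
# Coefficients are hypothetical, chosen only to illustrate the formula.
b1, b2, b3 = 0.04, 0.0005, 0.0002

def marginal_effect(ed, age):
    return b1 + 2.0 * b2 * ed + b3 * age

def log_income(ed, age, a=1.0):
    return a + b1 * ed + b2 * ed**2 + b3 * ed * age

# Check the analytic derivative against a numerical one.
h = 1e-6
num = (log_income(16 + h, 40) - log_income(16 - h, 40)) / (2 * h)
assert abs(num - marginal_effect(16, 40)) < 1e-8
```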

Effect of an additional year of education increases from about 6.8% at age 20 to 7.2% at age 40.

Statistics and Data Analysis Properties of Least Squares

Terms of Art Estimates and estimators Properties of an estimator - the sampling distribution “Finite sample” properties as opposed to “asymptotic” or “large sample” properties

Least Squares

Deriving the Properties of b So, b = the parameter vector + a linear combination of the disturbances, each times a vector. Therefore, b is a vector of random variables. We analyze it as such. We do the analysis conditional on an X, then show that results do not depend on the particular X in hand, so the result must be general – i.e., independent of X.

Unbiasedness of b

Left Out Variable Bias A Crucial Result About Specification: Two sets of variables in the regression, X1 and X2: y = X1β1 + X2β2 + ε. What if the regression is computed without the second set of variables? What is the expectation of the "short" regression estimator b1 = (X1'X1)-1X1'y?

The Left Out Variable Formula E[b1] = β1 + (X1'X1)-1X1'X2β2. The (truly) short regression estimator is biased. Application: Quantity = β1Price + β2Income + ε. If you regress Quantity on Price and leave out Income, what do you get?

Application: Left Out Variable Leave out Income. What do you get? In time series data, β1 < 0 and β2 > 0 (usually), and Cov[Price, Income] > 0. So the short regression will overestimate the price coefficient. Simple regression of G on a constant and PG: the price coefficient should be negative.
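The direction of the bias can be demonstrated by simulation; the coefficients below (β1 = -1, β2 = +0.5) and the positive price-income covariance are assumptions chosen to mimic the story above:

```python
import numpy as np

# Omitted variable bias: regress quantity on price alone, with income left
# out and Cov[price, income] > 0. True beta1 = -1, beta2 = +0.5 (made up).
rng = np.random.default_rng(7)
n = 10000
income = rng.normal(size=n)
price = 0.8 * income + rng.normal(size=n)    # positively correlated
y = -1.0 * price + 0.5 * income + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), price])
b_short = np.linalg.lstsq(X1, y, rcond=None)[0][1]

X = np.column_stack([np.ones(n), price, income])
b_long = np.linalg.lstsq(X, y, rcond=None)[0][1]
# b_short overestimates the price coefficient (less negative than -1.0);
# the long regression recovers it.
```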

Estimated ‘Demand’ Equation Shouldn’t the Price Coefficient be Negative?

Multiple Regression of G on Y and PG. The Theory Works! ---------------------------------------------------------------------- Ordinary least squares regression ............ LHS=G Mean = 226.09444 Standard deviation = 50.59182 Number of observs. = 36 Model size Parameters = 3 Degrees of freedom = 33 Residuals Sum of squares = 1472.79834 Standard error of e = 6.68059 Fit R-squared = .98356 Adjusted R-squared = .98256 Model test F[ 2, 33] (prob) = 987.1(.0000) --------+------------------------------------------------------------- Variable| Coefficient Standard Error t-ratio P[|T|>t] Mean of X Constant| -79.7535*** 8.67255 -9.196 .0000 Y| .03692*** .00132 28.022 .0000 9232.86 PG| -15.1224*** 1.88034 -8.042 .0000 2.31661

Specification Errors-1 Omitting relevant variables: Suppose the correct model is y = X1β1 + X2β2 + ε, i.e., two sets of variables. Compute least squares omitting X2. Some easily proved results: Var[b1] is smaller than Var[b1.2]. You get a smaller variance when you omit X2. (One interpretation: Omitting X2 amounts to using extra information, β2 = 0. Even if the information is wrong (see the next result), it reduces the variance. This is an important result.)

Specification Errors-2 Including superfluous variables: Just reverse the results. Including superfluous variables increases variance. (The cost of not using the information.) It does not cause a bias, because if the variables in X2 are truly superfluous, then β2 = 0, so E[b1.2] = β1.

Inference and Regression Estimating Var[b|X]

Variance of the Least Squares Estimator

Gauss-Markov Theorem A theorem of Gauss and Markov: Least Squares is the Minimum Variance Linear Unbiased Estimator 1. Linear estimator 2. Unbiased: E[b|X] = β Comparing positive definite matrices: Var[c|X] – Var[b|X] is nonnegative definite for any other linear and unbiased estimator.

True Variance of b|X

Estimating σ2 Using the residuals instead of the disturbances: the natural estimator is e'e/N as a sample surrogate for ε'ε/N. But we observe εi only imperfectly: ei = εi + (β - b)'xi. This produces a downward bias in e'e/N. We obtain the result E[e'e|X] = (N-K)σ2.

Expectation of e’e

Expected Value of e’e:

Estimating σ2 The unbiased estimator is s2 = e'e/(N-K). N-K = "degrees of freedom correction."

Var[b|X] Estimating the Covariance Matrix for b|X: The true covariance matrix is σ2(X'X)-1. The natural estimator is s2(X'X)-1. "Standard errors" of the individual coefficients are the square roots of the diagonal elements.
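The whole chain, residuals to s2 to standard errors, can be sketched in a few lines with simulated data (true σ2 = 1 here by construction):

```python
import numpy as np

# s^2 = e'e/(N-K) and estimated Var[b|X] = s^2 (X'X)^{-1}; standard errors
# are the square roots of the diagonal. Simulated data with true sigma^2 = 1.
rng = np.random.default_rng(8)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / (n - k)                   # degrees-of-freedom correction
V = s2 * np.linalg.inv(X.T @ X)        # estimated covariance matrix of b
std_errs = np.sqrt(np.diag(V))
```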

X’X (X’X)-1 s2(X’X)-1

Regression Results ---------------------------------------------------------------------- Ordinary least squares regression ............ LHS=G Mean = 226.09444 Standard deviation = 50.59182 Number of observs. = 36 Model size Parameters = 7 Degrees of freedom = 29 Residuals Sum of squares = 778.70227 Standard error of e = 5.18187 <***** sqr[778.70227/(36 – 7)] Fit R-squared = .99131 Adjusted R-squared = .98951 Model test F[ 6, 29] (prob) = 551.2(.0000) --------+------------------------------------------------------------- Variable| Coefficient Standard Error t-ratio P[|T|>t] Mean of X Constant| -7.73975 49.95915 -.155 .8780 PG| -15.3008*** 2.42171 -6.318 .0000 2.31661 Y| .02365*** .00779 3.037 .0050 9232.86 TREND| 4.14359** 1.91513 2.164 .0389 17.5000 PNC| 15.4387 15.21899 1.014 .3188 1.67078 PUC| -5.63438 5.02666 -1.121 .2715 2.34364 PPT| -12.4378** 5.20697 -2.389 .0236 2.74486 Create ; trend=year-1960$ Namelist; x=one,pg,y,trend,pnc,puc,ppt$ Regress ; lhs=g ; rhs=x$

Inference and Regression Not Perfect Collinearity

Variance Inflation and Multicollinearity When variables are highly but not perfectly correlated, least squares is difficult to compute accurately, and the variances of the least squares slopes become very large. Variance inflation factors: for each xk, VIF(k) = 1/[1 - R2(k)], where R2(k) is the R2 in the regression of xk on all the other x variables in the data matrix.
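The VIF definition translates directly into code; a sketch with simulated data in which x2 is deliberately built to be nearly collinear with x1:

```python
import numpy as np

# VIF(k) = 1/(1 - R^2(k)), where R^2(k) comes from regressing x_k on the
# other columns. Simulated data; x2 is nearly collinear with x1 on purpose.
rng = np.random.default_rng(9)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, k):
    others = np.column_stack([np.ones(len(X)), np.delete(X, k, axis=1)])
    xk = X[:, k]
    b = np.linalg.lstsq(others, xk, rcond=None)[0]
    e = xk - others @ b
    r2 = 1.0 - e @ e / np.sum((xk - xk.mean()) ** 2)
    return 1.0 / (1.0 - r2)

vifs = [vif(X, k) for k in range(3)]   # huge for x1, x2; near 1 for x3
```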

NIST Statistical Reference Data Sets – Accuracy Tests

The Filipelli Problem

VIF for X10: R2 = .99999999999999630 VIF = .27294543196184830D+15

Other software: Minitab reports the correct answer Stata drops X10

Accurate and Inaccurate Computation of Filipelli Results Accurate computation requires not actually computing (X’X)-1. We (and others) use the QR method. See text for details.

Inference and Regression Testing Hypotheses

Testing Hypotheses

Hypothesis Testing: Criteria

The F Statistic has an F Distribution

Nonnormality or Large N The denominator of F converges to 1. The numerator converges to chi squared[J]/J. Rely on the law of large numbers for the denominator and the CLT for the numerator: JF → chi squared[J]. Use critical values from the chi squared table.

Significance of the Regression - R*2 = 0

Table of 95% Critical Values for F

+----------------------------------------------------+ | Ordinary least squares regression | | LHS=LOGBOX Mean = 16.47993 | | Standard deviation = .9429722 | | Number of observs. = 62 | | Residuals Sum of squares = 25.36721 | | Standard error of e = .6984489 | | Fit R-squared = .5323241 | | Adjusted R-squared = .4513802 | +--------+--------------+----------------+--------+--------+----------+ |Variable| Coefficient | Standard Error |t-ratio |P[|T|>t]| Mean of X| |Constant| 11.9602*** .91818 13.026 .0000 | |LOGBUDGT| .38159** .18711 2.039 .0465 3.71468| |STARPOWR| .01303 .01315 .991 .3263 18.0316| |SEQUEL | .33147 .28492 1.163 .2500 .14516| |MPRATING| -.21185 .13975 -1.516 .1356 2.96774| |ACTION | -.81404** .30760 -2.646 .0107 .22581| |COMEDY | .04048 .25367 .160 .8738 .32258| |ANIMATED| -.80183* .40776 -1.966 .0546 .09677| |HORROR | .47454 .38629 1.228 .2248 .09677| |PCBUZZ | .39704*** .08575 4.630 .0000 9.19362| +--------+------------------------------------------------------------+ F = [(.6211405 - .5323241)/3] / [(1 - .6211405)/(62 – 13)] = 3.829; F* = 2.84

Inference and Regression A Case Study

Mega Deals for Stars A Capital Budgeting Computation Costs and Benefits Certainty: Costs Uncertainty: Benefits Long Term: Need for discounting

Baseball Story A Huge Sports Contract Alex Rodriguez hired by the Texas Rangers for something like $25 million per year in 2000. Costs – the salary plus and minus some fine tuning of the numbers Benefits – more fans in the stands. How to determine if the benefits exceed the costs? Use a regression model.

The Texas Deal for Alex Rodriguez Signing bonus (2001): $10M. Salaries: 2001-2004, $21M per year; 2005-2006, $25M per year; 2007-2010, $27M per year. Total: $252M ???

The Real Deal Year 2001: salary $21M, bonus $2M, deferral $5M (to 2011). Deferrals accrue interest of 3% per year.

Costs Insurance: about 10% of the contract per year. (Taxes: about 40% of the contract.) Some additional costs in revenue sharing revenues from the league (anticipated, about 17.5% of marginal benefits; uncertain). Interest on deferred salary: $150,000 in the first year, well over $1,000,000 in 2010. (Reduction) The $3M it would cost to have a different shortstop (Nomar Garciaparra).

PDV of the Costs Using an 8% discount factor (the one they used) and accounting for all costs: roughly $21M to $28M in each year from 2001 to 2010, then the deferred payments from 2010 to 2020. Total cost: about $165 million as of 2001 (present discounted value).
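The discounting step itself is simple; a sketch at the 8% rate. The payment stream below is only the nominal salaries plus the signing bonus (it ignores the deferral schedule, insurance, and the other cost adjustments above, so it is not the all-in figure):

```python
# Present discounted value of a payment stream at an 8% rate; the stream is
# the contract's nominal salaries plus the 2001 signing bonus, a
# simplification that omits the deferral and cost adjustments.
payments = [10 + 21, 21, 21, 21, 25, 25, 27, 27, 27, 27]  # $M, 2001-2010

def pdv(stream, rate):
    return sum(p / (1 + rate) ** t for t, p in enumerate(stream))

value_2001 = pdv(payments, 0.08)   # well below the $252M sticker total
```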

Benefits More fans in the seats Gate Parking Merchandise Increased chance at playoffs and world series Sponsorships (Loss to revenue sharing) Franchise value

How Many New Fans? Projected 8 more wins per year. What is the relationship between wins and attendance? Not known precisely Many empirical studies (The Journal of Sports Economics) Use a regression model to find out.

Baseball Data 31 teams, 17 years (fewer years for 6 teams). Winning percentage (Wins = 162 * percentage). Rank. Average attendance (Attendance = 81 * Average). Average team salary. Number of all stars. Manager years of experience. Percent of team that is rookies. Lineup changes. Mean player experience. Dummy variable for change in manager.

Baseball Data (Panel Data)

A Dynamic Equation

About 220,000 fans

Marginal Value of One More Win

The Regression Model

Marginal Value of One Win

Marginal Value of an A-Rod 8 games * 63,734 fans = 509,878 fans. 509,878 fans * ($18 per ticket + $2.50 parking etc. + $1.80 stuff (hats, bobble head dolls, …)) = $11.3 million per year!!!!! It's not close. (Marginal cost is at least $16.5M/year.)