Statistical Inference and Regression Analysis: GB.3302.30 Professor William Greene Stern School of Business IOMS Department Department of Economics
Inference and Regression Perfect Collinearity
3/120 Perfect Multicollinearity
If X does not have full rank, then at least one column can be written as a linear combination of the other columns. Then X'X does not have full rank and cannot be inverted, so b cannot be computed.
4/120 Multicollinearity
Enhanced Monet Area Effect Model: Height and Width Effects
log(Price) = β1 + β2 logArea + β3 logAspectRatio + β4 logHeight + β5 Signature + ε
(Aspect Ratio = Height/Width)
5/120 Short Rank X
Enhanced Monet Area Effect Model: Height and Width Effects
log(Price) = β1 + β2 logArea + β3 logAspectRatio + β4 logHeight + β5 Signature + ε
(Aspect Ratio = Height/Width)
x1 = 1, x2 = logArea, x3 = logAspect, x4 = logHeight, x5 = Signature
x2 = logH + logW, x3 = logH - logW, x4 = logH
x2 + x3 - 2x4 = (logH + logW) + (logH - logW) - 2 logH = 0, i.e., x4 = (1/2)x2 + (1/2)x3
c = [0, 1, 1, -2, 0], so Xc = 0.
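To make the short-rank problem concrete, here is a minimal numpy sketch (illustrative data, not the Monet sample) showing that the vector c = [0, 1, 1, -2, 0] annihilates X, so X'X is singular and b cannot be computed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
logH, logW = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([
    np.ones(n),               # x1: constant
    logH + logW,              # x2: log area
    logH - logW,              # x3: log aspect ratio
    logH,                     # x4: log height
    rng.integers(0, 2, n),    # x5: signature dummy
])

c = np.array([0.0, 1.0, 1.0, -2.0, 0.0])
print(np.allclose(X @ c, 0))     # True: exact linear dependence
print(np.linalg.matrix_rank(X))  # 4, not 5: X has short rank
# X'X cannot be inverted, so the normal equations have no unique solution.
```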
Inference and Regression Least Squares Fit
7/120 Minimizing e'e
b minimizes e'e = (y - Xb)'(y - Xb). Any other coefficient vector has a larger sum of squares. (Least squares is least squares.)
A quick proof: let d be any other coefficient vector and u = y - Xd. Then
u'u = (y - Xd)'(y - Xd)
    = [y - Xb - X(d - b)]'[y - Xb - X(d - b)]
    = [e - X(d - b)]'[e - X(d - b)].
Expanding, the cross term vanishes because X'e = 0, so u'u = e'e + (d - b)'X'X(d - b) ≥ e'e, with strict inequality whenever d ≠ b and X has full rank.
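A quick numerical confirmation of the result, assuming nothing beyond simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=100)

b, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares coefficients
e = y - X @ b

d = b + np.array([0.1, -0.05, 0.2])        # any other coefficient vector
u = y - X @ d
print(u @ u >= e @ e)                       # True
# u'u = e'e + (d-b)'X'X(d-b):
print(np.isclose(u @ u, e @ e + (d - b) @ (X.T @ X) @ (d - b)))  # True
```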
8/120 Dropping a Variable
An important special case: comparing the results we get with and without a variable z in the equation, in addition to the other variables in X. Two results follow from the previous slide:
1. Dropping a variable(s) cannot improve the fit, that is, cannot reduce the sum of squares. The relevant d is (*, *, *, …, 0), i.e., some vector that has a zero in a particular place.
2. Adding a variable(s) cannot degrade the fit, that is, cannot increase the sum of squares. Compare the sum of squares with the zero in that location to the sum of squares without it; just reverse the cases.
9/120 The Fit of the Regression “Variation:” In the context of the “model” we speak of variation of a variable as movement of the variable, usually associated with (not necessarily caused by) movement of another variable.
10/120 Decomposing the Variation of y Total sum of squares = Regression Sum of Squares (SSR) + Residual Sum of Squares (SSE)
11/120 Decomposing the Variation
Σi (yi - ȳ)² = Σi (ŷi - ȳ)² + Σi ei²   (Total SS = Regression SS + Residual SS)
12/120 A Fit Measure
R² = SSR/SST = 1 - e'e / Σi (yi - ȳ)²
(Very important result.) R² is bounded by zero and one if and only if: (a) there is a constant term in X, and (b) the line is computed by linear least squares.
13/120 Understanding R²
R² = the squared correlation between y and the prediction of y given by the regression.
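A small check of this equivalence (simulated data, assuming a constant term in X):

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([2.0, 1.0, -1.0]) + rng.normal(size=200)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ b
e = y - yhat

r2 = 1 - (e @ e) / np.sum((y - y.mean())**2)
corr2 = np.corrcoef(y, yhat)[0, 1]**2
print(np.isclose(r2, corr2))  # True: R² equals the squared correlation
```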
14/120 Regression Results
-----------------------------------------------------------------------------
Ordinary least squares regression
LHS = BOX            Mean               = 20.72065
                     Standard deviation = 17.49244
No. of observations = 62                     DegFreedom    Mean square
Regression Sum of Squares =  9203.46              2         4601.72954
Residual Sum of Squares   =  9461.66             59          160.36711
Total Sum of Squares      =  18665.1             61          305.98555
Standard error of e = 12.66361    Root MSE      = 12.35344
Fit R-squared       =   .49308    R-bar squared =   .47590
Model test F[2, 59] = 28.69497    Prob F > F*   =   .00000
--------+--------------------------------------------------------------------
        |                Standard           Prob.         95% Confidence
     BOX|  Coefficient      Error       t   |t|>T*            Interval
--------+--------------------------------------------------------------------
Constant|  -12.0721**     5.30813   -2.27    .0266      -22.4758   -1.6684
CNTWAIT3|   53.9033***   12.29513    4.38    .0000       29.8053   78.0013
  BUDGET|    .12740***     .04492    2.84    .0062        .03936    .21544
--------+--------------------------------------------------------------------
15/120 Adding Variables
R² never falls when a variable z is added to the regression. (A useful general result.)
16/120 Adding Variables to a Model
What is the effect of adding PN, PD, PS, YEAR to the model (one at a time)?
----------------------------------------------------------------------
Ordinary least squares regression
LHS = G    Mean = 226.09444    Standard deviation = 50.59182
Number of observs. = 36
Model size: Parameters = 3, Degrees of freedom = 33
Residuals: Sum of squares = 1472.79834
Fit R-squared = .98356    Adjusted R-squared = .98256
Model test F[2, 33] (prob) = 987.1 (.0000)
Effects of additional variables on the regression below:
Variable   Coefficient   New R-sqrd   Chg.R-sqrd   Partial-Rsq   Partial F
PD           -26.0499       .9867        .0031        .1880        7.411
PN           -15.1726       .9878        .0043        .2594       11.209
PS            -8.2171       .9890        .0055        .3320       15.904
YEAR          -2.1958       .9861        .0025        .1549        5.864
--------+-------------------------------------------------------------
Variable|  Coefficient   Standard Error   t-ratio   P[|T|>t]   Mean of X
--------+-------------------------------------------------------------
Constant|  -79.7535***      8.67255       -9.196      .0000
      PG|  -15.1224***      1.88034       -8.042      .0000     2.31661
       Y|    .03692***       .00132       28.022      .0000     9232.86
--------+-------------------------------------------------------------
17/120 Adjusted R Squared
Adjusted R² (adjusted for degrees of freedom) includes a penalty for variables that don't add much fit. It can fall when a variable is added to the equation.
18/120 Regression Results
-----------------------------------------------------------------------------
Ordinary least squares regression
LHS = BOX            Mean               = 20.72065
                     Standard deviation = 17.49244
No. of observations = 62                     DegFreedom    Mean square
Regression Sum of Squares =  9203.46              2         4601.72954
Residual Sum of Squares   =  9461.66             59          160.36711
Total Sum of Squares      =  18665.1             61          305.98555
Standard error of e = 12.66361    Root MSE      = 12.35344
Fit R-squared       =   .49308    R-bar squared =   .47590
Model test F[2, 59] = 28.69497    Prob F > F*   =   .00000
--------+--------------------------------------------------------------------
        |                Standard           Prob.         95% Confidence
     BOX|  Coefficient      Error       t   |t|>T*            Interval
--------+--------------------------------------------------------------------
Constant|  -12.0721**     5.30813   -2.27    .0266      -22.4758   -1.6684
CNTWAIT3|   53.9033***   12.29513    4.38    .0000       29.8053   78.0013
  BUDGET|    .12740***     .04492    2.84    .0062        .03936    .21544
--------+--------------------------------------------------------------------
19/120 Adjusted R-Squared
As we will discover when we study regression with more than one variable, a researcher can increase R² just by adding variables to a model, even if those variables do not really explain y or have any real relationship to it at all. To have a fit measure that accounts for this, "adjusted R²" is a number that increases with the correlation but decreases with the number of variables.
20/120 Notes About Adjusted R²
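The slide's formula did not survive the transfer; the standard definition, which reproduces the numbers on slide 14 (R² = .49308, N = 62, K = 3 gives R-bar squared = .47590), is:

```latex
\bar{R}^2 \;=\; 1 - \frac{e'e/(N-K)}{\sum_i (y_i-\bar{y})^2/(N-1)}
         \;=\; 1 - \frac{N-1}{N-K}\left(1 - R^2\right)
```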
Inference and Regression Transformed Data
22/120 Linear Transformations of Data
Change units of measurement, e.g., $ to millions of $ (in the internet buzz regression, divide Box by 1,000,000).
Change the meaning of variables:
x = (x1 = nominal interest = i, x2 = inflation = dp, x3 = GDP)
z = (x1 - x2 = real interest = i - dp, x2 = inflation = dp, x3 = GDP)
Change the theory of art appreciation:
x = (x1 = logHeight, x2 = logWidth, x3 = signature)
z = (x1 - x2 = logAspectRatio, x2 = logHeight, x3 = signature)
23/120 (Linearly) Transformed Data
How does linear transformation affect the results of least squares? Z = XP for a K×K nonsingular P. (Each variable in Z is a combination of the variables in X.)
Based on X, b = (X'X)⁻¹X'y. You can show (just multiply it out) that the coefficients when y is regressed on Z are c = P⁻¹b.
"Fitted value" is Zc = XPP⁻¹b = Xb. The same!
Residuals from using Z are y - Zc = y - Xb (we just proved this). The same!
The sum of squared residuals must be identical, as y - Xb = e = y - Zc.
R² must also be identical, as R² = 1 - e'e/(the same total sum of squares).
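A numpy verification of these claims (P below is an arbitrary nonsingular matrix, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=100)

P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, -1.0]])   # any nonsingular KxK matrix
Z = X @ P

b, *_ = np.linalg.lstsq(X, y, rcond=None)
c, *_ = np.linalg.lstsq(Z, y, rcond=None)

print(np.allclose(c, np.linalg.solve(P, b)))  # c = P^{-1} b
print(np.allclose(Z @ c, X @ b))              # identical fitted values
```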
24/120 Principal Components
Z = XC, with fewer columns than X, including as much of the 'variation' of X as possible; the columns of Z are orthogonal.
Why do we do this? Collinearity; and to combine variables of ambiguous identity, such as test scores as measures of 'ability'.
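A minimal sketch of the idea, assuming standardized data and using eigenvectors of X'X as the weight matrix C:

```python
import numpy as np

rng = np.random.default_rng(4)
x1 = rng.normal(size=200)
X = np.column_stack([x1,
                     x1 + 0.05 * rng.normal(size=200),  # nearly collinear with x1
                     rng.normal(size=200)])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize first

eigvals, C = np.linalg.eigh(Xs.T @ Xs)      # eigenvectors = component loadings
Z = Xs @ C[:, ::-1][:, :2]                  # keep the 2 largest components

offdiag = Z.T @ Z - np.diag(np.diag(Z.T @ Z))
print(np.allclose(offdiag, 0))              # True: columns of Z are orthogonal
```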
25/120
Ordinary least squares regression
LHS = LOGBOX    Mean = 16.47993    Standard deviation = .9429722
Number of observs. = 62
Residuals: Sum of squares = 20.54972    Standard error of e = .6475971
Fit R-squared = .6211405    Adjusted R-squared = .5283586
Variable    Coefficient   Standard Error   t-ratio   P[|T|>t]   Mean of X
Constant    12.5388***       .98766        12.695      .0000
LOGBUDGT      .23193         .18346         1.264      .2122     3.71468
STARPOWR      .00175         .01303          .135      .8935    18.0316
SEQUEL        .43480         .29668         1.466      .1492      .14516
MPRATING     -.26265*        .14179        -1.852      .0700     2.96774
ACTION       -.83091***      .29297        -2.836      .0066      .22581
COMEDY       -.03344         .23626         -.142      .8880      .32258
ANIMATED     -.82655**       .38407        -2.152      .0363      .09677
HORROR        .33094         .36318          .911      .3666      .09677
4 INTERNET BUZZ VARIABLES:
LOGADCT       .29451**       .13146         2.240      .0296     8.16947
LOGCMSON      .05950         .12633          .471      .6397     3.60648
LOGFNDGO      .02322         .11460          .203      .8403     5.95764
CNTWAIT3     2.59489***      .90981         2.852      .0063      .48242
26/120
Ordinary least squares regression
LHS = LOGBOX    Mean = 16.47993    Standard deviation = .9429722
Number of observs. = 62
Residuals: Sum of squares = 25.36721    Standard error of e = .6984489
Fit R-squared = .5323241    Adjusted R-squared = .4513802
Variable    Coefficient   Standard Error   t-ratio   P[|T|>t]   Mean of X
Constant    11.9602***       .91818        13.026      .0000
LOGBUDGT      .38159**       .18711         2.039      .0465     3.71468
STARPOWR      .01303         .01315          .991      .3263    18.0316
SEQUEL        .33147         .28492         1.163      .2500      .14516
MPRATING     -.21185         .13975        -1.516      .1356     2.96774
ACTION       -.81404**       .30760        -2.646      .0107      .22581
COMEDY        .04048         .25367          .160      .8738      .32258
ANIMATED     -.80183*        .40776        -1.966      .0546      .09677
HORROR        .47454         .38629         1.228      .2248      .09677
PCBUZZ        .39704***      .08575         4.630      .0000     9.19362
Inference and Regression Model Building and Functional Form
28/120 Using Logs
29/120 Time Trends in Regression
y = α + β1 x + β2 t + ε. β2 is the period-to-period increase not explained by anything else.
log y = α + β1 log x + β2 t + ε (t itself, not log t). 100·β2 is the period-to-period percentage increase not explained by anything else.
30/120 U.S. Gasoline Market: Price and Income Elasticities. Downward Trend in Gasoline Usage.
31/120 Application: Health Care Data
German Health Care Usage Data: 27,326 observations on German households, 1984-1994.
DOCTOR   = 1(number of doctor visits > 0)
HOSPITAL = 1(number of hospital visits > 0)
HSAT     = health satisfaction, coded 0 (low) to 10 (high)
DOCVIS   = number of doctor visits in last three months
HOSPVIS  = number of hospital visits in last calendar year
PUBLIC   = 1 if insured in public health insurance; 0 otherwise
ADDON    = 1 if insured by add-on insurance; 0 otherwise
INCOME   = household nominal monthly net income in German marks / 10000
HHKIDS   = 1 if children under age 16 in the household; 0 otherwise
EDUC     = years of schooling
FEMALE   = 1(female headed household)
AGE      = age in years
MARRIED  = marital status
32/120 Dummy Variable
D = 0 in one case and 1 in the other.
Y = a + bX + cD + e
When D = 0, E[Y|X] = a + bX. When D = 1, E[Y|X] = (a + c) + bX.
36/120 A Conspiracy Theory for Art Sales at Auction
Sotheby's and Christie's conspired on commission rates from 1995 to about 2000.
37/120 If the Theory is Correct…
(Comparison of prices: works sold from 1995 to 2000 vs. works sold before 1995 or after 2000.)
38/120 Evidence: Two Dummy Variables (Signature and Conspiracy Effects)
The statistical evidence seems to be consistent with the theory.
39/120 Set of Dummy Variables
Usually, Z = Type = 1, 2, …, K.
Y = a + bX + d1·1(Type=1) + d2·1(Type=2) + … + dK·1(Type=K)
40/120 A Set of Dummy Variables
A complete set of dummy variables divides the sample into groups. Fit the regression with "group" effects. You need to drop one (any one) of the variables to compute the regression, as in the sketch below. (Avoid the "dummy variable trap.")
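A minimal pandas sketch of the convention (hypothetical data): one category is dropped so the dummies plus the constant are not perfectly collinear.

```python
import pandas as pd

df = pd.DataFrame({"region": ["north", "south", "midwest", "west", "north", "west"]})
dummies = pd.get_dummies(df["region"], drop_first=True)  # omits one category
print(list(dummies.columns))  # ['north', 'south', 'west']; 'midwest' is the baseline
```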
41/120 Group Effects in Teacher Ratings
42/120 Rankings of 132 U.S. Liberal Arts Colleges
Reputation = α + β1 Religious + β2 GenderEcon + β3 EconFac + β4 North + β5 South + β6 Midwest + β7 West + ε
Nancy Burnett, Journal of Economic Education, 1998.
43/120 Minitab does not like this model.
44/120 Too many dummy variables cause perfect multicollinearity.
If we use all four region dummies:
Reputation = a + bn + …  if north
Reputation = a + bm + …  if midwest
Reputation = a + bs + …  if south
Reputation = a + bw + …  if west
Only three are needed, so Minitab dropped west:
Reputation = a + bn + …  if north
Reputation = a + bm + …  if midwest
Reputation = a + bs + …  if south
Reputation = a + …       if west
45/120 Unordered Categorical Variables
House price data (fictitious):
Type 1 = Split level
Type 2 = Ranch
Type 3 = Colonial
Type 4 = Tudor
Use 3 dummy variables for this kind of data (not all 4). Using the variable STYLE itself in the model makes no sense: you could change the numbering scale any way you like; 1, 2, 3, 4 are just labels.
46/120 Transform Style to Types
48/120 Hedonic House Price Regression
Each of these coefficients is relative to a Split Level, since that is the omitted category. E.g., the price of a Ranch house is $74,369 less than that of a Split Level of the same size with the same number of bedrooms.
49/120 We used McDonald’s Per Capita
50/120 More Movie Madness
McDonald's and Movies (Craig, Douglas, Greene, International Journal of Marketing)
Log Foreign Box Office(movie, country, year) = α + β1 LogBox(movie, US, year) + β2 LogPCIncome + β4 LogMacsPC + GenreEffect + CountryEffect + ε
51/120 Movie Madness Data (n=2198)
52/120 Macs and Movies
Countries and Some of the Data
Code  Country     Pop (mm)   Per cap income   # of McDonalds   Language
 1    Argentina      37          12090              173        Spanish
 2    Chile          15           9110               70        Spanish
 3    Spain          39          19180              300        Spanish
 4    Mexico         98           8810              270        Spanish
 5    Germany        82          25010             1152        German
 6    Austria         8          26310              159        German
 7    Australia      19          25370              680        English
 8    UK             60          23550             1152        English
Genres (MPAA): 1=Drama, 2=Romance, 3=Comedy, 4=Action, 5=Fantasy, 6=Adventure, 7=Family, 8=Animated, 9=Thriller, 10=Mystery, 11=Science Fiction, 12=Horror, 13=Crime
54/120 CRIME is the left-out GENRE. AUSTRIA is the left-out country. Australia and UK were left out for other reasons (an algebraic problem with only 8 countries).
55/120 Functional Form: Quadratic
Y = a + b1 X + b2 X² + e
dE[Y|X]/dX = b1 + 2 b2 X
59/120 Interaction Effect
Y = a + b1 X + b2 Z + b3 X·Z + e
E.g., the benefit of a year of education depends on how old one is:
log(Income) = a + b1 Ed + b2 Ed² + b3 Ed·Age + e
d log(Income)/d Ed = b1 + 2 b2 Ed + b3 Age
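A sketch of the marginal effect with purely illustrative coefficients (b1, b2, b3 below are hypothetical, chosen only to mimic the next slide's 6.8%-to-7.2% pattern):

```python
def marginal_effect_of_education(ed, age, b1=0.060, b2=0.0001, b3=0.0002):
    # d log(income) / d Ed = b1 + 2*b2*Ed + b3*Age
    return b1 + 2 * b2 * ed + b3 * age

print(marginal_effect_of_education(ed=16, age=20))  # ~0.067, about 6.7% per year
print(marginal_effect_of_education(ed=16, age=40))  # ~0.071, about 7.1% per year
```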
60/120 The effect of an additional year of education increases from about 6.8% at age 20 to 7.2% at age 40.
Statistics and Data Analysis Properties of Least Squares
62/120 Terms of Art
Estimates and estimators. Properties of an estimator: the sampling distribution. "Finite sample" properties as opposed to "asymptotic" or "large sample" properties.
63/120 Least Squares
64/120 Deriving the Properties of b
b = β + (X'X)⁻¹X'ε, so b = the parameter vector plus a linear combination of the disturbances, each times a vector. Therefore, b is a vector of random variables, and we analyze it as such. We do the analysis conditional on an X, then show that the results do not depend on the particular X in hand, so the result must be general, i.e., independent of X.
65/120 Unbiasedness of b
E[b|X] = β + (X'X)⁻¹X'E[ε|X] = β, and therefore E[b] = E_X[E[b|X]] = β.
66/120 Left Out Variable Bias
A crucial result about specification: two sets of variables in the regression, X1 and X2.
y = X1β1 + X2β2 + ε
What if the regression is computed without the second set of variables? What is the expectation of the "short" regression estimator b1 = (X1'X1)⁻¹X1'y?
67/120 The Left Out Variable Formula
E[b1] = β1 + (X1'X1)⁻¹X1'X2β2
The (truly) short regression estimator is biased.
Application: Quantity = β1 Price + β2 Income + ε. If you regress Quantity on Price and leave out Income, what do you get?
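A simulation sketch of the formula (simulated data, not the gasoline series): with Cov[Price, Income] > 0 and β2 > 0, the short regression's price coefficient is biased upward.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
income = rng.normal(size=n)
price = 0.8 * income + rng.normal(size=n)               # Cov[price, income] > 0
qty = -1.0 * price + 0.5 * income + rng.normal(size=n)  # beta1 = -1, beta2 = +0.5

X1 = np.column_stack([np.ones(n), price])  # "short" regression, income omitted
b1, *_ = np.linalg.lstsq(X1, qty, rcond=None)
print(b1[1])  # about -0.76, not -1: biased upward, per E[b1] = beta1 + (X1'X1)^-1 X1'X2 beta2
```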
68/120 Application: Left Out Variable
Leave out Income. What do you get? In time series data, β1 < 0 (usually) and Cov[Price, Income] > 0, so the short regression will overestimate the price coefficient.
Simple regression of G on a constant and PG: the price coefficient should be negative.
69/120 Estimated ‘Demand’ Equation Shouldn’t the Price Coefficient be Negative?
70/120 Multiple Regression of G on Y and PG. The Theory Works!
----------------------------------------------------------------------
Ordinary least squares regression
LHS = G    Mean = 226.09444    Standard deviation = 50.59182
Number of observs. = 36
Model size: Parameters = 3, Degrees of freedom = 33
Residuals: Sum of squares = 1472.79834    Standard error of e = 6.68059
Fit R-squared = .98356    Adjusted R-squared = .98256
Model test F[2, 33] (prob) = 987.1 (.0000)
--------+-------------------------------------------------------------
Variable|  Coefficient   Standard Error   t-ratio   P[|T|>t]   Mean of X
--------+-------------------------------------------------------------
Constant|  -79.7535***      8.67255       -9.196      .0000
       Y|    .03692***       .00132       28.022      .0000     9232.86
      PG|  -15.1224***      1.88034       -8.042      .0000     2.31661
--------+-------------------------------------------------------------
71/120 Specification Errors - 1
Omitting relevant variables: suppose the correct model is y = X1β1 + X2β2 + ε, i.e., two sets of variables, and least squares is computed omitting X2. Some easily proved results:
Var[b1] is smaller than Var[b1.2]. You get a smaller variance when you omit X2. (One interpretation: omitting X2 amounts to using extra information, β2 = 0. Even if the information is wrong (see the next result), it reduces the variance. This is an important result.)
72/120 Specification Errors - 2
Including superfluous variables: just reverse the results. Including superfluous variables increases variance (the cost of not using information). It does not cause a bias: if the variables in X2 are truly superfluous, then β2 = 0, so E[b1.2] = β1.
Inference and Regression Estimating Var[b|X]
74/120 Variance of the Least Squares Estimator
Var[b|X] = E[(b - β)(b - β)'|X] = (X'X)⁻¹X' E[εε'|X] X(X'X)⁻¹ = σ²(X'X)⁻¹.
75/120 Gauss-Markov Theorem
A theorem of Gauss and Markov: least squares is the minimum variance linear unbiased estimator.
1. Linear estimator.
2. Unbiased: E[b|X] = β.
3. Minimum variance, comparing positive definite matrices: Var[c|X] - Var[b|X] is nonnegative definite for any other linear and unbiased estimator c.
76/120 True Variance of b|X
77/120 Estimating σ²
Use the residuals instead of the disturbances. The natural estimator: e'e/N as a sample surrogate for ε'ε/N.
Imperfect observation of εi: ei = εi + (β - b)'xi.
Downward bias of e'e/N. We obtain the result E[e'e|X] = (N - K)σ².
78/120 Expectation of e’e
79/120 Expected Value of e'e
80/120 Estimating σ²
The unbiased estimator is s² = e'e/(N - K). N - K is the "degrees of freedom correction."
81/120 Var[b|X]
Estimating the covariance matrix for b|X: the true covariance matrix is σ²(X'X)⁻¹, and the natural estimator is s²(X'X)⁻¹. "Standard errors" of the individual coefficients are the square roots of the diagonal elements.
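A numpy sketch of the whole chain, from residuals to standard errors (simulated data):

```python
import numpy as np

rng = np.random.default_rng(6)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / (n - K)                 # unbiased estimator of sigma^2
cov_b = s2 * np.linalg.inv(X.T @ X)  # estimated Var[b|X]
std_err = np.sqrt(np.diag(cov_b))    # standard errors of the coefficients
print(std_err)
```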
82/120 X'X, (X'X)⁻¹, and s²(X'X)⁻¹ (numerical illustration)
83/120 Regression Results
----------------------------------------------------------------------
Ordinary least squares regression
LHS = G    Mean = 226.09444    Standard deviation = 50.59182
Number of observs. = 36
Model size: Parameters = 7, Degrees of freedom = 29
Residuals: Sum of squares = 778.70227
Standard error of e = 5.18187   <***** sqr[778.70227/(36 - 7)]
Fit R-squared = .99131    Adjusted R-squared = .98951
Model test F[6, 29] (prob) = 551.2 (.0000)
--------+-------------------------------------------------------------
Variable|  Coefficient   Standard Error   t-ratio   P[|T|>t]   Mean of X
--------+-------------------------------------------------------------
Constant|   -7.73975       49.95915        -.155      .8780
      PG|  -15.3008***      2.42171       -6.318      .0000     2.31661
       Y|    .02365***       .00779        3.037      .0050     9232.86
   TREND|    4.14359**       1.91513       2.164      .0389     17.5000
     PNC|   15.4387         15.21899       1.014      .3188     1.67078
     PUC|   -5.63438         5.02666      -1.121      .2715     2.34364
     PPT|  -12.4378**        5.20697      -2.389      .0236     2.74486
--------+-------------------------------------------------------------
Create  ; trend = year - 1960 $
Namelist; x = one, pg, y, trend, pnc, puc, ppt $
Regress ; lhs = g ; rhs = x $
Inference and Regression Not Perfect Collinearity
85/120 Variance Inflation and Multicollinearity
When variables are highly but not perfectly correlated, least squares is difficult to compute accurately, and the variances of the least squares slopes become very large.
Variance inflation factors: for each xk, VIF(k) = 1/[1 - R²(k)], where R²(k) is the R² in the regression of xk on all the other x variables in the data matrix.
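A small sketch of the definition (assumes X already contains a constant column, and k indexes a non-constant column):

```python
import numpy as np

def vif(X, k):
    """VIF(k) = 1/[1 - R^2(k)] from regressing column k on the other columns."""
    xk, others = X[:, k], np.delete(X, k, axis=1)
    b, *_ = np.linalg.lstsq(others, xk, rcond=None)
    e = xk - others @ b
    r2 = 1 - (e @ e) / np.sum((xk - xk.mean()) ** 2)
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(7)
x1 = rng.normal(size=100)
X = np.column_stack([np.ones(100), x1, x1 + 0.01 * rng.normal(size=100)])
print(vif(X, 2))  # huge: column 2 is nearly collinear with column 1
```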
86/120 NIST Statistical Reference Data Sets – Accuracy Tests
87/120 The Filipelli Problem
88/120 VIF for X10: R² = .99999999999999630, VIF = .27294543196184830×10¹⁵
90/120 Other software: Minitab reports the correct answer; Stata drops X10.
91/120 Accurate and Inaccurate Computation of Filipelli Results
Accurate computation requires not actually computing (X'X)⁻¹. We (and others) use the QR method. See the text for details.
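A sketch of the idea: with X = QR (Q orthonormal, R upper triangular), the normal equations X'Xb = X'y collapse to Rb = Q'y, so the solution never forms X'X and avoids squaring the condition number.

```python
import numpy as np

def ols_qr(X, y):
    """Least squares via QR, without forming (X'X)^{-1}."""
    Q, R = np.linalg.qr(X)
    return np.linalg.solve(R, Q.T @ y)
```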
Inference and Regression Testing Hypotheses
93/120 Testing Hypotheses
94/120 Hypothesis Testing: Criteria
95/120 The F Statistic has an F Distribution
96/120 Nonnormality or Large N
The denominator of F converges to 1; the numerator converges to chi squared[J]/J. Rely on the law of large numbers for the denominator and the CLT for the numerator: J·F converges to chi squared[J]. Use critical values from the chi squared distribution.
97/120 Significance of the Regression: R*² = 0
98/120 Table of 95% Critical Values for F
100/120
Ordinary least squares regression
LHS = LOGBOX    Mean = 16.47993    Standard deviation = .9429722
Number of observs. = 62
Residuals: Sum of squares = 25.36721    Standard error of e = .6984489
Fit R-squared = .5323241    Adjusted R-squared = .4513802
Variable    Coefficient   Standard Error   t-ratio   P[|T|>t]   Mean of X
Constant    11.9602***       .91818        13.026      .0000
LOGBUDGT      .38159**       .18711         2.039      .0465     3.71468
STARPOWR      .01303         .01315          .991      .3263    18.0316
SEQUEL        .33147         .28492         1.163      .2500      .14516
MPRATING     -.21185         .13975        -1.516      .1356     2.96774
ACTION       -.81404**       .30760        -2.646      .0107      .22581
COMEDY        .04048         .25367          .160      .8738      .32258
ANIMATED     -.80183*        .40776        -1.966      .0546      .09677
HORROR        .47454         .38629         1.228      .2248      .09677
PCBUZZ        .39704***      .08575         4.630      .0000     9.19362
F = [(.6211405 - .5323241)/3] / [(1 - .6211405)/(62 - 13)] = 3.829;  F* = 2.84
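The slide's F statistic, recomputed from the two R² values:

```python
r2_u, r2_r = 0.6211405, 0.5323241  # unrestricted (4 buzz variables) vs. restricted (PCBUZZ)
J, N, K_u = 3, 62, 13              # restrictions, observations, unrestricted parameters
F = ((r2_u - r2_r) / J) / ((1 - r2_u) / (N - K_u))
print(round(F, 3))                  # 3.829 > F* = 2.84, so the restriction is rejected
```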
Inference and Regression A Case Study
102/120 Mega Deals for Stars
A capital budgeting computation of costs and benefits. Certainty: costs. Uncertainty: benefits. Long term: need for discounting.
103/120 Baseball Story: A Huge Sports Contract
Alex Rodriguez was hired by the Texas Rangers for something like $25 million per year in 2000.
Costs: the salary, plus and minus some fine-tuning of the numbers.
Benefits: more fans in the stands.
How to determine whether the benefits exceed the costs? Use a regression model.
104/120 The Texas Deal for Alex Rodriguez
2001: Signing bonus = $10M
2001: 21    2002: 21    2003: 21    2004: 21
2005: 25    2006: 25
2007: 27    2008: 27    2009: 27    2010: 27
Total: $252M ???
105/120 The Real Deal
Year   Salary   Bonus   Deferral
2001     21       2     5 to 2011
2002     21       2     4 to 2012
2003     21       2     3 to 2013
2004     21       2     4 to 2014
2005     25       2     4 to 2015
2006     25             4 to 2016
2007     27             3 to 2017
2008     27             3 to 2018
2009     27             3 to 2019
2010     27             5 to 2020
Deferrals accrue interest of 3% per year.
106/120 Costs
Insurance: about 10% of the contract per year.
(Taxes: about 40% of the contract.)
Some additional costs in revenue sharing: revenues to the league (anticipated, about 17.5% of marginal benefits; uncertain).
Interest on deferred salary: $150,000 in the first year, well over $1,000,000 in 2010.
(Reduction) The $3M it would cost to have a different shortstop (Nomar Garciaparra).
107/120 PDV of the Costs
Using an 8% discount factor (which they used) and accounting for all costs: roughly $21M to $28M in each year from 2001 to 2010, then the deferred payments from 2010 to 2020.
Total costs: about $165 million in 2001 (present discounted value).
108/120 Benefits
More fans in the seats: gate, parking, merchandise.
Increased chance at the playoffs and World Series.
Sponsorships.
(Loss to revenue sharing.)
Franchise value.
109/120 How Many New Fans?
Projected 8 more wins per year. What is the relationship between wins and attendance? It is not known precisely, but there are many empirical studies (see the Journal of Sports Economics). Use a regression model to find out.
110/120 Baseball Data
31 teams, 17 years (fewer years for 6 teams).
Winning percentage: Wins = 162 × percentage.
Rank.
Average attendance: Attendance = 81 × average.
Average team salary.
Number of All-Stars.
Manager years of experience.
Percent of team that is rookies.
Lineup changes.
Mean player experience.
Dummy variable for change in manager.
111/120 Baseball Data (Panel Data)
112/120 A Dynamic Equation
116/120 About 220,000 fans
117/120 The Regression Model
119/120 Marginal Value of One Win
120/120 Marginal Value of an A-Rod
8 wins × 63,734 fans = 509,878 fans.
509,878 fans × ($18 per ticket + $2.50 parking etc. + $1.80 stuff (hats, bobble-head dolls, …))
= $11.3 million per year!
It's not close. (Marginal cost is at least $16.5M/year.)
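Reproducing the slide's back-of-the-envelope arithmetic (the 63,734 fans-per-win figure is taken from the slide):

```python
new_fans = 8 * 63_734                # wins gained x fans per win (slide rounds to 509,878)
rev_per_fan = 18.00 + 2.50 + 1.80    # ticket + parking + merchandise
print(new_fans * rev_per_fan / 1e6)  # ~11.37, i.e., about $11.3M per year
```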