Classical Linear Regression Model


Classical Linear Regression Model
- Notation and Assumptions
- Model Estimation
  - Method of Moments
  - Least Squares
- Partitioned Regression
- Model Interpretation

Notation
- y: dependent variable (regressand), with elements yi, i = 1,2,…,n
- X: explanatory variables (regressors), with rows xi', i = 1,2,…,n, and elements xij, i = 1,2,…,n; j = 1,2,…,K

Assumptions
Assumption 1: Linearity
yi = β1 xi1 + β2 xi2 + … + βK xiK + εi  (i = 1,2,…,n)
Linearity in vector notation: yi = xi'β + εi  (i = 1,2,…,n)

Assumptions
Linearity in matrix notation: y = Xβ + ε
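As a concrete illustration, here is a minimal numpy sketch that simulates a sample satisfying Assumptions 1-4; the sample size, parameter values, and seed are invented for illustration:

```python
import numpy as np

# Simulate a sample satisfying the classical assumptions:
# y = X*beta + eps, with E(eps|X) = 0 and Var(eps|X) = sigma^2 * I_n.
rng = np.random.default_rng(0)
n, K = 100, 3

X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # first regressor is a constant
beta = np.array([1.0, 2.0, -0.5])   # true parameters (illustrative values)
sigma = 1.0
eps = sigma * rng.normal(size=n)    # homoskedastic, uncorrelated errors, drawn independently of X

y = X @ beta + eps
```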

Assumptions
Assumption 2: Exogeneity
E(εi|X) = E(εi|xjk, j = 1,2,…,n; k = 1,2,…,K) = 0  (i = 1,2,…,n)
Implications of exogeneity:
- E(εi) = E(E(εi|X)) = 0
- E(xjk εi) = E(E(xjk εi|X)) = E(xjk E(εi|X)) = 0
- Cov(εi, xjk) = E(xjk εi) - E(xjk)E(εi) = 0

Assumptions
Assumption 3: No Multicollinearity
rank(X) = K, with probability 1.
Assumption 4: Spherical Error Variance
E(εi^2|X) = σ^2 > 0, i = 1,2,…,n
E(εi εj|X) = 0, i,j = 1,2,…,n; i ≠ j
In matrix notation: Var(ε|X) = E(εε'|X) = σ^2 In

Assumptions
Implications of spherical error variance (homoskedasticity and no autocorrelation):
Var(ε) = E(Var(ε|X)) + Var(E(ε|X)) = E(Var(ε|X)) = σ^2 In
Var(X'ε|X) = E(X'εε'X|X) = σ^2 X'X, so unconditionally Var(X'ε) = σ^2 E(X'X)
Note: E(X'ε|X) = 0 follows from the exogeneity assumption.

Discussions
- Exogeneity in time-series models
- Random samples: the sample (y, X) is a random sample if {yi, xi} is i.i.d. (independently and identically distributed).
- Fixed regressors: X is fixed or deterministic.

Discussions
Nonlinearity in variables:
g(yi) = β1 f1(xi1) + β2 f2(xi2) + … + βK fK(xiK) + εi  (i = 1,2,…,n)
The model remains linear in the parameters and the model errors.

Method-of-Moments Estimator
From the implication of the strict exogeneity assumption, E(xjk εi) = 0 for i,j = 1,2,…,n and k = 1,2,…,K. That is:
Moment conditions: E(X'ε) = 0
Sample analogue: X'(y - Xb) = 0, or X'y = X'Xb
The method-of-moments estimator of β: b = (X'X)^(-1) X'y
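A minimal numpy sketch of the estimator on simulated data (solving the normal equations as a linear system rather than forming the inverse, which is numerically safer):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

# Sample moment condition X'(y - Xb) = 0, i.e. (X'X) b = X'y.
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)   # close to beta, but not exactly equal in a finite sample
```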

Least Squares Estimator
OLS: b = argmin_b SSR(b)
SSR(b) = (y - Xb)'(y - Xb) = Σi=1,…,n (yi - xi'b)^2
b = (X'X)^(-1) X'y

The Algebra of Least Squares
Normal equations: X'Xb = X'y
b = (X'X)^(-1) X'y

The Algebra of Least Squares
b = (X'X)^(-1) X'y = β + (X'X)^(-1) X'ε
b = (X'X/n)^(-1) (X'y/n) = Sxx^(-1) sxy, where Sxx is the sample average of xi xi' and sxy is the sample average of xi yi.
OLS residuals: e = y - Xb, with X'e = 0
s^2 = SSR/(n-K) = e'e/(n-K)
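A short numerical check of these identities; the sampling-error decomposition can be verified here only because the data are simulated, so the true β and ε are known:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
eps = rng.normal(size=n)
y = X @ beta + eps

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b                                    # OLS residuals

print(np.allclose(X.T @ e, 0))                   # X'e = 0 (orthogonality)
print(np.allclose(b - beta,                      # sampling error:
                  np.linalg.solve(X.T @ X, X.T @ eps)))  # b - beta = (X'X)^(-1) X'eps
s2 = e @ e / (n - K)                             # unbiased estimate of sigma^2
print(s2)
```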

The Algebra of Least Squares
Projection matrix P and "residual maker" M:
P = X(X'X)^(-1) X', M = In - P
PX = X, MX = 0
Py = Xb, My = y - Py = e, Me = e
SSR = e'e = e'Me
h = vector of diagonal elements of P, which can be used to check for influential observations: hi > K/n (the average leverage is K/n), with 0 ≤ hi ≤ 1.
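The projection algebra translates directly into code; a sketch on the simulated data (forming the full n-by-n matrix P is fine at this scale, though for large n one would compute the leverages without it):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection ("hat") matrix
M = np.eye(n) - P                      # residual maker

print(np.allclose(P @ X, X))           # PX = X
print(np.allclose(M @ X, 0))           # MX = 0
e = M @ y                              # My = e
print(np.allclose(M @ e, e))           # Me = e

h = np.diag(P)                         # leverage values, 0 <= h_i <= 1
print(np.where(h > K / n)[0])          # observations above the average leverage K/n
```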

The Algebra of Least Squares
Fitted values: ŷ = Xb = Py
Uncentered R^2: Ruc^2 = 1 - e'e/y'y = ŷ'ŷ/y'y
Centered R^2 (with a constant in the regression): R^2 = 1 - e'e/Σi (yi - ȳ)^2

Analysis of Variance
TSS = Σi (yi - ȳ)^2, ESS = Σi (ŷi - ȳ)^2, RSS = e'e (or SSR)
TSS = ESS + RSS (when the regression includes a constant)
Degrees of freedom: TSS: n-1; ESS: K-1; RSS: n-K

Analysis of Variance
R^2 = ESS/TSS = 1 - RSS/TSS
Adjusted R^2: R̄^2 = 1 - (RSS/(n-K))/(TSS/(n-1)) = 1 - ((n-1)/(n-K))(1 - R^2)
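These fit measures are easy to reproduce; a sketch continuing the simulated example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

RSS = e @ e
TSS = np.sum((y - y.mean()) ** 2)

R2 = 1 - RSS / TSS                                # centered R^2 (constant included)
R2_adj = 1 - (RSS / (n - K)) / (TSS / (n - 1))    # adjusted R^2
R2_uc = 1 - RSS / (y @ y)                         # uncentered R^2
print(R2, R2_adj, R2_uc)
```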

Discussions
Examples:
- Constant (mean) regression: yi = α + εi (i = 1,2,…,n)
- Simple regression: yi = α + βxi + εi (i = 1,2,…,n)

Partitioned Regression
y = Xβ + ε = X1β1 + X2β2 + ε
X = [X1 X2], β = [β1' β2']'
Normal equations X'Xb = X'y, or in partitioned form:
[X1'X1 X1'X2; X2'X1 X2'X2] [b1; b2] = [X1'y; X2'y]

Partitioned Regression
b1 = (X1'X1)^(-1) X1'y - (X1'X1)^(-1) (X1'X2) b2 = (X1'X1)^(-1) X1'(y - X2b2)
b2 = (X2'M1X2)^(-1) X2'M1y = (X2*'X2*)^(-1) X2*'y*
where M1 = I - X1(X1'X1)^(-1) X1'
X2* = M1X2 (the matrix of residuals from regressing each column of X2 on X1)
y* = M1y (the residuals from regressing y on X1)
b2 is the LS estimator from the regression of y* on X2*.

Partitioned Regression
Frisch-Waugh Theorem: In the model y = X1β1 + X2β2 + ε, the LS estimator b2 can be obtained by regressing the residuals from a regression of y on X1 alone on the set of residuals obtained when each column of X2 is regressed on X1.
If X1 and X2 are orthogonal, X1'X2 = 0, and the estimators simplify to
b1 = (X1'X1)^(-1) X1'y and b2 = (X2'X2)^(-1) X2'y.
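The theorem is easy to verify numerically; a sketch on simulated data (dimensions and coefficient values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, 0.5]) + X2 @ np.array([2.0, -1.0]) + rng.normal(size=n)

# Full regression: y on [X1 X2].
X = np.hstack([X1, X2])
b_full = np.linalg.solve(X.T @ X, X.T @ y)

# Frisch-Waugh route: partial X1 out of both y and X2,
# then regress residuals on residuals.
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
y_star, X2_star = M1 @ y, M1 @ X2
b2 = np.linalg.solve(X2_star.T @ X2_star, X2_star.T @ y_star)

print(np.allclose(b_full[2:], b2))   # True: both routes give the same b2
```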

Partitioned Regression
Applications of the Frisch-Waugh Theorem:
- Partial regression coefficients
- Trend regression: yt = α + βt + εt

Model Interpretation
Marginal effects (ceteris paribus interpretation): in y = Xβ + ε, βk = ∂E(y|X)/∂xk is the effect on y of a one-unit change in xk, holding the other regressors fixed.
Elasticity interpretation: in the log-log model ln(yi) = Σk βk ln(xik) + εi, βk is the elasticity of y with respect to xk.

Example
U.S. Gasoline Market, 1953-2004
- EXPG = total U.S. gasoline expenditure
- PG = price index for gasoline
- Y = per capita disposable income
- Pnc = price index for new cars
- Puc = price index for used cars
- Ppt = price index for public transportation
- Pd = aggregate price index for consumer durables
- Pn = aggregate price index for consumer nondurables
- Ps = aggregate price index for consumer services
- Pop = U.S. total population in thousands

Example
y = Xβ + ε
Levels: y = G; X = [1 PG Y], where G = (EXPG/PG)/POP
Logs: y = ln(G); X = [1 ln(PG) ln(Y)]
Elasticity interpretation: in the log specification, the coefficients on ln(PG) and ln(Y) are the price and income elasticities of gasoline demand.
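A hedged sketch of the log specification; the actual gasoline series are not reproduced here, so the code uses made-up stand-in series and coefficients, and only the mechanics carry over:

```python
import numpy as np

# Hypothetical stand-in for the gasoline data (the real series are not included).
rng = np.random.default_rng(0)
n = 52                                   # e.g., annual observations, 1953-2004
lnPG = rng.normal(size=n)
lnY = rng.normal(size=n)
lnG = 1.0 - 0.2 * lnPG + 1.1 * lnY + 0.1 * rng.normal(size=n)  # invented coefficients

# Log-log regression: the slope coefficients are elasticities.
X = np.column_stack([np.ones(n), lnPG, lnY])
b = np.linalg.solve(X.T @ X, X.T @ lnG)
print(b[1], b[2])   # price and income elasticities of gasoline demand
```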

Example (Continued)
y = X1β1 + X2β2 + ε
y = ln(G); X1 = [1 YEAR]; X2 = [ln(PG) ln(Y)], where G = (EXPG/PG)/POP
By the Frisch-Waugh Theorem, b2 can be obtained by first detrending ln(G), ln(PG), and ln(Y) (regressing each on [1 YEAR]) and then regressing the detrended ln(G) on the detrended regressors.