Least Squares Asymptotics: Convergence of Estimators (Review); Least Squares Assumptions; Least Squares Estimator; Asymptotic Distribution; Hypothesis Testing

Convergence of Estimators. Let θ̂ₙ be an estimator of a parameter vector θ based on a sample of size n. {θ̂ₙ} is a sequence of random variables. θ̂ₙ is consistent for θ if plim θ̂ₙ = θ, i.e., θ̂ₙ →p θ as n → ∞.

Convergence of Estimators. A consistent estimator θ̂ₙ is asymptotically normal if √n(θ̂ₙ − θ) →d N(0, Σ). Such an estimator is called √n-consistent. The asymptotic variance Σ is derived from the variance of the limiting distribution of √n(θ̂ₙ − θ).
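
A minimal simulation sketch of these two properties, assuming NumPy (the Exponential(1) population, the replication counts, and all names are our illustrative choices, not from the slides):

```python
import numpy as np

# The sample mean of an Exponential(1) sample is consistent for theta = 1,
# and sqrt(n)*(theta_hat - theta) keeps a stable variance near Var(X_i) = 1.
rng = np.random.default_rng(0)
for n in (50, 500, 5_000):
    draws = rng.exponential(scale=1.0, size=(2_000, n))
    theta_hat = draws.mean(axis=1)             # 2,000 replications of the estimator
    scaled = np.sqrt(n) * (theta_hat - 1.0)    # sqrt(n)-scaled estimation error
    print(n, round(theta_hat.var(), 5), round(scaled.var(), 3))
# theta_hat's variance shrinks like 1/n, while the scaled error's stays near 1.
```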

Convergence of Estimators. Delta Method: if √n(θ̂ₙ − θ) →d N(0, Σ), then √n[a(θ̂ₙ) − a(θ)] →d N(0, A(θ)ΣA(θ)′), where a(·): R^K → R^r has continuous first derivatives and A(θ) = ∂a(θ)/∂θ′ is the r×K Jacobian of a(·) evaluated at θ.
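
A quick numerical check of the delta method, again assuming NumPy (the transform a(θ) = θ² and the N(2, 1) population are our own example): here K = r = 1, A(θ) = 2θ, and with θ = 2 and Σ = 1 the predicted asymptotic variance is A(θ)ΣA(θ)′ = 16.

```python
import numpy as np

# Monte Carlo check: Var[sqrt(n)(a(theta_hat) - a(theta))] for a(t) = t**2
# should approach A(theta)*Sigma*A(theta) = (2*2)**2 * 1 = 16.
rng = np.random.default_rng(1)
n, reps, theta = 2_000, 5_000, 2.0
theta_hat = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
scaled = np.sqrt(n) * (theta_hat**2 - theta**2)
print(round(scaled.var(), 2))   # close to 16
```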

Least Squares Assumptions. A1: Linearity (data generating process):
– yᵢ = xᵢ′β + εᵢ (i = 1, 2, …, n).
– The stochastic process that generated the finite sample {yᵢ, xᵢ} must be stationary and asymptotically independent, so that lim(n→∞) Σᵢ xᵢxᵢ′/n = E(xᵢxᵢ′) and lim(n→∞) Σᵢ xᵢyᵢ/n = E(xᵢyᵢ).
– Finite fourth moments for regressors: E[(xᵢₖxᵢⱼ)²] exists and is finite for all k, j = 1, 2, …, K.

Least Squares Assumptions. A2: Exogeneity:
– E(εᵢ|X) = 0, or E(εᵢ|xᵢ) = 0.
– E(εᵢ|X) = 0 ⇒ E(εᵢ|xᵢ) = 0 ⇒ E(xᵢₖεᵢ) = 0.
A2′: Weak Exogeneity:
– E(xᵢₖεᵢ) = 0 for all i = 1, 2, …, n; k = 1, 2, …, K.
– It is possible that E(xⱼₖεᵢ) ≠ 0 for some j, k with j ≠ i.
Define gᵢ = xᵢεᵢ, the K×1 vector with elements xᵢₖεᵢ; then E(gᵢ) = E(xᵢεᵢ) = 0. What is Var(gᵢ) = E(gᵢgᵢ′) = E(xᵢxᵢ′εᵢ²)?

Least Squares Assumptions.
– We assume lim(n→∞) Σᵢ xᵢεᵢ/n = E(gᵢ) and lim(n→∞) Σᵢ xᵢxᵢ′εᵢ²/n = E(gᵢgᵢ′), where E(gᵢ) = 0 (by A2′) and Var(gᵢ) = E(gᵢgᵢ′) = Ω is nonsingular.
– The CLT states that √n ḡ →d N(0, Ω), where ḡ = Σᵢ gᵢ/n = X′ε/n.
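
A simulation sketch of this CLT, assuming NumPy (the design, a constant plus one standard-normal regressor with independent N(0, 1) errors, is our own toy choice, for which Ω = E(xᵢxᵢ′εᵢ²) = I₂):

```python
import numpy as np

# sqrt(n)*gbar, with gbar = sum_i x_i*eps_i / n, should be roughly N(0, Omega).
# For this design Omega = E[x_i x_i' eps_i^2] = I_2.
rng = np.random.default_rng(2)
n, reps = 2_000, 5_000
draws = np.empty((reps, 2))
for r in range(reps):
    x = np.column_stack([np.ones(n), rng.standard_normal(n)])
    eps = rng.standard_normal(n)
    draws[r] = np.sqrt(n) * (x * eps[:, None]).mean(axis=0)   # sqrt(n) * gbar
print(np.cov(draws.T).round(2))   # close to the 2x2 identity
```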

Least Squares Assumptions. A3: Full Rank:
– E(xᵢxᵢ′) = Σxx is nonsingular.
– Let Q = Σᵢ xᵢxᵢ′/n = X′X/n; by A1, lim(n→∞) Q = E(xᵢxᵢ′) = Σxx.
– Therefore Q is nonsingular in the limit (no multicollinearity in the limit); in the asymptotic formulas below, Q⁻¹ stands for Σxx⁻¹.
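
A small check of A3 on simulated data, assuming NumPy (the two-regressor design is our own choice, with Σxx equal to the 2×2 identity):

```python
import numpy as np

# Q = X'X/n should approach a nonsingular limit (here Sigma_xx = I_2).
rng = np.random.default_rng(3)
n = 100_000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
Q = X.T @ X / n
print(Q.round(3))              # near the 2x2 identity
print(np.linalg.cond(Q))       # a small condition number: no multicollinearity
```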

Least Squares Assumptions. A4: Spherical Disturbances (homoscedasticity and no autocorrelation):
– Var(ε|X) = E(εε′|X) = σ²Iₙ.
– Var(gᵢ) = E(xᵢxᵢ′εᵢ²) = σ²E(xᵢxᵢ′) = σ²Σxx, so Ω = σ²Σxx.
– CLT: √n ḡ →d N(0, σ²Σxx).

Least Squares Assumptions. A4′: Non-Spherical Disturbances (heteroscedasticity):
– Var(ε|X) = E(εε′|X) = V(X), a diagonal matrix with entries σᵢ² = Var(εᵢ|xᵢ).
– Var(gᵢ) = E(xᵢxᵢ′εᵢ²) = Ω, with sample analogue Σᵢ σᵢ²xᵢxᵢ′/n = X′VX/n → Ω.
– CLT: √n ḡ →d N(0, Ω).

Least Squares Estimator. OLS:
– b = (X′X)⁻¹X′y = β + (X′X)⁻¹X′ε = β + (X′X/n)⁻¹(X′ε/n).
– b − β = (X′X/n)⁻¹(X′ε/n).
– (b − β)(b − β)′ = (X′X/n)⁻¹(X′ε/n)(X′ε/n)′(X′X/n)⁻¹.
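
The algebra above translates directly into code; a sketch assuming NumPy, with a toy data generating process of our own (β = (1, 0.5)′ and standard normal errors):

```python
import numpy as np

# OLS via the normal equations: b = (X'X)^{-1} X'y.
rng = np.random.default_rng(4)
n, beta = 10_000, np.array([1.0, 0.5])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
eps = rng.standard_normal(n)
y = X @ beta + eps
b = np.linalg.solve(X.T @ X, X.T @ y)   # solve() avoids forming the inverse
print(b)                                # close to beta = (1.0, 0.5)
print(b - beta)                         # sampling error (X'X/n)^{-1}(X'eps/n)
```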

Least Squares Estimator. Consistency of b:
– b − β = (X′X/n)⁻¹(X′ε/n).
– plim(b − β) = plim[(X′X/n)⁻¹(X′ε/n)] = plim[(X′X/n)⁻¹]·plim(X′ε/n) = [plim(X′X/n)]⁻¹·plim(X′ε/n) = Q⁻¹·0 = 0.

Least Squares Estimator. Consistency of s²:
– s² = e′e/(n−K) = ε′Mε/(n−K) = [n/(n−K)](1/n)ε′Mε, where M = Iₙ − X(X′X)⁻¹X′.
– plim s² = plim[(1/n)ε′Mε] = plim[ε′ε/n − ε′X(X′X)⁻¹X′ε/n] = plim(ε′ε/n) − plim(ε′X/n)·[plim(X′X/n)]⁻¹·plim(X′ε/n) = plim(ε′ε/n) − 0′·Q⁻¹·0 = E(εᵢ²) = σ².

Least Squares Estimator. Asymptotic Distribution:
– b − β = (X′X/n)⁻¹(X′ε/n).
– √n(b − β) = (X′X/n)⁻¹(X′ε/√n) →d Q⁻¹·N(0, Ω), with Ω as given by A4 or A4′.
– Therefore √n(b − β) →d N(0, Q⁻¹ΩQ⁻¹).

Least Squares Estimator. Asymptotic Distribution: how to estimate Ω?
– If A4 (homoscedasticity), Ω = σ²Q, so √n(b − β) →d N(0, σ²Q⁻¹).
– Approximately, b ~ N(β, (σ²/n)Q⁻¹).
– A consistent estimator of (σ²/n)Q⁻¹ is (s²/n)(X′X/n)⁻¹ = s²(X′X)⁻¹.
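
A sketch of the homoscedastic covariance estimator, assuming NumPy and reusing our own toy DGP from the OLS sketch above:

```python
import numpy as np

# Classical estimator of Asy Var(b): s^2 (X'X)^{-1} under A4.
rng = np.random.default_rng(4)
n, beta = 10_000, np.array([1.0, 0.5])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ beta + rng.standard_normal(n)
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b                           # OLS residuals
K = X.shape[1]
s2 = e @ e / (n - K)                    # consistent for sigma^2 = 1
cov_b = s2 * np.linalg.inv(X.T @ X)     # Est(Asy Var(b)) = s^2 (X'X)^{-1}
print(round(s2, 3), np.sqrt(np.diag(cov_b)))   # s^2 and standard errors
```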

Least Squares Estimator. Asymptotic Distribution: how to estimate Ω?
– If A4′ (heteroscedasticity), √n(b − β) →d N(0, Q⁻¹ΩQ⁻¹), where Ω = lim(n→∞) X′E(εε′|X)X/n = lim(n→∞) X′VX/n.
– Asy Var(b) = (1/n)Q⁻¹ΩQ⁻¹.
– White estimator of Ω: Ω̂ = Σᵢ xᵢxᵢ′eᵢ²/n, where the eᵢ are OLS residuals.

Least Squares Estimator. Asymptotic Distribution: White estimator of Asy Var(b):
– Est(Asy Var(b)) = (1/n)(X′X/n)⁻¹[Σᵢ xᵢxᵢ′eᵢ²/n](X′X/n)⁻¹ = (X′X)⁻¹[Σᵢ xᵢxᵢ′eᵢ²](X′X)⁻¹.
– The White estimator is consistent under heteroscedasticity: it is the heteroscedasticity-consistent variance-covariance matrix.
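
A sketch of the White estimator under an artificially heteroscedastic DGP of our own (error standard deviation |xᵢ₂|, so A4 fails but A4′ holds), assuming NumPy:

```python
import numpy as np

# White (HC0) estimator: (X'X)^{-1} [sum_i x_i x_i' e_i^2] (X'X)^{-1}.
rng = np.random.default_rng(5)
n, beta = 10_000, np.array([1.0, 0.5])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ beta + np.abs(X[:, 1]) * rng.standard_normal(n)   # heteroscedastic errors
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * (e**2)[:, None]).T @ X      # sum_i x_i x_i' e_i^2
cov_white = XtX_inv @ meat @ XtX_inv    # heteroscedasticity-consistent
print(np.sqrt(np.diag(cov_white)))      # robust standard errors
```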

Least Squares Estimator. Asymptotic Results:
– How “fast” does b → β? Under A4, Asy Var(b) = (σ²/n)Q⁻¹ is O(1/n), so √n(b − β) has variance of O(1): convergence is at the rate 1/√n.
– The asymptotic distribution of b does not depend on a normality assumption for ε.
– The Slutsky theorem and the delta method apply to functions of b.

Hypothesis Testing

Under the null hypothesis H₀: Rβ = q, where R is a J×K matrix of full row rank and J is the number of restrictions (the dimension of q), the Wald statistic is
W = n(Rb − q)′{R[Est(Asy Var(√n(b − β)))]R′}⁻¹(Rb − q)
= n(Rb − q)′{R[ns²(X′X)⁻¹]R′}⁻¹(Rb − q)
= (Rb − q)′{R(X′X)⁻¹R′}⁻¹(Rb − q)/s²
= JF = (SSRᵣ − SSRᵤᵣ)/s² ~ χ²(J) asymptotically.
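
A sketch of this Wald test for a single restriction, assuming NumPy and SciPy (the model, the restriction β₂ = 0.5, and all names are our own example):

```python
import numpy as np
from scipy import stats

# Wald test of H0: R beta = q with one restriction (J = 1): beta_2 = 0.5.
rng = np.random.default_rng(6)
n = 10_000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
s2 = e @ e / (n - X.shape[1])
R, q = np.array([[0.0, 1.0]]), np.array([0.5])
V = s2 * (R @ np.linalg.inv(X.T @ X) @ R.T)   # Est Var(Rb) under A4
diff = R @ b - q
W = diff @ np.linalg.solve(V, diff)           # ~ chi2(J) under H0, asymptotically
print(round(W, 3), stats.chi2.sf(W, df=R.shape[0]))   # statistic and p-value
```

Under H₀ the statistic should be small; a large W (small p-value) rejects the restriction.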