
1 Least Squares Asymptotics
–Convergence of Estimators: Review
–Least Squares Assumptions
–Least Squares Estimator
–Asymptotic Distribution
–Hypothesis Testing

2 Convergence of Estimators Let θ̂_n be an estimator of a parameter vector θ based on a sample of size n. {θ̂_n} is a sequence of random variables. θ̂_n is consistent for θ if plim_{n→∞} θ̂_n = θ.

3 Convergence of Estimators A consistent estimator θ̂_n is asymptotically normal if √n(θ̂_n − θ) →d N(0, Σ). Such an estimator is called √n-consistent. The asymptotic variance Σ is derived from the variance of the limiting distribution of √n(θ̂_n − θ).
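As a quick illustration (not from the deck; the DGP and parameter values below are assumptions), a minimal Python sketch of √n-consistency for the sample mean: the spread of θ̂_n shrinks like 1/√n, while √n(θ̂_n − θ) keeps a stable, approximately normal spread.

```python
# Monte Carlo sketch of sqrt(n)-consistency for the sample mean.
# theta, sigma, and the normal DGP are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, reps = 2.0, 3.0, 5_000

for n in (100, 1_000, 10_000):
    # reps independent samples of size n; theta_hat is the sample mean
    theta_hat = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (theta_hat - theta)   # should stay approx N(0, sigma^2)
    print(f"n={n:6d}  sd(theta_hat)={theta_hat.std():.4f}  sd(z)={z.std():.4f}")
```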

4 Convergence of Estimators Delta Method: if √n(θ̂_n − θ) →d N(0, Σ), then √n[a(θ̂_n) − a(θ)] →d N(0, A(θ) Σ A(θ)′), where a(·): R^K → R^r has continuous first derivatives and A(θ) is the r×K Jacobian A(θ) = ∂a(θ)/∂θ′.
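A hedged sketch of the delta method in practice: the transformation a(·), the point estimate, and Σ below are all hypothetical, and the Jacobian is computed numerically by forward differences.

```python
# Delta-method sketch: asy. variance of a(theta_hat) given that of theta_hat.
# a(.), theta_hat, and Sigma are illustrative assumptions.
import numpy as np

def a(theta):
    # hypothetical nonlinear transformation a: R^2 -> R^1
    return np.array([theta[0] / theta[1]])

def jacobian(f, theta, h=1e-6):
    # forward-difference Jacobian A(theta) = d a(theta) / d theta'
    f0 = f(theta)
    A = np.empty((f0.size, theta.size))
    for k in range(theta.size):
        step = np.zeros_like(theta)
        step[k] = h
        A[:, k] = (f(theta + step) - f0) / h
    return A

theta_hat = np.array([1.5, 0.8])          # point estimate (assumed)
Sigma = np.array([[0.20, 0.05],
                  [0.05, 0.10]])          # asy. var of sqrt(n)(theta_hat - theta)

A = jacobian(a, theta_hat)
print("asy. var of a(theta_hat):", A @ Sigma @ A.T)   # A(theta) Sigma A(theta)'
```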

5 Least Squares Assumptions A1: Linearity (Data Generating Process)
–y_i = x_i′β + ε_i (i = 1,2,…,n)
–The stochastic process that generated the finite sample {y_i, x_i} must be stationary and asymptotically independent:
lim_{n→∞} Σ_i x_i x_i′/n = E(x_i x_i′)
lim_{n→∞} Σ_i x_i y_i/n = E(x_i y_i)
–Finite 4th Moments for Regressors: E[(x_ik x_ij)²] exists and is finite for all k, j (= 1,2,…,K).
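A minimal simulation of this DGP, with an illustrative β and an optional heteroscedastic variant (useful for the later sketches); all values are assumptions:

```python
# Simulate the linear DGP y_i = x_i' beta + eps_i.
# beta, the regressor law, and the noise scale are illustrative assumptions.
import numpy as np

def simulate(n, beta, rng, hetero=False):
    K = beta.size
    X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # constant + draws
    # A4 (homoscedastic) by default; hetero=True makes Var(eps_i) depend on x_i (A4')
    sd = np.exp(0.5 * np.abs(X[:, 1])) if hetero else 1.0
    eps = rng.normal(scale=sd, size=n)
    return X @ beta + eps, X

rng = np.random.default_rng(42)
y, X = simulate(1_000, np.array([1.0, 2.0, -0.5]), rng)
```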

6 Least Squares Assumptions A2: Exogeneity
–E(ε_i|X) = 0 or E(ε_i|x_i) = 0
–E(ε_i|X) = 0 ⇒ E(ε_i|x_i) = 0 ⇒ E(x_ik ε_i) = 0.
A2′: Weak Exogeneity
–E(x_ik ε_i) = 0 for all i = 1,2,…,n; k = 1,2,…,K.
–It is possible that E(x_jk ε_i) ≠ 0 for some j, k with j ≠ i.
Define g_i = x_i ε_i = [x_i1, x_i2, …, x_iK]′ ε_i; then E(g_i) = E(x_i ε_i) = 0. What is Var(g_i) = E(g_i g_i′) = E(x_i x_i′ ε_i²)?

7 Least Squares Assumptions
–We assume
lim_{n→∞} Σ_i x_i ε_i/n = E(g_i)
lim_{n→∞} Σ_i x_i x_i′ ε_i²/n = E(g_i g_i′)
E(g_i) = 0 (by A2′)
Var(g_i) = E(g_i g_i′) = Ω is nonsingular
–The CLT states that (1/√n) Σ_i g_i = X′ε/√n →d N(0, Ω).
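A simulation of this CLT under homoscedastic noise; the DGP (constant plus one standard-normal regressor, σ = 1) is an illustrative assumption chosen so that Ω = σ²E(x_i x_i′) = I₂:

```python
# CLT sketch for g_i = x_i eps_i: X'eps/sqrt(n) should be approx N(0, Omega).
# The DGP (constant + standard-normal regressor, sigma = 1) is an assumption,
# chosen so that Omega = sigma^2 E(x_i x_i') = I_2.
import numpy as np

rng = np.random.default_rng(7)
n, reps = 2_000, 3_000
draws = np.empty((reps, 2))
for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    eps = rng.normal(size=n)              # homoscedastic disturbances
    draws[r] = X.T @ eps / np.sqrt(n)     # one realization of X'eps/sqrt(n)

print("mean (should be ~0):", draws.mean(axis=0).round(3))
print("cov (should be ~Omega = I):\n", np.cov(draws.T).round(3))
```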

8 Least Squares Assumptions A3: Full Rank
–E(x_i x_i′) = Σ_xx is nonsingular.
–Let Q = Σ_i x_i x_i′/n = X′X/n.
–lim_{n→∞} Q = E(x_i x_i′) = Σ_xx (by A1).
–Therefore Q is nonsingular (no multicollinearity in the limit).

9 Least Squares Assumptions A4: Spherical Disturbances
–Homoscedasticity and No Autocorrelation: Var(ε|X) = E(εε′|X) = σ² I_n
–Var(g_i) = E(x_i x_i′ ε_i²) = σ² E(x_i x_i′) = σ² Σ_xx
–CLT: X′ε/√n →d N(0, σ² Σ_xx)

10 Least Squares Assumptions A4′: Non-Spherical Disturbances
–Heteroscedasticity: Var(ε|X) = E(εε′|X) = V(X)
–Var(g_i) = E(x_i x_i′ ε_i²) = Ω = lim_{n→∞} X′VX/n
–CLT: X′ε/√n →d N(0, Ω)

11 Least Squares Estimator OLS
–b = (X′X)⁻¹X′y = β + (X′X)⁻¹X′ε = β + (X′X/n)⁻¹(X′ε/n)
–b − β = (X′X/n)⁻¹(X′ε/n)
–(b − β)(b − β)′ = (X′X/n)⁻¹(X′ε/n)(X′ε/n)′(X′X/n)⁻¹

12 Least Squares Estimator Consistency of b
–b − β = (X′X/n)⁻¹(X′ε/n)
–plim(b − β) = plim[(X′X/n)⁻¹(X′ε/n)]
= plim[(X′X/n)⁻¹] plim[X′ε/n]
= [plim(X′X/n)]⁻¹ plim[X′ε/n]
= Q⁻¹ · 0 = 0

13 Least Squares Estimator Consistency of s²
–s² = e′e/(n−K) = ε′Mε/(n−K) = [n/(n−K)](1/n)ε′Mε
–plim(s²) = plim[(1/n)ε′Mε]
= plim[ε′ε/n − ε′X(X′X)⁻¹X′ε/n]
= plim[ε′ε/n] − plim[ε′X/n] [plim(X′X/n)]⁻¹ plim[X′ε/n]
= plim[ε′ε/n] − 0′Q⁻¹0
= E(ε_i²) = σ²
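A numerical check of both consistency results (b → β and s² → σ²) under an assumed DGP:

```python
# Consistency sketch: b -> beta and s^2 -> sigma^2 as n grows.
# beta, sigma, and the regressor law are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
beta, sigma = np.array([1.0, 2.0, -0.5]), 1.5
K = beta.size

for n in (100, 10_000, 1_000_000):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
    y = X @ beta + rng.normal(scale=sigma, size=n)
    b = np.linalg.solve(X.T @ X, X.T @ y)   # b = (X'X)^{-1} X'y
    e = y - X @ b
    s2 = e @ e / (n - K)                    # s^2 = e'e/(n - K)
    print(f"n={n:8d}  b={np.round(b, 3)}  s2={s2:.4f}  (sigma^2={sigma**2:.4f})")
```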

14 Least Squares Estimator Asymptotic Distribution
–b − β = (X′X/n)⁻¹(X′ε/n)
–√n(b − β) = (X′X/n)⁻¹(X′ε/√n) →d Q⁻¹ N(0, Ω)
–This follows from the CLT for X′ε/√n under A4 or A4′.
–√n(b − β) →d N(0, Q⁻¹ΩQ⁻¹)

15 Least Squares Estimator Asymptotic Distribution
–How to estimate Ω? Under A4 (homoscedasticity), Ω = σ²Q
–√n(b − β) →d N(0, σ²Q⁻¹)
–Approximately, b ~a N(β, (σ²/n)Q⁻¹)
–A consistent estimator of (σ²/n)Q⁻¹ is (s²/n)Q⁻¹ = s²(X′X)⁻¹
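The classical estimator s²(X′X)⁻¹ in numpy, continuing the same assumed homoscedastic DGP:

```python
# Classical (homoscedastic) covariance estimator s^2 (X'X)^{-1} for b.
# The DGP values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, beta, sigma = 500, np.array([1.0, 2.0, -0.5]), 1.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, beta.size - 1))])
y = X @ beta + rng.normal(scale=sigma, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
s2 = e @ e / (n - beta.size)
V = s2 * XtX_inv                      # Est(Asy var(b)) under A4
print("b =", b.round(3), " std. errors:", np.sqrt(np.diag(V)).round(4))
```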

16 Least Squares Estimator Asymptotic Distribution
–How to estimate Ω? Under A4′ (heteroscedasticity),
–√n(b − β) →d N(0, Q⁻¹ΩQ⁻¹), where Ω = lim_{n→∞} X′E(εε′)X/n = lim_{n→∞} X′VX/n
–Asy var(b) = (1/n)Q⁻¹ΩQ⁻¹
–White estimator of Ω: Ω̂ = Σ_i x_i x_i′ e_i²/n

17 Least Squares Estimator Asymptotic Distribution
–White estimator of Asy var(b):
Est(Asy var(b)) = (1/n)Q⁻¹[Σ_i x_i x_i′ e_i²/n]Q⁻¹ = (X′X)⁻¹[Σ_i x_i x_i′ e_i²](X′X)⁻¹
–The White estimator is consistent: a heteroscedasticity-consistent variance-covariance matrix.
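A sketch of the White (HC0) sandwich next to the classical estimator, under an assumed heteroscedastic DGP; the two sets of standard errors diverge where A4 fails:

```python
# White (HC0) estimator: (X'X)^{-1} [sum_i x_i x_i' e_i^2] (X'X)^{-1}.
# The heteroscedastic DGP below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
n, beta = 1_000, np.array([1.0, 2.0, -0.5])
X = np.column_stack([np.ones(n), rng.normal(size=(n, beta.size - 1))])
sd = np.exp(0.5 * np.abs(X[:, 1]))          # Var(eps_i) depends on x_i (A4')
y = X @ beta + rng.normal(scale=sd, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
meat = (X * (e ** 2)[:, None]).T @ X        # sum_i x_i x_i' e_i^2
V_white = XtX_inv @ meat @ XtX_inv
s2 = e @ e / (n - beta.size)

print("White s.e.:    ", np.sqrt(np.diag(V_white)).round(4))
print("classical s.e.:", np.sqrt(np.diag(s2 * XtX_inv)).round(4))
```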

18 Least Squares Estimator Asymptotic Results
–How "fast" does b → β? Under A4, Asy var(b) = (σ²/n)Q⁻¹ is O(1/n)
–√n(b − β) has variance of O(1)
–Convergence is at the rate 1/√n
–The asymptotic distribution of b does not depend on a normality assumption for ε.
–The Slutsky theorem and the delta method apply to functions of b.

19 Hypothesis Testing

20 Under the null hypothesis H_0: Rβ = q, where R is a J×K matrix of full row rank and J is the number of restrictions (the dimension of q),
W = n(Rb − q)′{R[Est(Asy var(b))]R′}⁻¹(Rb − q)
= n(Rb − q)′{R[ns²(X′X)⁻¹]R′}⁻¹(Rb − q)
= (Rb − q)′{R(X′X)⁻¹R′}⁻¹(Rb − q)/s²
= JF = (SSR_r − SSR_ur)/s² →d χ²(J)
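A sketch of this Wald statistic for a hypothetical two-restriction null (true under the assumed DGP, so W should be moderate and the p-value unremarkable):

```python
# Wald test of H0: R beta = q via W = (Rb - q)'[R (X'X)^{-1} R']^{-1} (Rb - q) / s^2.
# The DGP and the restrictions tested are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, beta = 500, np.array([1.0, 2.0, -0.5])
K = beta.size
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ beta + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
s2 = e @ e / (n - K)

R = np.array([[0.0, 1.0, 0.0],     # restriction: beta_1 = 2.0
              [0.0, 0.0, 1.0]])    # restriction: beta_2 = -0.5
q = np.array([2.0, -0.5])          # true under the assumed DGP
J = R.shape[0]

diff = R @ b - q
W = diff @ np.linalg.solve(R @ XtX_inv @ R.T, diff) / s2
print(f"W = {W:.3f}, p-value = {stats.chi2.sf(W, df=J):.4f}")
```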

