Presentation transcript:

1 Assumptions MLR1–6 and the results that follow from them
I made a big diagram describing some assumptions (MLR1–6) that are used in linear regression. In my diagram, there are categories (in rectangles with dotted lines) of mathematical facts that follow from different subsets of MLR1–6. References in brackets are to Hayashi (2000).

[[diagram]]

The assumptions:

MLR1: Y = Xβ + U
MLR2: {Xᵢ, Yᵢ} ~ iid
MLR3: X is an n×K matrix with rank K
MLR3*: Q = plim(X′X/n) has rank K, i.e. is positive definite (equivalent to MLR3 + Q is finite)
MLR4: ∀i, E[uᵢ | X] = 0
MLR4′: E[uᵢ | xᵢ] = 0 (MLR{2 4′} together imply MLR4)
MLR5: writing E[UU′ | X] = σ²_U Ω, we have Ω = I
MLR6: U | X ~ N(0, σ²_U I)

The categories of facts:

Algebra. Given MLR{1 3}, min over β of U′U has a single solution b = (X′X)⁻¹X′Y = β + (X′X)⁻¹(X′U).

Estimation of b under classical assumptions.
- Given MLR{1 3 4}: E[b | X] = β [1.1 27]
- Given MLR{1 3 4 5}: Var(b | X) = σ²_U (X′X)⁻¹ [1.1 27]
- Given MLR{1 2 3* 4}: b →p β

Inference (finite sample).
- Given MLR{1 3 4 5 6}: b | X ~ N(β, σ²_U (X′X)⁻¹)
- Hence, given MLR{1 3 4 5 6}, for k = 1, …, K: (b_k − β_k)/se(b_k) ~ t_{n−K}, where se(b_k) = √(s² [(X′X)⁻¹]_kk)
- Let H₀: Rβ = r (a system of J linear equations), and let F = (Rb − r)′[R s²(X′X)⁻¹R′]⁻¹(Rb − r)/J. Hence, given MLR{1 3 4 5 6}, and under H₀, for J restrictions: F ~ F_{J,n−K} [1.4 40]

Inference without MLR6 (asymptotics).
- Given MLR{1 2 3* 4 5}: √n(b − β) →d N(0, σ²_U Q⁻¹)
- Hence, given MLR{1 2 3* 4 5}, for k = 1, …, K: (b_k − β_k)/se(b_k) →d N(0, 1), where se(b_k) = √(s² [(X′X)⁻¹]_kk)
- Hence, given MLR{1 2 3* 4 5}, and under H₀, for J restrictions: F →d F_{J,n−K}

Estimation of Var(b) without MLR5 (heteroskedasticity and/or autocorrelation) (White 1980). Writing E[UU′ | X] = σ²_U Ω, with Ω no longer restricted to I:
- Given MLR{1 3 4}: Var(b | X) = σ²_U (X′X)⁻¹ X′ΩX (X′X)⁻¹
- Given MLR{1 2 3* 4}: √n(b − β) →d N(0, σ²_U Q⁻¹Q*Q⁻¹), where Q* = plim(X′ΩX/n)

A couple of comments about the diagram are in order.
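The OLS algebra and the classical inference formulas above can be checked numerically. Below is a minimal sketch (the simulated DGP, sample size, and variable names are my own assumptions, not from the diagram) that computes b, s², and the t-ratios on data satisfying MLR1–6:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a model satisfying MLR1-6 (illustrative DGP, chosen by me):
n, K = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
sigma_u = 1.5
U = rng.normal(scale=sigma_u, size=n)   # MLR6: U | X ~ N(0, sigma_u^2 I)
Y = X @ beta + U                        # MLR1: Y = X beta + U

# Algebra: b = (X'X)^{-1} X'Y
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y

# Estimation of sigma_u^2: s^2 = Uhat'Uhat / (n - K), Uhat the residuals
Uhat = Y - X @ b
s2 = Uhat @ Uhat / (n - K)

# Finite-sample inference: se(b_k) = sqrt(s^2 [(X'X)^{-1}]_kk)
se = np.sqrt(s2 * np.diag(XtX_inv))
t = (b - beta) / se                     # each ratio ~ t_{n-K} under MLR{1 3 4 5 6}
```

With n = 500 the point estimates land close to β and s² close to σ²_U, as the unbiasedness and consistency results predict.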
U, Y are n×1 vectors of random variables. X may contain numbers or random variables. β is a K×1 vector of numbers. We measure: realisations of Y, (realisations of) X. We do not measure: β, U. We have one equation and two unknowns: we need additional assumptions on U. We make a set of assumptions (MLR1–6) about the joint distribution f(U, X). These assumptions imply some theorems relating the distribution of b and the distribution of β.

Note the difference between MLR4 and MLR4′. The point of using the stronger MLR4 is that, in some cases, provided MLR4, MLR2 is not needed. To prove unbiasedness, we don't need MLR2. For finite-sample inference, we also don't need MLR2. But whenever the law of large numbers is involved, we do need MLR2 as a standalone condition. Note also that, since MLR2 and MLR4′ together imply MLR4, clearly MLR2 and MLR4 are never both needed. But I follow standard practice (e.g. Hayashi) in including them both, for example in the asymptotic inference theorems.

Note that since X′X is a symmetric square matrix, Q has full rank K iff Q is positive definite; these are equivalent statements. Furthermore, if X has full rank K, then X′X has full rank K, so MLR3* is equivalent to MLR3 plus the fact that Q is finite (i.e. actually converges) (see Wooldridge 2010, p. 57). Note that, under MLR2, Q could alternatively be written E[xᵢxᵢ′]. Note that whenever I write a plim and set it equal to some matrix, I am assuming the matrix is finite. Some treatments will explicitly say Q is finite, but I omit this.

In the diagram, I stick to the brute mathematics, which is entirely independent of its (causal) interpretation.¹

Estimation of σ²_U.
- Let s² = Û′Û/(n − K), where Û = Y − Xb is the vector of residuals
- Given MLR{1 3 4 5}: E[s² | X] = σ²_U [1.2 30]
- Given MLR{1 2 3* 4 5}: s² →p σ²_U
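The box on estimating Var(b) without MLR5 uses the sandwich form σ²_U(X′X)⁻¹X′ΩX(X′X)⁻¹; White's (1980) estimator makes this feasible by replacing σ²_U Ω with diag(ûᵢ²). A sketch under an assumed heteroskedastic DGP (the DGP and all names are my own, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# DGP where MLR{1 2 3* 4} hold but MLR5 fails: Var(u_i | x_i) = x_i^4.
n = 20000
x = rng.uniform(1.0, 3.0, size=n)
X = np.column_stack([np.ones(n), x])
beta = np.array([0.5, 1.0])
U = rng.normal(size=n) * x**2           # conditional variance grows with x
Y = X @ beta + U

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y
Uhat = Y - X @ b

# Classical estimator of Var(b | X): valid only under MLR5.
s2 = Uhat @ Uhat / (n - X.shape[1])
V_classical = s2 * XtX_inv

# White sandwich: (X'X)^{-1} X' diag(uhat_i^2) X (X'X)^{-1}.
meat = (X * (Uhat**2)[:, None]).T @ X
V_white = XtX_inv @ meat @ XtX_inv

se_classical = np.sqrt(np.diag(V_classical))
se_white = np.sqrt(np.diag(V_white))
```

In this DGP the classical standard error for the slope understates the sampling variability, while the sandwich estimator picks up the extra variance coming from the heteroskedasticity.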

2 √n(b_IV − β) →d N(0, σ²_U Q_ZX⁻¹ Q_ZZ Q_XZ⁻¹)
The instrument matrix Z (n×K, just-identified case) is obtained by taking X and replacing each column for which we have an instrument by the values of that instrument. The IV estimator b_IV = (Z′X)⁻¹Z′Y can be expressed as b_IV = β + (Z′X)⁻¹Z′U.

The assumptions:

MLR1: Y = Xβ + U
IV2: {Xᵢ, Yᵢ, Zᵢ} ~ iid
MLR3*: Q = plim(X′X/n) has rank K, i.e. is positive definite
IV3: Q_ZZ = plim n⁻¹(Z′Z) has rank K
IV4: ∀i, E[uᵢ | Z] = 0
IV4′: E[uᵢ | zᵢ] = 0 (IV{2 4′} together imply IV4)
IV5 (relevance): Q_ZX = plim n⁻¹(Z′X) has rank K
MLR5: E[UU′ | X] = σ²_U I

Under MLR{1 3* 5} and IV2–5: √n(b_IV − β) →d N(0, σ²_U Q_ZX⁻¹ Q_ZZ Q_XZ⁻¹)

In the over-identified case, the 2SLS proposal is to use as instruments X̂ = Z(Z′Z)⁻¹Z′X (the predicted value of X from a first-stage regression X = ZΘ + E). The 2SLS estimator is b_2SLS = (X̂′X)⁻¹X̂′Y = (X′Z(Z′Z)⁻¹Z′X)⁻¹ X′Z(Z′Z)⁻¹Z′Y.

Under MLR{1 3* 5} and IV2–5: √n(b_2SLS − β) →d N(0, σ²_U (Q_XZ Q_ZZ⁻¹ Q_ZX)⁻¹)
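The IV and 2SLS formulas can be illustrated on simulated data with an endogenous regressor (the DGP and all names below are my own invention, not from the slides). In the just-identified case the two estimators coincide algebraically, since (X′Z(Z′Z)⁻¹Z′X)⁻¹X′Z(Z′Z)⁻¹Z′Y reduces to (Z′X)⁻¹Z′Y when Z′X is square and invertible:

```python
import numpy as np

rng = np.random.default_rng(2)

# Endogenous regressor: E[u | x] != 0, so OLS is inconsistent, but we have an
# instrument z with E[u | z] = 0 (IV4) and Cov(z, x) != 0 (IV5, relevance).
n = 20000
z = rng.normal(size=n)
v = rng.normal(size=n)
u = rng.normal(size=n) + v              # u correlated with v, hence with x
x = 0.8 * z + v
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])    # replace the endogenous column by z
beta = np.array([1.0, 2.0])
Y = X @ beta + u

# Just-identified IV: b_IV = (Z'X)^{-1} Z'Y
b_iv = np.linalg.solve(Z.T @ X, Z.T @ Y)

# 2SLS: Xhat = Z (Z'Z)^{-1} Z'X, then b_2SLS = (Xhat'X)^{-1} Xhat'Y
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
b_2sls = np.linalg.solve(Xhat.T @ X, Xhat.T @ Y)

# OLS for comparison: inconsistent here, since plim of the OLS slope is
# beta_1 + Cov(x, u)/Var(x) = 2 + 1/1.64, roughly 2.61 in this DGP.
b_ols = np.linalg.solve(X.T @ X, X.T @ Y)
```

Running this, b_iv and b_2sls agree up to floating point and sit near β, while the OLS slope is pulled well away from 2 by the endogeneity.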

3 X π‘Œ ≔ 𝑋𝛽+π‘ˆ U

