
I made a big diagram describing some assumptions (MLR1–6) that are used in linear regression. In my diagram, there are categories (in rectangles with dotted lines) of mathematical facts that follow from different subsets of MLR1–6. References in brackets are to Hayashi (2000).

[[diagram]]

In text form, the diagram contains the following.

The assumptions:

- MLR1: Y = Xβ + U
- MLR2: {Xᵢ, Yᵢ} are iid
- MLR3: X is an n×K matrix with rank K
- MLR3*: Q = plim_{n→∞} X′X/n has rank K, i.e. is positive definite (equivalently, MLR3 plus "Q is finite")
- MLR4: ∀i, E[uᵢ | X] = 0
- MLR4′: E[uᵢ | Xᵢ] = 0 (MLR2 and MLR4′ together imply MLR4)
- MLR5: where E[UU′ | X] = σ²_U Ω, we have Ω = I
- MLR6: U | X ~ N[0, σ²_U I]

Estimation of b under classical assumptions (algebra):

- Given MLR{1 3}, min_β U′U has a single solution, b = (X′X)⁻¹X′Y = β + (X′X)⁻¹X′U.
- Given MLR{1 3 4}, E[b | X] = β. [1.1, p. 27]
- Given MLR{1 3 4 5}, Var(b | X) = σ²_U (X′X)⁻¹. [1.1, p. 27]
- Given MLR{1 2 3* 4}, b →ᵖ β. [2.1, p. 113]

Estimation of σ²_U: let s² = Û′Û/(n − K), where Û = Y − Xb is the vector of residuals.

- Given MLR{1 3 4 5}, E[s² | X] = σ²_U. [1.2, p. 30]
- Given MLR{1 2 3* 4 5}, s² →ᵖ σ²_U. [2.2, p. 115]

Inference (finite sample):

- Given MLR{1 3 4 5 6}, b | X ~ N(β, σ²_U (X′X)⁻¹). [1.4.2, p. 35]
- Given MLR{1 3 4 5 6}, for k = 1, …, K, (b_k − β_k)/se(b_k) ~ t_{n−K}, where se(b_k) = √(s² [(X′X)⁻¹]_kk). [1.4.5, p. 36]
- Let H₀: Rβ = r be a system of J linear restrictions, and let F = (Rb − r)′ [R s²(X′X)⁻¹ R′]⁻¹ (Rb − r) / J. [1.4.9, p. 41] Given MLR{1 3 4 5 6} and under H₀, F ~ F_{J,n−K}. [1.4, p. 40]

Inference without MLR6 (asymptotics):

- Given MLR{1 2 3* 4 5}, √n(b − β) →ᵈ N(0, σ²_U Q⁻¹). [2.5, p. 129]
- Given MLR{1 2 3* 4 5}, for k = 1, …, K, (b_k − β_k)/se(b_k) →ᵈ N(0, 1), with se(b_k) as above.
- Given MLR{1 2 3* 4 5} and under H₀, for J restrictions, F →ᵈ F_{J,n−K}.

Estimation of Var(b) without MLR5, i.e. under heteroskedasticity and/or autocorrelation (White 1980), where E[UU′ | X] = σ²_U Ω:

- Given MLR{1 3 4}, Var(b | X) = σ²_U (X′X)⁻¹ X′ΩX (X′X)⁻¹.
- Given MLR{1 2 3* 4}, √n(b − β) →ᵈ N(0, σ²_U Q⁻¹ Q* Q⁻¹), where Q* = plim X′ΩX/n.

A couple of comments about the diagram are in order.

U and Y are n×1 vectors of random variables. X may contain numbers or random variables. β is a K×1 vector of numbers. We measure realisations of Y and (realisations of) X. We do not measure β or U. We have one equation and two unknowns: we need additional assumptions on U. We make a set of assumptions (MLR1–6) about the joint distribution f(U, X). These assumptions imply some theorems relating the distribution of b and the distribution of β.

Note the difference between MLR4 and MLR4′. The point of using the stronger MLR4 is that it sometimes lets us drop MLR2: we don't need MLR2 to prove unbiasedness, nor for finite-sample inference. But whenever the law of large numbers is involved, we do need MLR2 as a standalone condition. Note also that, since MLR2 and MLR4′ together imply MLR4, MLR2 and MLR4 are never both needed; but I follow standard practice (e.g. Hayashi) in including both, for example in the asymptotic inference theorems.

Note that since X′X is symmetric and positive semidefinite, Q has full rank K iff Q is positive definite; these are equivalent statements. Furthermore, if X has full rank K, then X′X has full rank K, so MLR3* is equivalent to MLR3 plus the fact that Q is finite, i.e. that the plim actually converges (see Wooldridge 2010, p. 57). Note that Q could alternatively be written E[X′X]. Finally, whenever I write a plim and set it equal to some matrix, I am assuming the matrix is finite; some treatments explicitly say that Q is finite, but I omit this.
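To make the formulas concrete, here is a minimal numpy sketch that generates data satisfying MLR1–6 by construction and computes b, s², the classical standard errors, and the F statistic for a (true) H₀: Rβ = r. The simulated design and all names are mine, not from the diagram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data satisfying MLR1-6 by construction:
# n observations, K regressors (including a constant).
n, K = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
sigma_U = 1.5
U = rng.normal(scale=sigma_U, size=n)   # U | X ~ N(0, sigma_U^2 I)  (MLR4-6)
Y = X @ beta + U                        # MLR1

# b = (X'X)^{-1} X'Y
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y

# s^2 = Uhat'Uhat / (n - K), where Uhat is the residual vector
Uhat = Y - X @ b
s2 = Uhat @ Uhat / (n - K)

# se(b_k) = sqrt(s^2 [(X'X)^{-1}]_kk); under MLR{1 3 4 5 6},
# (b_k - beta_k)/se(b_k) ~ t_{n-K}
se = np.sqrt(s2 * np.diag(XtX_inv))
t = (b - beta) / se   # uses the true beta, so these should look like t_{n-K} draws

# F statistic for H0: R beta = r, here J = 2 restrictions on the slopes
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
r = np.array([2.0, -0.5])
J = R.shape[0]
d = R @ b - r
F = d @ np.linalg.solve(R @ (s2 * XtX_inv) @ R.T, d) / J   # ~ F_{J, n-K} under H0

print("b  =", b)
print("se =", se)
print("F  =", F)
```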
In the diagram, I stick to the brute mathematics, which is entirely independent of its (causal) interpretation.¹
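To illustrate the "Estimation of Var(b) without MLR5" box above, here is a minimal sketch, again on a hypothetical simulated design of my own, of White's (1980) heteroskedasticity-robust variance estimator next to the classical one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design in which MLR5 fails: Var(u_i | x_i) grows with |x_i|.
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
U = rng.normal(size=n) * (0.5 + np.abs(X[:, 1]))   # heteroskedastic errors
Y = X @ beta + U

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y
Uhat = Y - X @ b

# Classical estimate s^2 (X'X)^{-1}: no longer valid for Var(b | X) here.
s2 = Uhat @ Uhat / (n - X.shape[1])
V_classical = s2 * XtX_inv

# White sandwich estimate of (X'X)^{-1} X'(sigma_U^2 Omega) X (X'X)^{-1}:
# replace the unknown sigma_U^2 Omega by diag(uhat_i^2).
meat = X.T @ (Uhat[:, None] ** 2 * X)   # sum_i uhat_i^2 x_i x_i'
V_white = XtX_inv @ meat @ XtX_inv

print("classical se:", np.sqrt(np.diag(V_classical)))
print("robust se:   ", np.sqrt(np.diag(V_white)))
```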

A second diagram covers instrumental variables (IV) estimation. Its assumptions:

- MLR1: Y = Xβ + U
- IV2: {Xᵢ, Yᵢ, Zᵢ} are iid
- MLR3*: Q = plim_{n→∞} X′X/n has rank K, i.e. is positive definite
- IV3: Q_ZZ = plim_{n→∞} Z′Z/n has rank K
- IV4: ∀i, E[uᵢ | Z] = 0
- IV4′: E[uᵢ | Zᵢ] = 0 (IV2 and IV4′ together imply IV4)
- IV5 (relevance): Q_ZX = plim_{n→∞} Z′X/n has rank K
- MLR5: E[UU′ | X] = σ²_U I

The instrument matrix Z (n×K, just-identified case) is obtained by taking X and replacing each column for which we have an instrument by the values of the instrument. The IV estimator b_IV = (Z′X)⁻¹Z′Y can be expressed as b_IV = β + (Z′X)⁻¹Z′U. Under MLR{1 3* 5} and IV2–5,

√n(b_IV − β) →ᵈ N(0, σ²_U Q_ZX⁻¹ Q_ZZ Q_XZ⁻¹).

In the over-identified case, the 2SLS proposal is to use as instruments X̂ = Z(Z′Z)⁻¹Z′X, the predicted value of X from a first-stage regression X = ZΘ + E. The 2SLS estimator is

b_2SLS = (X̂′X)⁻¹X̂′Y = (X′Z(Z′Z)⁻¹Z′X)⁻¹ X′Z(Z′Z)⁻¹Z′Y.

Under MLR{1 3* 5} and IV2–5,

√n(b_2SLS − β) →ᵈ N(0, σ²_U (Q_XZ Q_ZZ⁻¹ Q_ZX)⁻¹).
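Here is a minimal numpy sketch of both estimators under an assumed endogenous design (the shared shock v and all names are mine): OLS is inconsistent, b_IV recovers β, and in the just-identified case b_2SLS coincides with b_IV exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical just-identified design: one endogenous regressor x, one instrument z.
n = 5000
z = rng.normal(size=n)
v = rng.normal(size=n)
u = rng.normal(size=n) + 0.8 * v   # u correlated with x through the shared shock v
x = 1.0 + 0.7 * z + v              # relevance: x depends on z; z is independent of u
beta = np.array([1.0, 2.0])
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])   # X with the endogenous column replaced by z
Y = X @ beta + u

# OLS: inconsistent here because E[u_i | X] != 0
b_ols = np.linalg.solve(X.T @ X, X.T @ Y)

# IV (just-identified): b_IV = (Z'X)^{-1} Z'Y
b_iv = np.linalg.solve(Z.T @ X, Z.T @ Y)

# 2SLS: Xhat = Z (Z'Z)^{-1} Z'X (first-stage fitted values),
# b_2SLS = (Xhat'X)^{-1} Xhat'Y; with as many instruments as regressors
# this reproduces b_IV.
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
b_2sls = np.linalg.solve(Xhat.T @ X, Xhat.T @ Y)

print("OLS: ", b_ols)   # slope estimate biased away from 2.0
print("IV:  ", b_iv)
print("2SLS:", b_2sls)
```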

[[diagram: causal graph with X and U pointing into Y, where Y ≔ Xβ + U]]