Basic Econometrics Chapter 4: THE NORMALITY ASSUMPTION: Classical Normal Linear Regression Model (CNLRM)


Basic Econometrics Chapter 4: THE NORMALITY ASSUMPTION: Classical Normal Linear Regression Model (CNLRM)
Prof. Himayatullah, May 2004

4-2. The normality assumption
The CNLRM assumes that each u_i is distributed normally, u_i ~ N(0, σ²), with:
Mean: E(u_i) = 0 (Assumption 3)
Variance: E(u_i²) = σ² (Assumption 4)
Covariance: Cov(u_i, u_j) = E(u_i u_j) = 0 for i ≠ j (Assumption 5)
Note: For two normally distributed variables, zero covariance (or correlation) implies independence, so u_i and u_j are not only uncorrelated but also independently distributed. Therefore u_i ~ NID(0, σ²): Normally and Independently Distributed.
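The three moment conditions above can be checked empirically on simulated errors. This is a minimal sketch (assuming NumPy; σ² = 4 is an illustrative value, not from the text): it draws NID errors and confirms that their sample mean, variance, and cross-covariance behave as Assumptions 3-5 require.

```python
import numpy as np

# Illustrative sketch: draw u_i ~ NID(0, sigma^2) and check Assumptions 3-5.
rng = np.random.default_rng(0)
sigma2 = 4.0                      # hypothetical error variance
u = rng.normal(0.0, np.sqrt(sigma2), size=100_000)

mean_u = u.mean()                 # Assumption 3: E(u_i) = 0
var_u = u.var()                   # Assumption 4: E(u_i^2) = sigma^2
# Lag-1 sample covariance stands in for Cov(u_i, u_j), i != j (Assumption 5)
cov_lag1 = np.cov(u[:-1], u[1:])[0, 1]
```

With 100,000 draws the sample mean is near 0, the sample variance near 4, and the lag-1 covariance near 0, as the NID assumption predicts.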

4-2. The normality assumption (continued)
Why the normality assumption?
By the central limit theorem, with a few exceptions, the distribution of the sum of a large number of independent and identically distributed random variables tends to a normal distribution as the number of such variables increases indefinitely.
Even if the number of variables is not very large, or they are not strictly independent, their sum may still be approximately normally distributed.
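The central limit theorem argument above is easy to illustrate by simulation. The sketch below (assuming NumPy; 50 terms per sum is an arbitrary illustrative choice) sums iid uniform variables, which are decidedly non-normal, and shows that the standardized sums behave like a standard normal.

```python
import numpy as np

# CLT sketch: sums of 50 iid U(0,1) draws are approximately normal.
rng = np.random.default_rng(1)
n_terms, n_sums = 50, 200_000
sums = rng.uniform(0.0, 1.0, size=(n_sums, n_terms)).sum(axis=1)

# A sum of n U(0,1) variables has mean n/2 and variance n/12; standardize:
z = (sums - n_terms * 0.5) / np.sqrt(n_terms / 12.0)

p_tail = (z > 1.96).mean()                      # ~0.025 for a standard normal
skew = ((z - z.mean()) ** 3).mean() / z.std() ** 3  # ~0 for a symmetric normal shape
```

Even though each uniform term is flat rather than bell-shaped, the upper-tail frequency beyond 1.96 is close to the normal value of 2.5% and the skewness is close to zero.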

4-2. The normality assumption (continued)
Why the normality assumption?
Under the normality assumption for u_i, the OLS estimators β̂₁ and β̂₂ are also normally distributed.
The normal distribution is a comparatively simple distribution involving only two parameters (mean and variance).

4-3. Properties of OLS estimators under the normality assumption
With the normality assumption, the OLS estimators β̂₁, β̂₂, and σ̂² have the following properties:
1. They are unbiased.
2. They have minimum variance. Combining 1 and 2, they are efficient estimators.
3. They are consistent: as the sample size increases indefinitely, the estimators converge to their true population values.

4-3. Properties of OLS estimators under the normality assumption (continued)
4. β̂₁ is normally distributed: β̂₁ ~ N(β₁, σ²_β̂₁), and Z = (β̂₁ − β₁)/σ_β̂₁ ~ N(0, 1).
5. β̂₂ is normally distributed: β̂₂ ~ N(β₂, σ²_β̂₂), and Z = (β̂₂ − β₂)/σ_β̂₂ ~ N(0, 1).
6. (n − 2)σ̂²/σ² is distributed as χ²(n − 2).
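Property 6 can also be verified by simulation. The sketch below (assuming NumPy; parameter values are illustrative) computes (n − 2)σ̂²/σ² in repeated samples and checks that its mean and variance match those of a χ²(n − 2) distribution, namely n − 2 and 2(n − 2).

```python
import numpy as np

# Sketch: (n-2) * sigma_hat^2 / sigma^2 should behave like chi-square(n-2).
rng = np.random.default_rng(3)
beta1, beta2, sigma = 1.0, 0.5, 2.0   # illustrative true values
n, reps = 20, 30_000
x = np.arange(n, dtype=float)

stats = np.empty(reps)
for r in range(reps):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, n)
    b2 = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
    b1 = y.mean() - b2 * x.mean()
    resid = y - b1 - b2 * x
    sigma2_hat = (resid**2).sum() / (n - 2)    # unbiased OLS estimator of sigma^2
    stats[r] = (n - 2) * sigma2_hat / sigma**2

# chi-square(18) has mean 18 and variance 36
mean_stat, var_stat = stats.mean(), stats.var()
```

With n = 20, the simulated statistic has mean close to 18 and variance close to 36, exactly the χ²(18) moments.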

4-3. Properties of OLS estimators under the normality assumption (continued)
7. β̂₁ and β̂₂ are distributed independently of σ̂². They have minimum variance in the entire class of unbiased estimators, whether linear or not; they are best unbiased estimators (BUE).
8. Let u_i ~ N(0, σ²); then Y_i ~ N[E(Y_i), Var(Y_i)] = N[β₁ + β₂X_i, σ²].

Some last points of chapter 4
4-4. The method of maximum likelihood (ML)
ML is a point-estimation method with some stronger theoretical properties than OLS (Appendix 4.A on pages 110-114).
The OLS and ML estimators of the β coefficients are identical.
The ML estimator of σ² is Σû_i²/n, which is biased; the OLS estimator is Σû_i²/(n − 2), which is unbiased.
As the sample size n gets larger, the two estimators tend to be equal.
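The bias contrast between the two variance estimators is visible in repeated sampling. This sketch (assuming NumPy; β₁ = 1, β₂ = 0.5, σ² = 4, n = 10 are illustrative values chosen to make the bias pronounced) computes both estimators from the same residual sum of squares.

```python
import numpy as np

# Sketch: ML divides the RSS by n (biased); OLS divides by n-2 (unbiased).
rng = np.random.default_rng(4)
beta1, beta2, sigma2 = 1.0, 0.5, 4.0   # illustrative true values
n, reps = 10, 50_000
x = np.arange(n, dtype=float)

ml_est = np.empty(reps)
ols_est = np.empty(reps)
for r in range(reps):
    y = beta1 + beta2 * x + rng.normal(0.0, np.sqrt(sigma2), n)
    b2 = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
    b1 = y.mean() - b2 * x.mean()
    rss = ((y - b1 - b2 * x) ** 2).sum()
    ml_est[r] = rss / n           # ML: E = sigma^2 * (n-2)/n, biased downward
    ols_est[r] = rss / (n - 2)    # OLS: E = sigma^2, unbiased
```

With n = 10 the ML estimator averages about σ²·(n − 2)/n = 3.2 instead of 4, while the OLS estimator averages 4; as n grows, the factor (n − 2)/n → 1 and the two estimators converge, as the slide states.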

Some last points of chapter 4 (continued)
4-5. Probability distributions related to the normal distribution: the t, χ², and F distributions. See Section 4.5 on pages 107-108 (with 8 theorems) and Appendix A on pages 755-776.
4-6. Summary and conclusions. See the 10 conclusions on pages 109-110.