Econ 140, Lecture 7: Classical Regression


Slide 1: Classical Regression (title slide)

Slide 2: Today's Plan
For the next few lectures we'll be talking about the classical regression model
– Looking at the estimators for a and b
– Inferences on what a and b actually tell us
Today: how to operationalize the model
– Looking at BLUE for the bivariate model
– Next lecture: inference and hypothesis tests using the t, F, and χ² distributions
– Examples of linear regressions using Excel

Slide 3: Estimating coefficients
Our model: Y = a + bX + e
Two things to keep in mind about this model:
1) It is linear in both variables and parameters
– Examples of non-linearity in variables: Y = a + bX² or Y = a + beˣ
– Example of non-linearity in parameters: Y = a + b²X
OLS can cope with non-linearity in variables but not in parameters
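As a quick illustration (not from the slides; the data below are simulated with assumed true values a = 2 and b = 0.5), OLS handles non-linearity in variables by regressing Y on the transformed regressor. A minimal Python sketch fitting Y = a + bX²:

```python
import numpy as np

# Simulated data (assumed values): Y = 2 + 0.5*X^2 + e.
# The model is non-linear in the variable X but linear in the
# parameters a and b, so OLS applies after transforming X to X^2.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=200)
Y = 2.0 + 0.5 * X**2 + rng.normal(0, 1, size=200)

Z = X**2                                   # transformed regressor
z = Z - Z.mean()
b_hat = np.sum(z * (Y - Y.mean())) / np.sum(z**2)
a_hat = Y.mean() - b_hat * Z.mean()
print(a_hat, b_hat)                        # close to 2 and 0.5
```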

Slide 4: Estimating coefficients (3)
2) Notation: we're not estimating a and b anymore
We are estimating coefficients, which are estimates of the parameters a and b
We will denote the coefficients as â (or α̂) and b̂ (or β̂)
We are dealing with a sample of size n
– For each sample we will get a different (α̂, β̂) pair

Slide 5: Estimating coefficients (4)
In the same way that you can take a sample to get an estimate of µ_Y, you can take a sample to get an estimate of the regression line, that is, of α and β

Slide 6: The independent variable
We also have a given variable X whose values are known
– This is called the independent variable
Again, the expectation of Y given X is E(Y|X) = a + bX
With constant variance V(Y) = σ²

Slide 7: A graph of the model
[Figure: scatter of data points such as (Y₁, X₁) with the fitted regression line; Y on the vertical axis, X on the horizontal axis]

Slide 8: What does the error term do?
The error term gives us the test statistics and tells us how well the model Y = a + bX + e fits the data
The error term represents:
1) Given that Y is a random variable, e is also random, since e is a function of Y
2) Variables not included in the model
3) Random behavior of people
4) Measurement error
5) It enables a model to remain parsimonious: you don't want all possible variables in the model if some have little or no influence

Slide 9: Rewriting beta
Our complete model is Y = a + bX + e
We will never know the true value of the error e, so we will estimate the equation Y = α̂ + β̂X + ê
For our known values of X we have estimates α̂, β̂, and ê
So how do we know that our OLS estimators give us the BLUE estimate?
– To determine this we want to know the expected value of β̂ as an estimator of b, which is the population parameter

Slide 10: Rewriting beta (2)
To operationalize, we want to think of what we know
We know from lecture 2 that there should be no correlation between the errors and the independent variable
We also know that E(e|X) = 0
Now we have E(Y|X) = a + bX + E(e|X) = a + bX
The variance of Y given X is V(Y) = σ², so V(e|X) = σ²

Slide 11: Rewriting beta (3)
Rewriting β̂:
– In lecture 2 we found the following estimator:
  β̂ = Σ(Xᵢ − X̄)(Yᵢ − Ȳ) / Σ(Xᵢ − X̄)²
Using some definitions we can show that E(β̂) = b

Slide 12: Rewriting beta (4)
We have definitions that we can use: yᵢ = Yᵢ − Ȳ and xᵢ = Xᵢ − X̄
Using the definitions of yᵢ and xᵢ we can rewrite β̂ as
  β̂ = Σ xᵢyᵢ / Σ xᵢ²
So that, since Σ xᵢȲ = 0,
  β̂ = Σ xᵢYᵢ / Σ xᵢ²
We can also write β̂ as a weighted sum of the Yᵢ
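The deviation-form formula can be computed directly; here is a small sketch with made-up data (the X and Y values are illustrative, not from the lecture):

```python
import numpy as np

# beta_hat = sum(x_i * y_i) / sum(x_i^2), with x_i = X_i - Xbar, y_i = Y_i - Ybar
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # illustrative data
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

x = X - X.mean()
y = Y - Y.mean()
beta_hat = np.sum(x * y) / np.sum(x**2)
alpha_hat = Y.mean() - beta_hat * X.mean()  # intercept from the sample means
print(beta_hat, alpha_hat)                  # approximately 1.99 and 0.05
```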

Slide 13: Rewriting beta (5)
We can rewrite β̂ as
  β̂ = Σ cᵢYᵢ,  where cᵢ = xᵢ / Σ xⱼ²
The properties of cᵢ:
  Σ cᵢ = 0,  Σ cᵢXᵢ = Σ cᵢxᵢ = 1,  Σ cᵢ² = 1 / Σ xᵢ²
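These weight properties are easy to verify numerically; a sketch using arbitrary illustrative X values:

```python
import numpy as np

# c_i = x_i / sum(x_j^2) should satisfy:
#   sum(c_i) = 0, sum(c_i * X_i) = 1, sum(c_i^2) = 1 / sum(x_i^2)
X = np.array([1.0, 2.0, 4.0, 7.0, 11.0])   # arbitrary illustrative values
x = X - X.mean()
c = x / np.sum(x**2)

print(np.sum(c))                           # 0 (up to rounding)
print(np.sum(c * X))                       # 1
print(np.sum(c**2), 1.0 / np.sum(x**2))    # equal
```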

Slide 14: Showing unbiasedness
What do we know about the expected value of β̂?
  E(β̂) = E(Σ cᵢYᵢ)
We can rewrite this as
  E(β̂) = E(Σ cᵢ(a + bXᵢ + eᵢ))
Multiplying the brackets out we get
  E(β̂) = a Σ cᵢ + b Σ cᵢXᵢ + Σ cᵢE(eᵢ)
Since a and b are constants, they can be taken outside the sums

Slide 15: Showing unbiasedness (2)
Looking back at the properties of cᵢ we know that Σ cᵢ = 0 and Σ cᵢXᵢ = 1
Now we can write
  E(β̂) = a·0 + b·1 + 0 = b
We can conclude that the expected value of β̂ is b and that β̂ is an unbiased estimator of b
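Unbiasedness can also be seen by simulation: holding X fixed and redrawing the errors many times, the β̂ values average to the true b. A sketch with assumed parameters a = 1, b = 2, σ = 2:

```python
import numpy as np

# Monte Carlo check of E(beta_hat) = b with a fixed (non-random) X,
# as assumed in the CLRM. The true parameters are assumed for the demo.
rng = np.random.default_rng(42)
X = np.linspace(0.0, 10.0, 50)             # fixed regressor values
x = X - X.mean()

betas = []
for _ in range(5000):
    e = rng.normal(0.0, 2.0, size=X.size)  # fresh errors for each sample
    Y = 1.0 + 2.0 * X + e
    betas.append(np.sum(x * (Y - Y.mean())) / np.sum(x**2))
print(np.mean(betas))                      # close to the true b = 2
```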

Slide 16: Gauss Markov Theorem
We can now ask: is β̂ an efficient estimator?
The variance of β̂ is
  V(β̂) = σ² / Σ xᵢ² = σ² Σ cᵢ²
where xᵢ = Xᵢ − X̄ and cᵢ = xᵢ / Σ xⱼ²
How do we know that OLS is the most efficient estimator?
– The Gauss-Markov Theorem

Slide 17: Gauss Markov Theorem (2)
The argument is similar to our proof for the estimator of µ_Y
Suppose we use a new weight c′ᵢ = cᵢ + dᵢ
We can take the expected value of the resulting estimator β̃ = Σ c′ᵢYᵢ

Slide 18: Gauss Markov Theorem (3)
We know that
  E(β̃) = a Σ (cᵢ + dᵢ) + b Σ (cᵢ + dᵢ)Xᵢ
– For β̃ to be unbiased, the following must be true: Σ dᵢ = 0 and Σ dᵢXᵢ = 0

Slide 19: Gauss Markov Theorem (4)
Efficiency (best)?
We have
  V(β̃) = σ² Σ c′ᵢ²,  where c′ᵢ = cᵢ + dᵢ
Therefore the variance of this new estimator is
  V(β̃) = σ² (Σ cᵢ² + Σ dᵢ² + 2 Σ cᵢdᵢ)
The unbiasedness conditions make the cross term Σ cᵢdᵢ = 0
If any dᵢ ≠ 0, so that c′ᵢ ≠ cᵢ, then V(β̃) > V(β̂)
So when we use the weights c′ᵢ we have an inefficient estimator
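This variance comparison can be checked numerically. In the sketch below the perturbation dᵢ is an arbitrary choice satisfying Σdᵢ = 0 and ΣdᵢXᵢ = 0 (so the perturbed estimator stays unbiased); the cross term Σcᵢdᵢ then vanishes and the extra variance is σ²Σdᵢ²:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # illustrative regressor values
x = X - X.mean()
c = x / np.sum(x**2)                       # OLS weights

# Arbitrary perturbation with sum(d) = 0 and sum(d*X) = 0,
# which keeps the perturbed estimator unbiased.
d = np.array([1.0, -2.0, 0.0, 2.0, -1.0])

sigma2 = 1.0                               # assumed error variance
var_ols = sigma2 * np.sum(c**2)            # variance of the OLS estimator
var_new = sigma2 * np.sum((c + d)**2)      # variance with weights c' = c + d

print(np.sum(c * d))                       # cross term: 0
print(var_ols, var_new)                    # var_new exceeds var_ols by sigma2*sum(d^2)
```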

Slide 20: Gauss Markov Theorem (5)
We can conclude that β̂ is BLUE: the Best Linear Unbiased Estimator

Slide 21: Wrap up
What did we cover today?
Introduced the classical linear regression model (CLRM)
Assumptions under the CLRM:
1) Xᵢ is non-random (it's given)
2) E(eᵢ) = E(eᵢ|Xᵢ) = 0
3) V(eᵢ) = V(eᵢ|Xᵢ) = σ²
4) Cov(eᵢ, eⱼ) = 0 for i ≠ j
Talked about estimating coefficients
Defined the properties of the error term
Gave a proof by contradiction of the Gauss-Markov Theorem