Charles University FSV UK, STAKAN III. Institute of Economic Studies, Faculty of Social Sciences.


Jan Ámos Víšek: Econometrics, Eighth Lecture. Tuesday. Institute of Economic Studies, Faculty of Social Sciences, Charles University FSV UK, STAKAN III.

Schedule of today's talk: Hausman specification test - does it work reliably? What happens when the orthogonality condition does not hold? Instrumental variables - when should we use them? (Multi)collinearity - today only questions.

At the end of the previous lecture we discussed the reformulation of OLS for the random-carriers framework. Let us recall that the "Assumptions on disturbances" were modified to
- the orthogonality condition $E[X^T\varepsilon] = 0$,
- the sphericality condition $\mathrm{var}(\varepsilon \mid X) = \sigma^2 I_n$,
etc., with the other assumptions. We have seen in the Sixth Lecture that in the deterministic-carriers framework the orthogonality condition need not be verified, it is built into the model, and hence there $\hat{\beta}^{(OLS)}$ is in fact always unbiased. Is it also true for the random-carriers framework? Unfortunately, NO!!

We have seen in the Third Lecture that $E[\hat{\beta}^{(OLS)}] = \beta$. What happens if $E[X^T\varepsilon] \neq 0$, and how do we recognize it? For the random-carriers framework we have assumed:
Assumptions: the rows $(X_{i\cdot}, \varepsilon_i)$ are i.i.d. r.v.'s and $Q = E[X_{1\cdot}^T X_{1\cdot}]$ is regular.
Assertions: hence $\frac{1}{n} X^T X \to Q$ almost surely, and then also $\hat{\beta}^{(OLS)} \to \beta$ almost surely.

What happens if $E[X^T\varepsilon] \neq 0$, and how do we recognize it? (continued)
Assertions: if moreover $\frac{1}{n} X^T \varepsilon \to q \neq 0$ almost surely, then however $\hat{\beta}^{(OLS)} \to \beta + Q^{-1}q \neq \beta$, i.e. $\hat{\beta}^{(OLS)}$ is not consistent and generally also not unbiased. We need another estimator for $\beta$. (We shall answer the question "How to recognize it?" later.) Let us look for an inspiration how to define an alternative estimator.
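The inconsistency above can be seen in a small Monte Carlo sketch (an illustration of my own, not from the lecture): the regressor $x$ shares a component $u$ with the disturbance, so $E[x\varepsilon] \neq 0$ and the OLS slope converges to $\beta + Q^{-1}q$ instead of $\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

# The regressor x shares the component u with the disturbance eps,
# so E[x * eps] = 1 != 0: the orthogonality condition is broken.
u = rng.normal(size=n)
x = rng.normal(size=n) + u            # var(x) = 2   (this is Q)
eps = rng.normal(size=n) + u          # cov(x, eps) = 1   (this is q)
y = beta * x + eps

b_ols = (x @ y) / (x @ x)             # OLS slope in the no-intercept model
print(b_ols)                          # close to beta + q/Q = 2.5, not 2.0
```

Enlarging n does not help: the limit itself is wrong, which is exactly what "not consistent" means.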

Looking for an alternative estimator. 1st step: Let us recall that $\hat{\beta}^{(OLS)}$ was obtained as the solution of the normal equations. Let us write them as $X^T Y = X^T X \hat{\beta}$ and keep in mind this form of them!! Let us forget (for a moment) how we have derived them and try to "discover" them in another way.

Looking for an alternative estimator. 2nd step: Remember that the regression model is given as $Y = X\beta + \varepsilon$, so that multiplying the model from the left by $X^T$ we obtain $X^T Y = X^T X \beta + X^T \varepsilon$. If $E[X^T\varepsilon] = 0$, the last term vanishes in the mean, and it might be an inspiration for looking for an estimator as a solution of $X^T Y = X^T X \hat{\beta}$, i.e. as the solution of the normal equations.
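A minimal numerical sketch (my own, not from the slides): solving the normal equations $X^T Y = X^T X \hat{\beta}$ directly gives the same $\hat{\beta}^{(OLS)}$ as a standard least-squares routine.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # design with intercept
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# Solve the normal equations X'Y = X'X b directly ...
b_normal = np.linalg.solve(X.T @ X, X.T @ y)
# ... and compare with the numerically safer least-squares routine.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b_normal, b_lstsq))   # True
```

In practice `lstsq` (a QR/SVD-based solver) is preferred because forming $X^TX$ squares the condition number, but the two answers agree on a well-conditioned design.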

Looking, in the case when $E[X^T\varepsilon] \neq 0$, for an alternative estimator (continued). Crucial step: Try to find a matrix $Z$ of type $n \times p$ such that $E[Z^T\varepsilon] = 0$ and which is "similar" to $X$. Multiplying then the model from the left by $Z^T$, we obtain $Z^T Y = Z^T X \beta + Z^T \varepsilon$, and $E[Z^T\varepsilon] = 0$ implies that $E[Z^T Y] = E[Z^T X]\beta$.

Looking, in the case when $E[X^T\varepsilon] \neq 0$, for an alternative estimator (continued). It may indicate (or hint) that the estimator defined as the solution of the equations $Z^T Y = Z^T X \hat{\beta}$ can be acceptable instead of OLS. The method is known as "the estimation by means of instrumental variables" and the estimator is usually denoted $\hat{\beta}^{(IV)}$. Prior to continuing: The matrix $Z$ is called the matrix of instrumental variables, i.e. its columns are the instrumental variables.

Instrumental variables estimator. So let us repeat: We consider a matrix $Z$ of type $n \times p$ such that $E[Z^T\varepsilon] = 0$ and which is simultaneously "similar" to $X$. Moreover, we usually assume that $Z^T X$ is regular. The estimation realized by the estimator $\hat{\beta}^{(IV)} = (Z^T X)^{-1} Z^T Y$, defined as the solution of the equations $Z^T Y = Z^T X \hat{\beta}$, will be called the "Estimation by means of instrumental variables". An example clarifying the word "similar" will be given.
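The definition can be sketched in code (my own simulation with a hypothetical instrument z): on endogenous data, the OLS slope is biased while $\hat{\beta}^{(IV)} = (Z^T X)^{-1} Z^T Y$ recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200_000, 2.0

u = rng.normal(size=n)                 # common shock causing endogeneity
z = rng.normal(size=n)                 # instrument: drives x, unrelated to eps
x = z + u + rng.normal(size=n)
eps = u + rng.normal(size=n)
y = beta * x + eps

X = np.column_stack([np.ones(n), x])   # carriers
Z = np.column_stack([np.ones(n), z])   # instrumental variables, same shape as X

b_ols = np.linalg.solve(X.T @ X, X.T @ y)   # solves X'Y = X'X b
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)    # solves Z'Y = Z'X b
print(b_ols[1], b_iv[1])               # OLS slope biased upward; IV near 2.0
```

Note that the only change against OLS is replacing the left factor $X^T$ by $Z^T$; the intercept column of $Z$ is its own instrument, which illustrates the "similar to $X$" requirement.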

Instrumental variables estimator (an example). Let us consider the model with one explanatory variable in which the lagged values of the explanatory variable are relevant for the response variable, $Y_t = \alpha + \beta \sum_{j=0}^{\infty} \lambda^j X_{t-j} + \varepsilon_t$ with $|\lambda| < 1$. We are not able to estimate $\beta$ and $\lambda$ directly, so let us write the same model also for $Y_{t-1}$. Remember: The OLS is a "black box" - we put the data in and the value of the estimator comes out - while the IV requires a decision about what to use as instrumental variables.

Instrumental variables estimator (an example - continued). Multiply the last equation by $\lambda$ and finally subtract it from the original model. We arrive at $Y_t = \alpha(1-\lambda) + \beta X_t + \lambda Y_{t-1} + (\varepsilon_t - \lambda \varepsilon_{t-1})$, with estimable coefficients but also with the regressor $Y_{t-1}$ correlated with the disturbance $\varepsilon_t - \lambda \varepsilon_{t-1}$.

Instrumental variables estimator (an example - continued). So we need an instrumental variable for $Y_{t-1}$ (sometimes we say simply an instrument). Since $Y_{t-1}$ is proportional (mainly) to $X_{t-1}$, and we have $E[X_{t-1}(\varepsilon_t - \lambda \varepsilon_{t-1})] = 0$, we can take $X_{t-1}$ as the instrument. Then we obtain $\hat{\beta}^{(IV)}$ from the model.
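The lagged-variable example can be simulated (a sketch of my own for a Koyck-type model; the variable names are illustrative): OLS on the transformed equation is biased for $\lambda$, while using $x_{t-1}$ as the instrument for $y_{t-1}$ recovers it.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam, beta = 100_000, 0.5, 1.0
x = rng.normal(size=n)
eps = rng.normal(size=n)

# Geometric distributed lag: s_t = x_t + lam * s_{t-1}, y_t = beta*s_t + eps_t.
s = np.zeros(n)
for t in range(1, n):
    s[t] = x[t] + lam * s[t - 1]
y = beta * s + eps

# Koyck form: y_t = beta*x_t + lam*y_{t-1} + (eps_t - lam*eps_{t-1}),
# where the regressor y_{t-1} is correlated with the new disturbance.
Yt, Y1, Xt, X1 = y[2:], y[1:-1], x[2:], x[1:-1]
ones = np.ones_like(Yt)
R = np.column_stack([ones, Xt, Y1])   # carriers (1, x_t, y_{t-1})
Z = np.column_stack([ones, Xt, X1])   # instruments: x_{t-1} replaces y_{t-1}

lam_ols = np.linalg.solve(R.T @ R, R.T @ Yt)[2]   # biased for lam
lam_iv = np.linalg.solve(Z.T @ R, Z.T @ Yt)[2]    # close to lam = 0.5
print(lam_ols, lam_iv)
```

The exogenous columns (the intercept and $x_t$) serve as their own instruments; only the endogenous $y_{t-1}$ is replaced.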

.... and how to recognize it? Now let us return to this question. First of all, observe: If $E[X^T\varepsilon] = 0$, both estimators are unbiased. If $E[X^T\varepsilon] \neq 0$, $\hat{\beta}^{(OLS)}$ is biased while $\hat{\beta}^{(IV)}$ is unbiased. In other words: the difference $\hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)}$ is likely large iff $E[X^T\varepsilon] \neq 0$. ("iff" means "if and only if".)

.... and how to recognize it? (continued) Jerry A. Hausman (1978) specified what "large difference" means. He proposed to consider the quadratic form $H = (\hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)})^T \left[ \widehat{\mathrm{var}}(\hat{\beta}^{(IV)}) - \widehat{\mathrm{var}}(\hat{\beta}^{(OLS)}) \right]^{-1} (\hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)})$. The test is now known as the Hausman specification test. By the way, the idea came from the Neyman-Pearson lemma.
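A compact sketch of the statistic in the single-regressor case (my own simulation; taking $\hat{\sigma}^2$ from the IV residuals is one common choice, not the only one):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 50_000, 2.0
u = rng.normal(size=n)
z = rng.normal(size=n)
x = z + u + rng.normal(size=n)
y = beta * x + (u + rng.normal(size=n))   # endogenous regressor

X, Z = x[:, None], z[:, None]             # single carrier, no intercept
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

s2 = np.mean((y - X @ b_iv) ** 2)         # sigma^2 estimated from IV residuals
V_ols = s2 * np.linalg.inv(X.T @ X)
ZXi = np.linalg.inv(Z.T @ X)
V_iv = s2 * ZXi @ (Z.T @ Z) @ ZXi.T       # var of IV: s2 (Z'X)^-1 Z'Z (X'Z)^-1

d = b_iv - b_ols
H = float(d @ np.linalg.inv(V_iv - V_ols) @ d)
print(H > 3.84)                           # 3.84 = 95% quantile of chi^2_1
```

Under endogeneity as simulated here, $H$ is huge and the test rejects; under $E[X^T\varepsilon] = 0$ it would be below the critical value most of the time.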

Hausman specification test. We are going to sketch a proof.
Assumptions: Assume that the disturbances $\varepsilon_i$'s are i.i.d., $\varepsilon_i \sim N(0, \sigma^2)$, and put $\hat{\beta}^{(OLS)} = (X^T X)^{-1} X^T Y$ and $\hat{\beta}^{(IV)} = (Z^T X)^{-1} Z^T Y$. Moreover, let us put $d = \hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)}$, and let both $X^T X$ and $Z^T X$ be regular. Denote finally $V = \mathrm{var}(\hat{\beta}^{(IV)}) - \mathrm{var}(\hat{\beta}^{(OLS)})$.
Assertions: Then we have $H = d^T V^{-1} d \sim \chi^2_p$. Notice that $X(X^T X)^{-1} X^T Z$ is a projection of $Z$ by $P_X = X(X^T X)^{-1} X^T$ into the space spanned by the columns of $X$.

Hausman specification test - proof. First of all, let us find $\mathrm{var}(\hat{\beta}^{(IV)})$. From the Third Lecture we have $\mathrm{var}(\hat{\beta}^{(OLS)}) = \sigma^2 (X^T X)^{-1}$, and along similar lines we can find that $\mathrm{var}(\hat{\beta}^{(IV)}) = \sigma^2 (Z^T X)^{-1} Z^T Z (X^T Z)^{-1}$. Then $d = \hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)} = A Y = A \varepsilon$ with $A = (Z^T X)^{-1} Z^T - (X^T X)^{-1} X^T$ (note that $AX = 0$).

Hausman specification test - proof (continued). Let us evaluate $\mathrm{var}(d) = \sigma^2 A A^T = \sigma^2 \left[ (Z^T X)^{-1} Z^T Z (X^T Z)^{-1} - (Z^T X)^{-1} Z^T X (X^T X)^{-1} - (X^T X)^{-1} X^T Z (X^T Z)^{-1} + (X^T X)^{-1} X^T X (X^T X)^{-1} \right]$. Since $(Z^T X)^{-1} Z^T X = I_p$ and $X^T Z (X^T Z)^{-1} = I_p$, the last three terms together give $-(X^T X)^{-1}$.

Hausman specification test - proof (continued). So we know that $\mathrm{var}(d) = \sigma^2 \left[ (Z^T X)^{-1} Z^T Z (X^T Z)^{-1} - (X^T X)^{-1} \right] = \mathrm{var}(\hat{\beta}^{(IV)}) - \mathrm{var}(\hat{\beta}^{(OLS)}) = V$. Since $P_X = X(X^T X)^{-1} X^T$ is a projection matrix, it is idempotent, i.e. $P_X P_X = P_X$. Then $V = \sigma^2 (Z^T X)^{-1} Z^T (I_n - P_X) Z (X^T Z)^{-1}$ is positive semidefinite.

Hausman specification test - proof (continued). Recalling that we have denoted $d = \hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)}$ and $V = \mathrm{var}(d)$, we have $d = A\varepsilon$ and hence, under normality, $d \sim N(0, V)$. Since $V$ is real and symmetric, it can be diagonalized (see the Third Lecture), i.e. (assuming $V$ positive definite) there is a regular matrix $C$ such that $V = C C^T$.

Hausman specification test - proof (continued). Now, put $w = C^{-1} d$. Then $E[w] = 0$ and $\mathrm{var}(w) = C^{-1} V (C^{-1})^T = I_p$, hence $w \sim N(0, I_p)$. It means that $w^T w = d^T (C C^T)^{-1} d = d^T V^{-1} d \sim \chi^2_p$.

Hausman specification test - proof (continued). On the other hand, recalling that $V = \mathrm{var}(\hat{\beta}^{(IV)}) - \mathrm{var}(\hat{\beta}^{(OLS)})$, i.e. the matrix in the quadratic form is exactly the variance of the difference $d$, we can finally write $H = d^T \left[ \mathrm{var}(\hat{\beta}^{(IV)}) - \mathrm{var}(\hat{\beta}^{(OLS)}) \right]^{-1} d \sim \chi^2_p$.

Hausman specification test - proof (continued). Q.E.D.
Remark: Since $\sigma^2$ is unknown, it is substituted by $\hat{\sigma}^2$; then, however, $H \sim \chi^2_p$ holds only asymptotically. We can get rid also of the assumption of normality of the disturbances, since the asymptotic normality of $d$ is still available; the proof is only a bit more complicated. However, neither $\hat{\beta}^{(OLS)}$ nor $\hat{\beta}^{(IV)}$ is then optimal (BUE), hence insisting on exact optimality is worthless.

Hausman specification test - let us repeat what we proved: Assume that the disturbances $\varepsilon_i$'s are i.i.d. $N(0, \sigma^2)$ and put $d = \hat{\beta}^{(IV)} - \hat{\beta}^{(OLS)}$ and $V = \mathrm{var}(\hat{\beta}^{(IV)}) - \mathrm{var}(\hat{\beta}^{(OLS)})$. Then we have $H = d^T V^{-1} d \sim \chi^2_p$.
Remark: Imagine that $E[X^T\varepsilon] \neq 0$, i.e. $\hat{\beta}^{(OLS)}$ is biased. However, we select the instruments in a not very appropriate way, so that although $\hat{\beta}^{(IV)}$ is unbiased, its realization is far away from $\beta$ but may be, unfortunately, close to $\hat{\beta}^{(OLS)}$. Then $H$ can be small and we conclude that $E[X^T\varepsilon] = 0$. Of course, one can easily imagine also the "opposite" error.  The Hausman test is to be employed with "high" care!!
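A numerical sketch of this caveat (my own, assumption-laden illustration): with a weak instrument the variance of $\hat{\beta}^{(IV)}$ explodes, so its realization can land far from $\beta$, or deceptively close to the biased OLS value, and the test loses its force. We compare the estimated variance $\hat{\sigma}^2 (Z^T X)^{-1} Z^T Z (X^T Z)^{-1}$ for a strong and a weak instrument:

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 50_000, 2.0
v = rng.normal(size=n)                  # exogenous driver of x
u = rng.normal(size=n)                  # endogeneity-causing shock
x = v + u
eps = u + rng.normal(size=n)
y = beta * x + eps

def iv_variance(z):
    """Estimated variance of the (no-intercept) IV slope: s2 * z'z / (z'x)^2."""
    b = (z @ y) / (z @ x)
    s2 = np.mean((y - b * x) ** 2)
    return s2 * (z @ z) / (z @ x) ** 2

v_strong = iv_variance(v)                              # relevant instrument
v_weak = iv_variance(0.05 * v + rng.normal(size=n))    # barely relevant one
print(v_weak / v_strong)                               # hundreds of times larger
```

The ratio grows like the inverse square of the instrument-regressor correlation, which is why a "not very appropriate" choice of instruments undermines the test.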

(Multi)collinearity. From the First Lecture we have assumed: the design matrix $X$ is of full rank. What happens if the design matrix is not of full rank? What happens if the design matrix is "nearly" singular? How to recognize it? What is a remedy for such a situation? What shall we do on the next lecture? The answers will be given on the next lecture!!
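One standard way to recognize a "nearly" singular design (a preview sketch of my own, ahead of the next lecture) is the condition number of the design matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)                      # almost a copy of x1
X_good = np.column_stack([np.ones(n), x1, rng.normal(size=n)])
X_bad = np.column_stack([np.ones(n), x1, x2])

# A huge condition number flags a 'nearly' singular design matrix.
print(np.linalg.cond(X_good))   # small: columns are close to orthogonal
print(np.linalg.cond(X_bad))    # enormous: x1 and x2 are nearly collinear
```

When the condition number is huge, $X^T X$ is nearly singular, and the OLS coefficients of the collinear columns become numerically unstable.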

What is to be learnt from this lecture for the exam?
- Orthogonality condition - what does happen if it is broken?
- Instrumental variables - measuring explanatory variables with random errors; lagged response variable as an explanatory one; the method and examples of instruments.
- Hausman specification test - how does it work?
All that you need is on