Charles University STAKAN III
Econometrics, Third Lecture
Tuesday, 14.00 – 15.20
Jan Ámos Víšek
FSV UK, Institute of Economic Studies, Faculty of Social Sciences
http://samba.fsv.cuni.cz/~visek/Econometrics_Up_To_2010/

Schedule of today's talk:
- Recalling OLS and the definition of a linear estimator.
- Proof of the theorem given at the end of the last lecture.
- Definition of the best (linear unbiased) estimator.
- Discussion of the restrictions that linearity imposes on estimators and on models.
- Under normality of disturbances, OLS is BUE (the best unbiased estimator).

Ordinary Least Squares (Czech: odhad metodou nejmenších čtverců)
Definition: An estimator of the form $\hat\beta = A Y$, where $A$ is a (non-random) $p \times n$ matrix, is called a linear estimator. Recall that OLS, $\hat\beta^{(OLS)} = (X^T X)^{-1} X^T Y$, is linear with $A = (X^T X)^{-1} X^T$.
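A minimal numerical sketch of the two definitions above (hypothetical data; numpy assumed, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix
beta0 = np.array([1.0, 2.0, -0.5])                              # true coefficients
e = rng.normal(scale=1.0, size=n)                               # disturbances
Y = X @ beta0 + e

# OLS as a linear estimator: beta_hat = A @ Y with A = (X'X)^{-1} X'
A = np.linalg.inv(X.T @ X) @ X.T
beta_hat = A @ Y
print(beta_hat)   # close to beta0
```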

Theorem
Assumptions: Let $\{e_i\}_{i=1}^{\infty}$ be a sequence of r.v.'s with $E[e_i] = 0$ and $\operatorname{cov}(e_i, e_j) = \sigma^2 \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, i.e. $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ for $i \neq j$.
Assertion: Then $\hat\beta^{(OLS)}$ is the best linear unbiased estimator.
Assumptions: If moreover $\lambda_{\min}(X^T X) \to \infty$ as $n \to \infty$ and the $e_i$'s are independent,
Assertion: then $\hat\beta^{(OLS)}$ is consistent.
Assumptions: If further $\frac{1}{n} X^T X \to Q$ as $n \to \infty$, with $Q$ a regular matrix,
Assertion: then $\sqrt{n}\,(\hat\beta^{(OLS)} - \beta^0) \xrightarrow{\ \mathcal{D}\ } N(0,\ \sigma^2 Q^{-1})$.

Proof that $\hat\beta^{(OLS)}$ is BLUE:
$\hat\beta^{(OLS)}$ is linear: $\hat\beta^{(OLS)} = (X^T X)^{-1} X^T Y = A Y$ with $A = (X^T X)^{-1} X^T$.
$\hat\beta^{(OLS)}$ is unbiased: $E[\hat\beta^{(OLS)}] = (X^T X)^{-1} X^T E[Y] = (X^T X)^{-1} X^T X \beta^0 = \beta^0$.
Remember that we have denoted the true vector of regression coefficients by $\beta^0$.

Definition: The estimator $\hat\beta$ is the best one in a given class of estimators if for any other estimator $\tilde\beta$ from that class, the matrix $\operatorname{var}(\tilde\beta) - \operatorname{var}(\hat\beta)$ is positive semidefinite, i.e. for any $\lambda \in \mathbb{R}^p$ we have $\lambda^T \left(\operatorname{var}(\tilde\beta) - \operatorname{var}(\hat\beta)\right) \lambda \ge 0$.
Recalling that $\operatorname{var}(\hat\beta^{(OLS)}) = \sigma^2 (X^T X)^{-1}$.

$\hat\beta^{(OLS)}$ is the best in the class of unbiased linear estimators: let $\tilde\beta = B Y$ be any linear unbiased estimator. Unbiasedness for every $\beta^0$ requires $E[\tilde\beta] = B X \beta^0 = \beta^0$, i.e. $B X = I$ (unit matrix).

$\hat\beta^{(OLS)}$ is the best in the class of unbiased linear estimators: write $B = (X^T X)^{-1} X^T + D$; then $B X = I$ forces $D X = 0$, and therefore
$\operatorname{var}(\tilde\beta) = \sigma^2 B B^T = \sigma^2 (X^T X)^{-1} + \sigma^2 D D^T$,
so $\operatorname{var}(\tilde\beta) - \operatorname{var}(\hat\beta^{(OLS)}) = \sigma^2 D D^T$ is positive semidefinite.
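The Gauss-Markov statement can also be checked by simulation. Below, a hypothetical alternative linear unbiased estimator (an arbitrary fixed reweighting $B = (X^T W X)^{-1} X^T W$, which satisfies $BX = I$) is compared with OLS; the weights and data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0 = np.array([1.0, 2.0])
W = np.diag(rng.uniform(0.5, 2.0, size=n))      # arbitrary fixed weights

A_ols = np.linalg.inv(X.T @ X) @ X.T            # OLS
A_alt = np.linalg.inv(X.T @ W @ X) @ X.T @ W    # another linear unbiased estimator (A_alt @ X = I)

est_ols = np.empty((reps, 2))
est_alt = np.empty((reps, 2))
for r in range(reps):
    Y = X @ beta0 + rng.normal(size=n)          # homoskedastic, uncorrelated disturbances
    est_ols[r] = A_ols @ Y
    est_alt[r] = A_alt @ Y

print(est_ols.var(axis=0))   # componentwise no larger than ...
print(est_alt.var(axis=0))   # ... the variance of the reweighted estimator
```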

$\hat\beta^{(OLS)}$ is consistent: denote $W^{(n)} = (X^T X)^{-1} X^T$, then $\hat\beta^{(OLS)} - \beta^0 = W^{(n)} e$, and put each coordinate into the form $\sum_{i=1}^{n} w_{in} e_i$, a weighted sum of the disturbances, so that the following law of large numbers applies.

$\hat\beta^{(OLS)}$ is consistent.
Lemma (a law of large numbers): Let $\{e_i\}_{i=1}^{\infty}$ be a sequence of independent r.v.'s with finite means $\mu_i$ and positive variances $\sigma_i^2$, $i = 1, 2, \ldots$. Let moreover $\sum_{i=1}^{n} w_{in}^2 \sigma_i^2 \to 0$ as $n \to \infty$. Then $\sum_{i=1}^{n} w_{in}(e_i - \mu_i) \to 0$ in probability.
Proof: For any $\varepsilon > 0$, Chebyshev's inequality gives
$P\!\left(\left|\sum_{i=1}^{n} w_{in}(e_i - \mu_i)\right| > \varepsilon\right) \le \varepsilon^{-2} \sum_{i=1}^{n} w_{in}^2 \sigma_i^2 \to 0.$

$\hat\beta^{(OLS)}$ is consistent. Recalling the previous slide's lemma and applying it coordinatewise with $\mu_i = 0$ and the weights $w_{in}$ taken from the rows of $(X^T X)^{-1} X^T$ (the assumption $\lambda_{\min}(X^T X) \to \infty$ guarantees $\sum_{i=1}^{n} w_{in}^2 \sigma_i^2 \to 0$), we obtain $\hat\beta^{(OLS)} - \beta^0 \to 0$ in probability.
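A sketch of the consistency claim on simulated data (all names and data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
beta0 = np.array([1.0, 2.0])
for n in (10, 100, 1000, 10000):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    Y = X @ beta0 + rng.standard_t(df=5, size=n)   # independent, zero-mean disturbances
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print(n, np.abs(beta_hat - beta0).max())       # estimation error shrinks with n
```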

$\hat\beta^{(OLS)}$ is asymptotically normal.
Central Limit Theorem (Feller-Lindeberg): Let $\{e_i\}_{i=1}^{\infty}$ be a sequence of independent r.v.'s with finite means $\mu_i$ and positive variances $\sigma_i^2$, $i = 1, 2, \ldots$. Let moreover $s_n^2 = \sum_{i=1}^{n} \sigma_i^2$ and $F_i$ be the d.f. of $e_i$. Then
$\frac{1}{s_n} \sum_{i=1}^{n} (e_i - \mu_i) \xrightarrow{\ \mathcal{D}\ } N(0, 1)$ and $\max_{1 \le i \le n} \frac{\sigma_i^2}{s_n^2} \to 0$
if and only if for any $\varepsilon > 0$ the Lindeberg condition holds:
$\frac{1}{s_n^2} \sum_{i=1}^{n} \int_{|x - \mu_i| > \varepsilon s_n} (x - \mu_i)^2 \, dF_i(x) \to 0.$

$\hat\beta^{(OLS)}$ is asymptotically normal.
Varadarajan theorem: Let $\{Z_n\}_{n=1}^{\infty}$ be a sequence of random vectors from $\mathbb{R}^p$ with d.f.'s $F_n(z)$. Further, for any $\lambda \in \mathbb{R}^p$, let $F_n^{(\lambda)}(t)$ be the d.f. of $\lambda^T Z_n$. Moreover, let $F(z)$ be the d.f. of a vector $Z$ and $F^{(\lambda)}(t)$ be the d.f. of $\lambda^T Z$. If $F_n^{(\lambda)} \to F^{(\lambda)}$ for any $\lambda \in \mathbb{R}^p$, then $F_n \to F$.

$\hat\beta^{(OLS)}$ is asymptotically normal: firstly we verify the conditions of the Feller-Lindeberg theorem for $\lambda^T \frac{1}{\sqrt{n}} X^T e$, for arbitrary $\lambda \in \mathbb{R}^p$, and secondly we apply the Varadarajan theorem to conclude that $\frac{1}{\sqrt{n}} X^T e$ is asymptotically normal. Then we transform this asymptotically normally distributed vector by the matrix $\left(\frac{1}{n} X^T X\right)^{-1} \to Q^{-1}$, since $\sqrt{n}\,(\hat\beta^{(OLS)} - \beta^0) = \left(\frac{1}{n} X^T X\right)^{-1} \frac{1}{\sqrt{n}} X^T e$.
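A quick simulated check of the asymptotic normality: with deliberately non-normal (uniform) disturbances, the standardized OLS slope is still approximately N(0,1). The data and the choice of test are mine, not the lecture's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps = 200, 4000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta0 = np.array([1.0, 2.0])

slope = np.empty(reps)
for r in range(reps):
    Y = X @ beta0 + rng.uniform(-2, 2, size=n)     # non-normal disturbances
    slope[r] = np.linalg.lstsq(X, Y, rcond=None)[0][1]

z = (slope - beta0[1]) / slope.std()               # standardized slope estimates
print(stats.kstest(z, "norm"))                     # close to N(0,1) despite uniform errors
```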

REMARK: $\hat\beta^{(OLS)}$ is the best in the class of unbiased linear estimators and solves the normal equations $X^T X \hat\beta = X^T Y$. If either some of the $x_{ij}$'s or some of the $Y_i$'s are large, it may cause serious problems when solving the normal equations, and the solution can be rather strange. (See the next slides!)

[Figure: scatter plot with an outlier — the solution given by OLS is pulled toward the outlier, compared with a "reasonable" model neglecting the outlier.]

[Figure: scatter plot with a leverage point — the solution given by OLS is tilted toward the leverage point, compared with a "reasonable" model neglecting the leverage point.]
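The two pictures can be reproduced numerically; the following sketch (hypothetical data) shows a single leverage point dragging the OLS slope away from the model that fits the bulk of the data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, size=30)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=30)
x_lev = np.append(x, 10.0)      # one leverage point far out in x
y_lev = np.append(y, 1.0)

def ols_fit(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols_fit(x, y))            # close to the true (1, 2)
print(ols_fit(x_lev, y_lev))    # slope dragged toward the leverage point
```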

Conclusion I: The solution given by OLS may be different from what common sense would expect. One reason is that $\hat\beta^{(OLS)}$ is the best only among linear estimators. Drawing the data from the previous slide on the screen of a PC, common sense proposes to reject the leverage point and then apply OLS. We then obtain a "reasonable" model, but it can't be written as $\hat\beta = A Y$ where $Y$ is the response vector for all data, because the decision which point to reject makes $A$ depend on $Y$. So this estimator is not linear.
Conclusion II: The restriction to linear estimators can turn out to be drastic!!

And what does the restriction to a linear model represent? Remember, we have considered the model
Time total = -3.62 + 1.27 * Weight - 0.53 * Puls - 0.51 * Strength + 3.90 * Time per ¼-mile.
But it is easy to test whether the model
Time total = -3.62 + 1.27 * Weight + a * Weight² - 0.53 * Puls + b * Puls³ - 0.51 * Strength + c * log(Strength) + 3.90 * Time per ¼-mile
is not a better one (a sketch of such a test follows).
Weierstrass approximation theorem: the system of all polynomials is dense in the space of continuous functions on a compact set.
Conclusion III: The restriction to a linear regression model is not substantial.
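One way to carry out the test sketched above is an F-test of $a = b = c = 0$. The variable names follow the slide's example, but the data and generating coefficients below are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 60
# Hypothetical regressors named after the slide's example
weight   = rng.uniform(60, 100, n)
puls     = rng.uniform(50, 90, n)
strength = rng.uniform(10, 40, n)
time_qm  = rng.uniform(15, 25, n)
time_total = (-3.62 + 1.27*weight - 0.53*puls - 0.51*strength
              + 3.90*time_qm + rng.normal(scale=2.0, size=n))

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

X0 = np.column_stack([np.ones(n), weight, puls, strength, time_qm])
X1 = np.column_stack([X0, weight**2, puls**3, np.log(strength)])  # added terms

rss0, rss1 = rss(X0, time_total), rss(X1, time_total)
q, df = 3, n - X1.shape[1]
F = ((rss0 - rss1) / q) / (rss1 / df)
print(F, 1 - stats.f.cdf(F, q, df))   # F-test of a = b = c = 0
```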

What is the mutual relation between linearity of the estimator of the regression coefficients and linearity of the regression model? The answer is simpler than one would expect: NONE.

And why did OLS become so popular?
Firstly: It has a simple geometric interpretation, implying the existence of the solution together with an easy proof of its properties.
Secondly: There is a simple formula for evaluating it, although the evaluation need not be straightforward. Nowadays, however, there are plenty of implementations which are safe against numerical difficulties.
Conclusion IV: We should find the conditions under which OLS is the best estimator among all unbiased estimators (and use OLS only under these conditions).
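To illustrate the numerical point: on a nearly collinear design, the textbook formula with an explicit inverse of $X^T X$ can go astray, while a QR/SVD-based solver stays stable. A sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
# Ill-conditioned design: two nearly identical columns
X = np.column_stack([np.ones(n), x, x + 1e-9 * rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.01, size=n)

beta_normal_eq = np.linalg.inv(X.T @ X) @ X.T @ Y   # explicit inverse: fragile
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)  # SVD-based: safer
print(np.linalg.cond(X.T @ X))                      # huge condition number
print(beta_normal_eq)
print(beta_lstsq)
```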

Maximum Likelihood Estimator (Czech: maximálně věrohodný odhad)
Recalling the definition: let $e_i \sim N(0, \sigma^2)$ and let $f(Y; \beta, \sigma^2)$ be the density of the distribution of the observations.
Theorem
Assumptions: Let $e_1, e_2, \ldots, e_n$ be i.i.d. r.v.'s, $e_i \sim N(0, \sigma^2)$.
Assertion: Then $\hat\beta^{(ML)} = \hat\beta^{(OLS)}$ and it attains the Rao-Cramer lower bound, i.e. $\hat\beta^{(OLS)}$ is the best unbiased estimator (not only among the linear ones, hence more than BLUE).
Assumptions: If $\hat\beta$ is the best unbiased estimator attaining the Rao-Cramer lower bound of variance,
Assertion: then the disturbances are normally distributed and $\hat\beta = \hat\beta^{(OLS)}$.

Maximum Likelihood Estimator under the assumption of normality of disturbances:
$L(\beta, \sigma^2; Y) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left\{-\frac{(Y_i - X_i^T\beta)^2}{2\sigma^2}\right\}.$
A monotone transformation doesn't change the location of an extreme! Hence maximize
$\log L = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(Y_i - X_i^T\beta)^2.$
The first term is a constant with respect to $\beta$, and the change of sign changes "max" to "min"! Therefore
$\hat\beta^{(ML)} = \operatorname{argmin}_{\beta} \sum_{i=1}^{n}(Y_i - X_i^T\beta)^2 = \hat\beta^{(OLS)}.$
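The equivalence can be verified numerically by maximizing the log-likelihood directly. A sketch with hypothetical data; the generic scipy optimizer is my choice, not the lecture's:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

def neg_loglik(theta):
    # theta = (beta_0, beta_1, log sigma); parametrize sigma on the log scale
    beta, log_sigma = theta[:2], theta[2]
    sigma2 = np.exp(2 * log_sigma)
    r = Y - X @ beta
    return 0.5 * n * np.log(2 * np.pi * sigma2) + (r @ r) / (2 * sigma2)

ml = minimize(neg_loglik, x0=np.zeros(3)).x[:2]
ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(ml)    # numerically identical to ...
print(ols)   # ... the OLS solution, up to optimizer tolerance
```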

Recalling the Rao-Cramer lower bound on the variance of an unbiased estimator: denote the joint density of the disturbances by $f(e)$ and write $f(Y - X\beta)$ instead of it (as a function of $Y$, this is the density of the observations). If $\hat\beta$ is unbiased, then $\int \hat\beta(Y)\, f(Y - X\beta)\, dY = \beta$ for every $\beta$. Differentiating both sides with respect to $\beta$ and dividing and multiplying the integrand by $f(Y - X\beta)$ yields $E\!\left[\hat\beta(Y)\, \frac{\partial \log f(Y - X\beta)}{\partial \beta^T}\right] = I$.

So we have, in matrix form, $E\!\left[\hat\beta\, s^T\right] = I$, where $s = \partial \log f(Y - X\beta)/\partial \beta$ denotes the score. Assume that $\int f(Y - X\beta)\, dY = 1$ for all $\beta$; then differentiating this identity gives $E[s] = 0$, and hence also $E\!\left[(\hat\beta - \beta)\, s^T\right] = I$. The $\beta$ was arbitrary, hence write the true $\beta^0$ instead of it. In matrix form, multiply the identity by $\lambda^T$ from the left-hand side and by $\mu$ from the right one.

So we have, for any $\lambda, \mu \in \mathbb{R}^p$, $\lambda^T \mu = E\!\left[\lambda^T(\hat\beta - \beta)\, s^T \mu\right]$. Intermediate considerations: the right-hand side is the expectation of a product of two random quantities.

So we have, for any $\lambda, \mu \in \mathbb{R}^p$, $\lambda^T \mu = E\!\left[\lambda^T(\hat\beta - \beta)\, s^T \mu\right]$. But then, after further intermediate considerations, we may finally write the right-hand side as $E[u\,v]$ with the two scalar random variables $u = \lambda^T(\hat\beta - \beta)$ and $v = s^T \mu$.

So we have, for any $\lambda, \mu \in \mathbb{R}^p$, $\lambda^T \mu = E[u\,v]$. Applying the Cauchy-Schwarz inequality: $(\lambda^T \mu)^2 = (E[u\,v])^2 \le E[u^2]\, E[v^2]$.

So we have, for any $\lambda, \mu \in \mathbb{R}^p$ (notice, both r.v.'s are scalars!!), $(\lambda^T \mu)^2 \le E[u^2]\, E[v^2]$, i.e. $(\lambda^T \mu)^2 \le \left(\lambda^T \operatorname{var}(\hat\beta)\, \lambda\right)\left(\mu^T \mathcal{I}\, \mu\right)$, where $\mathcal{I} = E[s\, s^T]$ is the Fisher information matrix.

Since it holds for any $\mu$: assuming regularity of $\mathcal{I}$, select $\mu = \mathcal{I}^{-1}\lambda$, which gives $\lambda^T \operatorname{var}(\hat\beta)\, \lambda \ge \lambda^T \mathcal{I}^{-1} \lambda$; and since this holds for any $\lambda$, we have $\operatorname{var}(\hat\beta) \succeq \mathcal{I}^{-1}$ (in the sense of positive semidefinite matrices).

Since it holds for any $\lambda$, we have $\operatorname{var}(\hat\beta) \succeq \mathcal{I}^{-1}$, and under normality $\mathcal{I}^{-1} = \sigma^2 (X^T X)^{-1}$ (the inequality is in the sense of positive semidefinite matrices). We would like to reach equality! The Cauchy-Schwarz inequality has been applied on $u = \lambda^T(\hat\beta - \beta)$ and $v = s^T \mu$.

Hence the equality is reached iff $u = \lambda^T(\hat\beta - \beta)$ is a linear function of $v = s^T \mu$, i.e. $\hat\beta - \beta = A\, s + c$, where $A$ is a matrix and $c$ a vector. Remember the joint density of disturbances is $f(e) = (2\pi\sigma^2)^{-n/2} \exp\{-e^T e / (2\sigma^2)\}$, so that $s = X^T(Y - X\beta)/\sigma^2$.

Hence $\hat\beta(Y) = \beta + A\, \frac{X^T(Y - X\beta)}{\sigma^2} + c$; the left-hand side cannot depend on $\beta$, which forces $A\, \frac{X^T X}{\sigma^2} = I$. So $\hat\beta$ is to be unbiased, i.e. $E[\hat\beta] = \beta$ for any $\beta$, and so $c = 0$, with $A = \sigma^2 (X^T X)^{-1}$. Finally $\hat\beta = (X^T X)^{-1} X^T Y = \hat\beta^{(OLS)}$.

The proof of the opposite direction: if $\hat\beta$ attains the Rao-Cramer lower bound, then the equality in the Cauchy-Schwarz inequality is reached and hence (write $e$ instead of $Y - X\beta$) $\frac{\partial \log f(e)}{\partial \beta} = B\,(\hat\beta(Y) - \beta - c)$ for some matrix $B$ and vector $c$ (notice that after integration $E\!\left[\partial \log f / \partial \beta\right] = 0$, so unbiasedness yields $c = 0$).

Since $\frac{\partial \log f(e)}{\partial \beta} = -X^T\, \frac{\partial \log f(e)}{\partial e}$, for any regular matrix $B$ there is a vector $b$ so that $\frac{\partial \log f(e)}{\partial e}$ is a linear function of $e$ plus $b$. (This we only rewrote from the previous slide.) It has to hold for any $Y$ and any $\beta$, i.e. for any value of $e$, and hence $\log f(e)$ has to be a quadratic polynomial in $e$.

Imposing the marginal conditions ($f \ge 0$, $\int f(e)\, de = 1$ and $E[e] = 0$), we obtain finally that $f$ is the density of a normal distribution, i.e. the disturbances have to be normally distributed and $\hat\beta = \hat\beta^{(OLS)}$.
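A compact restatement of the whole chain of results in standard Cramér-Rao notation (a sketch under the usual regularity assumptions; the symbols $s$ and $\mathcal{I}$ are mine, not necessarily the lecture's):

```latex
\[
\text{score: } s(Y,\beta) = \frac{\partial \log f(Y - X\beta)}{\partial \beta}, \qquad
\mathcal{I}(\beta) = E\!\left[s\,s^{T}\right] \ \text{(Fisher information)},
\]
\[
\operatorname{var}(\hat\beta) \succeq \mathcal{I}(\beta)^{-1}
\quad \text{for every unbiased } \hat\beta,
\]
with equality iff $\hat\beta - \beta$ is a linear function of the score. For
$e \sim N(0, \sigma^{2} I_{n})$,
\[
s = \frac{X^{T}(Y - X\beta)}{\sigma^{2}}, \qquad
\mathcal{I}(\beta) = \frac{X^{T}X}{\sigma^{2}}, \qquad
\hat\beta^{(OLS)} - \beta = \sigma^{2}(X^{T}X)^{-1}\, s,
\]
so $\operatorname{var}(\hat\beta^{(OLS)}) = \sigma^{2}(X^{T}X)^{-1}$ attains the bound.
```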

What is to be learnt from this lecture for the exam?
- Linearity of the estimator and of the model: what advantages and what restrictions do they represent?
- What does it mean that "the estimator is the best in the class of ..."?
- OLS is the best unbiased estimator: the condition(s) for it.
All you need is on http://samba.fsv.cuni.cz/~visek/Econometrics_Up_To_2010