Charles University, FSV UK, STAKAN III, Institute of Economic Studies, Faculty of Social Sciences. Jan Ámos Víšek: Econometrics (Tuesday), Fourth lecture.

Schedule of today's talk:
A summary of previous results.
The best unbiased quadratic estimator of the variance of disturbances.
Distribution of quadratic forms: the Fisher-Cochran lemma.
Distribution of the unbiased estimator of variance.
Distribution of studentized estimators of regression coefficients.

What we already know about the linear regression model: the OLS estimator
is BLUE,
is consistent,
is asymptotically normal,
is the best among all unbiased estimators.
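The properties above concern the OLS estimator b = (X'X)⁻¹X'Y. As a minimal numeric sketch (the data and variable names are made up for this illustration), b can be computed by solving the normal equations:

```python
import numpy as np

# Minimal OLS sketch: solve the normal equations (X'X) b = X'Y.
# The data are invented so that Y lies exactly on the line y = 0 + 2x.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])          # n = 4 observations, p = 2 (intercept, slope)
Y = np.array([2.0, 4.0, 6.0, 8.0])

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)                      # approximately [0, 2]
```

Solving the linear system directly is numerically preferable to forming the inverse (X'X)⁻¹ explicitly.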

What haven't we discussed, up to now, about the linear regression model? In the literature written by statisticians the disturbances are usually called the "error term". This is probably the reason why (especially in Czech texts) we can meet the interpretation of the disturbances as "random errors of measurement" of the response variable. But then it is likely that the explanatory variables were also measured with random errors, and then the OLS estimator is biased! (We shall prove it later and propose a remedy.) Hence this interpretation is not tenable (Czech: udržitelný, obhajitelný)! Because if we accepted it, we should use something else than OLS from the very beginning.

Conclusion: We know how to estimate all coefficients of the linear regression model, except one. Which is it? (The variance σ² of the disturbances.)
Definition: An estimator of the form Y'AY, where A is an n x n matrix, is called a quadratic estimator.
Definition (recalling skewness and kurtosis; Czech: šikmost a špičatost): for a r.v. with mean μ and variance σ², skewness = E(X - μ)³/σ³ and kurtosis = E(X - μ)⁴/σ⁴. We'll need these in a minute.

Theorem.
Assumptions: Let ε_1, ..., ε_n be iid r.v.'s with E ε_i = 0 and var ε_i = σ². If E ε_i⁴ = 3σ⁴, or the diagonal elements of the projection matrix H = X(X'X)⁻¹X' are all the same, and we denote by RSS = e'e the sum of squared residuals (residual sum of squares),
Assertions: then s² = RSS/(n - p) is the best unbiased estimator of σ² among all unbiased quadratic estimators with positive definite matrix.
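On simulated (made-up) data, the estimator from the Theorem can be checked to be exactly the quadratic form Y'(M/(n - p))Y with the idempotent matrix M = I - X(X'X)⁻¹X'; the variable names below exist only for this sketch:

```python
import numpy as np

# s^2 = RSS / (n - p) as a quadratic estimator Y' A Y with A = M / (n - p).
rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
Y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # projection ("hat") matrix
M = np.eye(n) - H                      # residual-maker matrix, idempotent
e = M @ Y                              # residuals
s2 = (e @ e) / (n - p)                 # RSS / (n - p)

assert np.allclose(M @ M, M)                   # idempotency of M
assert np.isclose(s2, Y @ (M / (n - p)) @ Y)   # quadratic-form identity
```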

Lemma 1.
Assumptions: Let ε_1, ..., ε_n be r.v.'s with E ε = 0 and var ε = σ²·I. Denote the projection of Y on the column space of X by Ŷ = HY, and the residuals by e = Y - Ŷ = (I - H)Y.
Assertions: Then cov(Ŷ, e) = 0, and hence they are not correlated. If moreover ε is normally distributed, Ŷ and e are independent. Finally, E e = 0 and var e = σ²(I - H).

Proof. Both Ŷ and e are linear combinations of the disturbances, hence (under normality) they are normally distributed. From the orthogonality H(I - H) = 0 we conclude that Ŷ and e are not correlated, and from it, under normality, that they are independent.

Proof (continued). Let us evaluate the mean values and covariance matrices of Ŷ and e.

Corollary: Under normality, Ŷ and e (and hence the OLS estimator and s²) are independent.
Assertion 1: Let ε be r.v.'s with E ε = 0 and var ε = σ²·I. Then for any symmetric matrix A, E(ε'Aε) = σ²·tr(A), where "tr" stands for "trace". Moreover, for normal ε, var(ε'Aε) = 2σ⁴·tr(A²).
Assertion 2: tr(AB) = tr(BA).
Assertion 3: Let A be idempotent, i.e. AA = A. Then its eigenvalues are 0 or 1 and tr(A) = rank(A).
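The trace facts above are easy to verify numerically on arbitrary made-up matrices (a sketch, not part of the lecture):

```python
import numpy as np

# tr(AB) = tr(BA) for arbitrary square matrices.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# For an idempotent matrix (here a projection), trace equals rank.
X = rng.normal(size=(6, 2))
H = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(H @ H, H)                               # H is idempotent
assert np.isclose(np.trace(H), np.linalg.matrix_rank(H))   # both equal 2
```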

Assertion 4: Let A be positive definite (semidefinite); then its eigenvalues are positive (nonnegative).
Recalling that an eigenvector v and eigenvalue λ of a matrix A are given as Av = λv, v ≠ 0.
Assertion 5: Let A be symmetric; then the eigenvalues are real and the eigenvectors can be selected also real.
Remark: In general the eigenvalues and eigenvectors are complex valued (of course, they always exist).

Assertion 6: Let A be a square matrix of type n x n. Then for any eigenvalue there is an eigenvector v ≠ 0.
Assertion 7 - Spectral Decomposition (of a matrix):
Assumptions: Let A be a real symmetric matrix (of type n x n).
Assertions: Then there is an orthogonal real matrix P such that P'AP = Λ = diag(λ_1, ..., λ_n), where λ_1, ..., λ_n are the eigenvalues of the matrix A (while the columns of P are the eigenvectors of it). Of course, P'P = I and A = PΛP'.
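Assertion 7 in a two-by-two example (numbers invented for the sketch; NumPy's eigh is the routine for symmetric matrices):

```python
import numpy as np

# Spectral decomposition A = P Lambda P' of a symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # eigenvalues are 1 and 3
lam, P = np.linalg.eigh(A)              # eigh: for symmetric/Hermitian matrices

assert np.allclose(lam, [1.0, 3.0])
assert np.allclose(P.T @ P, np.eye(2))          # P is orthogonal
assert np.allclose(P @ np.diag(lam) @ P.T, A)   # A = P Lambda P'
```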

Proof of Theorem. Write M = I - H, so that e = MY = Mε and RSS = ε'Mε. In what follows, consider ε'Aε instead of Y'AY.

Consider an alternative estimator Y'AY with A real, symmetric and positive definite. An unbiased estimator cannot depend on β, hence AX = 0 and then Y'AY = ε'Aε. Note that M is idempotent.

Denote by a_ii the diagonal elements of A (A positive definite) and write E(ε'Aε) = σ²·tr(A), so that unbiasedness requires tr(A) = 1.

Put var(ε'Aε) = (E ε_1⁴ - 3σ⁴)·Σ_i a_ii² + 2σ⁴·tr(A²). Under either assumption of the Theorem the first term does not interfere with the selection of the matrix A (for E ε_1⁴ = 3σ⁴, i.e. e.g. for normal disturbances, it vanishes), so let us minimize tr(A²) subject to tr(A) = 1 and AX = 0.

For A = M/(n - p) + D (so that D is symmetric, DX = 0 and tr(D) = 0), we have tr(A²) = 1/(n - p) + tr(D²), since tr(M²) = tr(M) = n - p and tr(MD) = 0. Since tr(D²) ≥ 0, the minimum is attained for D = 0, i.e. for A = M/(n - p). Remember that Y'MY = RSS and E RSS = σ²·tr(M) = σ²(n - p), which concludes the proof.
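The final step of the proof can be checked with concrete numbers: for any admissible perturbation D (symmetric, DX = 0, tr D = 0) the value of tr(A²) exceeds 1/(n - p). Everything below is a made-up example:

```python
import numpy as np

# Verify tr(M) = n - p and tr(A^2) = 1/(n-p) + tr(D^2) for A = M/(n-p) + D.
rng = np.random.default_rng(2)
n, p = 8, 3
X = rng.normal(size=(n, p))
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
assert np.isclose(np.trace(M), n - p)

S = rng.normal(size=(n, n)); S = S + S.T    # arbitrary symmetric matrix
D = M @ S @ M                               # symmetric, and D X = 0
D = D - (np.trace(D) / (n - p)) * M         # force tr D = 0, keep D X = 0
A = M / (n - p) + D

assert np.allclose(A @ X, 0)                # estimator free of beta
assert np.isclose(np.trace(A), 1.0)         # unbiasedness constraint
assert np.isclose(np.trace(A @ A), 1/(n - p) + np.trace(D @ D))
assert np.trace(A @ A) >= 1/(n - p)         # minimum attained at D = 0
```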

DISTRIBUTION of QUADRATIC FORMS
Definition: The function q(x) = x'Ax, where A is a symmetric matrix, is called the quadratic form.
Lemma (Fisher-Cochran).
Assumptions: Let ε_1, ..., ε_n be N(0, 1), independent. Put Q = Σ_i ε_i². Moreover, let Q = Q_1 + ... + Q_k with Q_j = ε'A_jε, and finally let n_j = rank(A_j).
Assertions: Then Q_1, ..., Q_k are mutually independent and Q_j ~ χ²(n_j) iff n_1 + ... + n_k = n.
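The regression decomposition ε'ε = ε'Hε + ε'Mε is the standard application of the lemma; its rank condition can be checked numerically (made-up data, names chosen for the sketch):

```python
import numpy as np

# Fisher-Cochran rank condition for Q = eps'eps = eps'H eps + eps'M eps.
rng = np.random.default_rng(3)
n, p = 10, 4
X = rng.normal(size=(n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H

eps = rng.normal(size=n)
q1, q2 = eps @ H @ eps, eps @ M @ eps
assert np.isclose(q1 + q2, eps @ eps)                            # forms sum to eps'eps
assert np.linalg.matrix_rank(H) + np.linalg.matrix_rank(M) == n  # 4 + 6 = 10
```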

Proof of Lemma. Assume first that n_1 + ... + n_k = n. Each A_j is symmetric; by the spectral decomposition, Q_j = ε'A_jε is a sum of n_j squares of linear combinations of ε (given by the eigenvectors of A_j with nonzero eigenvalue, scaled by the square roots of those eigenvalues). Put B for the n x n matrix collecting all n_1 + ... + n_k = n of these linear combinations, so that ε'ε = Q_1 + ... + Q_k = ε'B'Bε. This has to hold for all ε, hence B'B = I and B is regular; in particular B'B is positive definite. Put η = Bε; then η has independent coordinates and η_i ~ N(0, 1). Moreover, Q_j is the sum of the n_j squared coordinates of η belonging to A_j; hence Q_j ~ χ²(n_j), and since the blocks of coordinates are disjoint, Q_1, ..., Q_k are mutually independent.

Opposite direction of the proof. Assume that the quadratic forms Q_1, ..., Q_k are independent and Q_j ~ χ²(n_j). Then their sum has n_1 + ... + n_k degrees of freedom. On the left-hand side we have ε'ε, which is χ² with n degrees of freedom. Hence n_1 + ... + n_k = n.

DISTRIBUTION of THE ESTIMATOR of VARIANCE of RESIDUALS and of STUDENTIZED ESTIMATORS of THE REGRESSION COEFFICIENTS
Lemma.
Assumptions: Let ε_1, ..., ε_n be iid r.v.'s, ε_i ~ N(0, σ²). Let us recall that s² = RSS/(n - p).
Assertions: Then RSS/σ² = (n - p)·s²/σ² ~ χ²(n - p).

Proof of Lemma

Lemma.
Assumptions: Let ε_1, ..., ε_n be iid r.v.'s, ε_i ~ N(0, σ²), and let X'X be regular. Denote the OLS estimator by b = (X'X)⁻¹X'Y and put q_jj = [(X'X)⁻¹]_jj.
Assertions: Then b_j ~ N(β_j, σ²·q_jj), i.e. (b_j - β_j)/(σ·√q_jj) ~ N(0, 1).
Assumptions: Put T_j = (b_j - β_j)/(s·√q_jj), where s = √(RSS/(n - p)).
Assertions: Then T_j ~ t(n - p), i.e. T_j is distributed as Student with n - p degrees of freedom. This transformation (replacing the unknown σ by its estimate s) is called studentization.
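A sketch of studentization on made-up simulated data (b, s and q_jj computed exactly as in the Lemma; the true β is known here only because the data are simulated):

```python
import numpy as np

# Studentized statistics T_j = (b_j - beta_j) / (s * sqrt(q_jj)).
rng = np.random.default_rng(4)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 0.5])
Y = X @ beta + rng.normal(scale=0.3, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y                      # OLS estimate
e = Y - X @ b
s = np.sqrt(e @ e / (n - p))               # sqrt of RSS / (n - p)
T = (b - beta) / (s * np.sqrt(np.diag(XtX_inv)))
# Under the model, each T_j follows Student's t with n - p = 28 df.
```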

Proof of Lemma. Recalling that b ~ N(β, σ²(X'X)⁻¹), we conclude that (b_j - β_j)/(σ·√q_jj) ~ N(0, 1), where q_jj = [(X'X)⁻¹]_jj. Recalling that b is independent from s² and that (n - p)·s²/σ² ~ χ²(n - p), the proof follows from the definition of the Student distribution, which symbolically reads t(n - p) = N(0, 1)/√(χ²(n - p)/(n - p)).

Corollary.
Assumptions: Let ε_1, ..., ε_n be iid r.v.'s, ε_i ~ N(0, σ²), and let X'X be regular.
Assertions: Then ((b - β)'(X'X)(b - β)/p)/s² is Fisher-Snedecor distributed, F(p, n - p).
The proof is based on the employment of the spectral matrix decomposition of X'X, which shows that the numerator can be written as a sum of squares of normally distributed and independent r.v.'s.

What is to be learnt from this lecture for the exam?
The best unbiased quadratic estimator of the variance of disturbances.
Spectral decomposition of a matrix.
Distribution of quadratic forms: the Fisher-Cochran lemma.
Distribution of the unbiased estimator of variance.
Distribution of studentized estimators of regression coefficients.
All that you need is on