The Multivariate Normal Distribution, Part 2


The Multivariate Normal Distribution, Part 2 BMTRY 726 5/22/2018

More Properties of MVN

Last lecture we discussed:
- The form of the MVN distribution
- Contours of constant density, obtained by taking a slice of the MVN distribution at some set height
- Some of the properties of the MVN distribution:
  - the impact of linear combinations of X
  - partitions of X
  - conditions for independence of vectors in X

We will continue this discussion with some additional useful properties.

Conditional Distributions

Result 4.6: Suppose X ~ Np(μ, Σ) is partitioned as X = (X1', X2')' with corresponding partitions of μ and Σ, and Σ22 > 0. Then the conditional distribution of X1 given that X2 = x2 is a normal distribution:

X1 | X2 = x2 ~ N( μ1 + Σ12 Σ22^-1 (x2 - μ2), Σ11 - Σ12 Σ22^-1 Σ21 )

Note the covariance matrix does not depend on the value of x2.
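As a quick numeric sketch of Result 4.6, we can compute the conditional mean μ1 + Σ12 Σ22^-1 (x2 - μ2) and conditional covariance Σ11 - Σ12 Σ22^-1 Σ21 directly. The μ, Σ, and x2 values below are made up for illustration; they are not from the lecture's example.

```python
import numpy as np

# Hypothetical parameters: X1 = first two components, X2 = third component
mu = np.array([1.0, 2.0, 0.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])

mu1, mu2 = mu[:2], mu[2:]
S11, S12 = Sigma[:2, :2], Sigma[:2, 2:]
S21, S22 = Sigma[2:, :2], Sigma[2:, 2:]

x2 = np.array([1.0])  # observed value of X2

# Result 4.6: X1 | X2 = x2 ~ N(mu1 + S12 S22^-1 (x2 - mu2), S11 - S12 S22^-1 S21)
cond_mean = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
cond_cov = S11 - S12 @ np.linalg.solve(S22, S21)

print(cond_mean)  # depends on x2
print(cond_cov)   # does not depend on x2
```

Changing x2 shifts cond_mean but leaves cond_cov fixed, which is the point of the note above.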

Proof of Result 4.6


Example: Consider a trivariate normal vector X. Find the conditional distribution of the first and third components given the second.


Result 4.6 & Multiple Regression: Consider (Y, X')' jointly multivariate normal. The conditional distribution of Y | X = x is univariate normal with

E(Y | x) = μY + ΣYX ΣXX^-1 (x - μX) and Var(Y | x) = σYY - ΣYX ΣXX^-1 ΣXY

which has the form of the linear regression of Y on x.

Result 4.7: If X ~ Np(μ, Σ) and Σ is positive definite, then

(X - μ)' Σ^-1 (X - μ) ~ χ²p

Proof:

Result 4.7, proof cont'd:
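Result 4.7 says the squared Mahalanobis distance (X - μ)' Σ^-1 (X - μ) is chi-squared with p degrees of freedom, so its mean should be p and its variance 2p. A minimal simulation sketch (μ and Σ below are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3
mu = np.array([1.0, -1.0, 0.5])           # hypothetical mean
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)           # a positive definite covariance

X = rng.multivariate_normal(mu, Sigma, size=100_000)
d = X - mu
# Row-wise quadratic form (X - mu)' Sigma^-1 (X - mu)
md = np.einsum('ij,ij->i', d @ np.linalg.inv(Sigma), d)

print(md.mean())  # should be near p = 3  (mean of chi^2_p)
print(md.var())   # should be near 2p = 6 (variance of chi^2_p)
```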

Result 4.8: If X1, X2, …, Xn are mutually independent with Xj ~ Np(μj, Σ), and c1, c2, …, cn are n constants, then

V = c1X1 + c2X2 + … + cnXn ~ Np( sum_j cj μj, (sum_j cj²) Σ )

Additionally, if we have two r × p matrices of constants (say A and B), we can make analogous statements about the joint distribution of linear combinations of the form AXj and BXj.
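Result 4.8 can be checked empirically: with independent Xj ~ Np(μ, Σ) (taking a common μ for simplicity), V = sum_j cj Xj should have mean (sum_j cj) μ and covariance (sum_j cj²) Σ. The μ, Σ, and cj values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.0, 2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -0.5, 2.0])  # hypothetical constants c1, c2, c3

# n = 3 independent MVN draws per replicate; V = sum_j c_j X_j
X = rng.multivariate_normal(mu, Sigma, size=(200_000, 3))
V = np.einsum('j,njp->np', c, X)

print(V.mean(axis=0))           # approx (sum c_j) * mu = 2.5 * mu = [0, 5]
print(np.cov(V.T) / (c @ c))    # approx Sigma, since Cov(V) = (sum c_j^2) * Sigma
```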

Sample Data: Let's say that X1, X2, …, Xn are i.i.d. random vectors. If the data vectors are sampled from a MVN distribution, then Xj ~ Np(μ, Σ) for j = 1, …, n.

Multivariate Normal Likelihood: We can also look at the joint likelihood of our random sample:

L(μ, Σ) = prod_j (2π)^(-p/2) |Σ|^(-1/2) exp( -(xj - μ)' Σ^-1 (xj - μ)/2 )
        = (2π)^(-np/2) |Σ|^(-n/2) exp( -sum_j (xj - μ)' Σ^-1 (xj - μ)/2 )
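A sanity-check sketch of the joint likelihood: the log-likelihood is the sum of the individual MVN log densities, and it matches the closed-form expression written out from the density. The μ and Σ values are arbitrary.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
mu = np.array([1.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
X = rng.multivariate_normal(mu, Sigma, size=50)  # random sample x1..x50

# Joint log-likelihood = sum of individual log densities
loglik = multivariate_normal(mean=mu, cov=Sigma).logpdf(X).sum()

# Same thing written out from the density formula
p, n = 2, 50
d = X - mu
quad = np.einsum('ij,ij->i', d @ np.linalg.inv(Sigma), d).sum()
loglik2 = (-0.5 * n * p * np.log(2 * np.pi)
           - 0.5 * n * np.log(np.linalg.det(Sigma))
           - 0.5 * quad)

print(np.isclose(loglik, loglik2))
```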

Some needed Results: (1) Given A > 0 with eigenvalues λ1, …, λp of A: (a) (b) (c) (2) From (c) we can show that:

Some needed Results (2) Proof that:


Some needed Results: (1) Given A > 0 with eigenvalues λ1, …, λp of A: (a) (b) (c) (2) From (c) we can show that: (3) Given Σ (p × p) > 0, B (p × p) > 0, and scalar b > 0,

(1/|Σ|^b) exp( -tr(Σ^-1 B)/2 ) ≤ (1/|B|^b) (2b)^(pb) e^(-pb)

for all positive definite Σ, with equality when Σ = (1/(2b)) B.

MLEs for μ and Σ: Maximizing the likelihood over μ and Σ gives

μ-hat = X-bar = (1/n) sum_j Xj and Σ-hat = (1/n) sum_j (Xj - X-bar)(Xj - X-bar)'


A Few Notes About the MLE for the Variance: As in the univariate setting, the MLE for the variance matrix is biased: E(Σ-hat) = ((n-1)/n) Σ. Thus we generally use an alternative to the MLE, the unbiased sample covariance S = (n/(n-1)) Σ-hat.
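The bias is easy to see by simulation: averaging the MLE (which divides by n) over many small samples lands near ((n-1)/n) Σ, while the sample covariance S (which divides by n-1) lands near Σ. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])
n, reps = 5, 50_000

X = rng.multivariate_normal(mu, Sigma, size=(reps, n))
xbar = X.mean(axis=1, keepdims=True)
d = X - xbar
SS = np.einsum('rni,rnj->rij', d, d)    # sum_j (Xj - Xbar)(Xj - Xbar)' per replicate

Sigma_hat = (SS / n).mean(axis=0)       # average MLE:        divides by n
S_bar = (SS / (n - 1)).mean(axis=0)     # average sample cov: divides by n - 1

print(Sigma_hat)  # approx ((n-1)/n) * Sigma = 0.8 * Sigma  -> biased low
print(S_bar)      # approx Sigma                            -> unbiased
```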

Sampling Distributions: So we've discussed that we can estimate the mean vector, μ, and the covariance matrix, Σ, using X-bar and S. But we need to understand how these estimators are distributed.

Sample Mean Vector: We can compute a sample mean for X1, X2, …, Xn:

X-bar = (1/n) sum_j Xj

Sample Mean Vector: Now we can estimate the mean of our sample. But what about the properties of X-bar?
- It is an unbiased estimate of the mean: E(X-bar) = μ
- It is a sufficient statistic
- Also, the sampling distribution is X-bar ~ Np(μ, Σ/n)
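A small sketch of the sampling distribution X-bar ~ Np(μ, Σ/n): simulating many samples, the sample means average to μ and their covariance is Σ/n. The μ and Σ values are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([2.0, -1.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
n, reps = 10, 100_000

X = rng.multivariate_normal(mu, Sigma, size=(reps, n))
xbars = X.mean(axis=1)           # one sample mean vector per replicate

print(xbars.mean(axis=0))        # approx mu      (X-bar is unbiased)
print(np.cov(xbars.T) * n)       # approx Sigma   (Cov(X-bar) = Sigma / n)
```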


Sample Covariance: And the sample covariance for X1, X2, …, Xn is

S = (1/(n-1)) sum_j (Xj - X-bar)(Xj - X-bar)'

whose diagonal elements are the sample variances and whose off-diagonal elements are the sample covariances.

Sample Covariance Matrix: So we can also estimate the variance of our sample. And like X-bar, S also has some nice properties:
- It is an unbiased estimate of the variance: E(S) = Σ
- It is also a sufficient statistic
- It is also independent of X-bar
But what about the sampling distribution of S?

Wishart Distribution: Given Z1, Z2, …, Zn i.i.d. Np(0, Σ), the distribution of A = sum_j Zj Zj' is called a Wishart distribution with n degrees of freedom, written Wn(A | Σ). It follows that (n-1)S = sum_j (Xj - X-bar)(Xj - X-bar)' has a Wishart distribution with n - 1 degrees of freedom. The density function is

wn(A | Σ) = |A|^((n-p-1)/2) exp( -tr(A Σ^-1)/2 ) / ( 2^(np/2) |Σ|^(n/2) Γp(n/2) )

where A and Σ are positive definite and Γp is the multivariate gamma function.
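As a sketch, scipy exposes the Wishart distribution directly, and draws of (n-1)S computed from MVN samples should behave like Wishart draws with n - 1 degrees of freedom and scale Σ; in particular both should have mean (n-1)Σ. The Σ and n below are arbitrary.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
n = 10                                    # sample size

# Direct Wishart draws with n - 1 degrees of freedom and scale Sigma
W_samples = wishart.rvs(df=n - 1, scale=Sigma, size=50_000, random_state=rng)
print(W_samples.mean(axis=0))             # approx (n-1) * Sigma

# (n-1)S computed from MVN samples should match
X = rng.multivariate_normal(np.zeros(2), Sigma, size=(100_000, n))
d = X - X.mean(axis=1, keepdims=True)
A = np.einsum('rni,rnj->rij', d, d)       # (n-1)S for each replicate
print(A.mean(axis=0))                     # also approx (n-1) * Sigma
```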

Wishart cont'd: The Wishart distribution is the multivariate analog of the central chi-squared distribution.
- If A1 ~ Wm(A1 | Σ) and A2 ~ Wn(A2 | Σ) are independent, then A1 + A2 ~ Wm+n(A1 + A2 | Σ)
- If A ~ Wn(A | Σ), then CAC' is distributed Wn(CAC' | CΣC')
- The distribution of the (i, i) element of A is σii χ²n

Large Sample Behavior: Let X1, X2, …, Xn be a random sample from a population with mean μ and covariance Σ (not necessarily normally distributed). Then X-bar and S are consistent estimators for μ and Σ. This means X-bar converges to μ and S converges to Σ in probability as n goes to infinity.

Large Sample Behavior: If we have a random sample X1, X2, …, Xn from a population with mean μ and covariance Σ, we can apply the multivariate central limit theorem as well. The multivariate CLT says

sqrt(n) (X-bar - μ) converges in distribution to Np(0, Σ) as n goes to infinity

so for n large relative to p, X-bar is approximately Np(μ, Σ/n).
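The multivariate CLT can be illustrated with a deliberately non-normal population. The construction below (correlated sums of exponentials) is made up for illustration: it has μ = (2, 2) and Σ = [[2, 1], [1, 2]], and sqrt(n)(X-bar - μ) should look like Np(0, Σ).

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 100, 20_000

# A non-normal population: X = (E1 + E3, E2 + E3) with Ei i.i.d. Exp(1)
# => mu = (2, 2), Sigma = [[2, 1], [1, 2]]
E = rng.exponential(size=(reps, n, 3))
X = np.stack([E[..., 0] + E[..., 2], E[..., 1] + E[..., 2]], axis=-1)

mu = np.array([2.0, 2.0])
Z = np.sqrt(n) * (X.mean(axis=1) - mu)   # sqrt(n)(X-bar - mu), one per replicate

print(Z.mean(axis=0))    # approx 0
print(np.cov(Z.T))       # approx Sigma = [[2, 1], [1, 2]]
```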

Next Time Checking Normality How can we check MVN and what do we do if our data don’t appear MVN? SAS and R Begin our discussion of statistical inference for MV vectors