1
Inference for the mean vector
2
Univariate Inference. Let $x_1, x_2, \ldots, x_n$ denote a sample of $n$ from the normal distribution with mean $\mu$ and variance $\sigma^2$. Suppose we want to test $H_0: \mu = \mu_0$ vs $H_A: \mu \neq \mu_0$. The appropriate test is the t test. The test statistic: $t = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$. Reject $H_0$ if $|t| > t_{\alpha/2}$ (with $n - 1$ degrees of freedom).
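To make the univariate test concrete, here is a minimal sketch in Python (the sample values are made up for illustration; NumPy and SciPy are assumed to be available):

```python
import numpy as np
from scipy import stats

# Hypothetical sample and hypothesized mean (for illustration only)
x = np.array([62.0, 58.5, 65.0, 61.2, 59.8, 63.4, 60.1, 64.2])
mu0 = 60.0
alpha = 0.05

n = len(x)
t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

# Reject H0: mu = mu0 if |t| exceeds the two-sided critical value
print(f"t = {t_stat:.3f}, critical value = {t_crit:.3f}, reject = {abs(t_stat) > t_crit}")
```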
3
The Multivariate Test. Let $x_1, \ldots, x_n$ denote a sample of $n$ from the $p$-variate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$. Suppose we want to test $H_0: \mu = \mu_0$ vs $H_A: \mu \neq \mu_0$.
4
Example: For $n = 10$ students we measure scores on a Math proficiency test ($x_1$), a Science proficiency test ($x_2$), an English proficiency test ($x_3$), and a French proficiency test ($x_4$). The average score for each of the tests in previous years was 60. Has this changed?
5
The data
6
Summary Statistics: the sample mean vector $\bar{x}$ and the sample covariance matrix $S$.
7
Roy's Union-Intersection Principle. This is a general procedure for developing a multivariate test from the corresponding univariate test.
1. Convert the multivariate problem to a univariate problem by considering an arbitrary linear combination $u = a'x$ of the observation vector.
8
2. Perform the test for the arbitrary linear combination $u = a'x$ of the observation vector.
3. Repeat this for all possible choices of $a$.
4. Reject the multivariate hypothesis if $H_0$ is rejected for any one of the choices for $a$.
5. Accept the multivariate hypothesis if $H_0$ is accepted for all of the choices for $a$.
6. Set the type I error rate for the individual tests so that the type I error rate for the multivariate test is $\alpha$.
9
Application of Roy's principle to the following situation: Let $x_1, \ldots, x_n$ denote a sample of $n$ from the $p$-variate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, and suppose we want to test $H_0: \mu = \mu_0$ vs $H_A: \mu \neq \mu_0$. For a fixed vector $a$, let $u_i = a'x_i$; then $u_1, \ldots, u_n$ is a sample of $n$ from the normal distribution with mean $a'\mu$ and variance $a'\Sigma a$.
10
To test $H_0: a'\mu = a'\mu_0$ we would use the test statistic: $t_a = \dfrac{\bar{u} - a'\mu_0}{s_u/\sqrt{n}} = \dfrac{\sqrt{n}\, a'(\bar{x} - \mu_0)}{\sqrt{a'Sa}}$,
11
where $\bar{u} = a'\bar{x}$ and $s_u^2 = a'Sa$ are the sample mean and variance of $u_1, \ldots, u_n$.
12
Thus $t_a^2 = \dfrac{n\,[a'(\bar{x} - \mu_0)]^2}{a'Sa}$. We will reject $H_0: a'\mu = a'\mu_0$ if $t_a^2 > K$, for a suitable critical value $K$.
13
Using Roy's Union-Intersection principle: we reject $H_0: \mu = \mu_0$ if $t_a^2 > K$ for at least one choice of $a$, and we accept $H_0: \mu = \mu_0$ if $t_a^2 \le K$ for every choice of $a$.
14
That is, we reject $H_0$ if $\max_a t_a^2 > K$, and we accept $H_0$ if $\max_a t_a^2 \le K$.
15
Consider the problem of finding: $\max_a t_a^2$, where $t_a^2 = \dfrac{n\,[a'(\bar{x} - \mu_0)]^2}{a'Sa}$.
16
The maximum is attained at $a \propto S^{-1}(\bar{x} - \mu_0)$; thus $\max_a t_a^2 = n\,(\bar{x} - \mu_0)' S^{-1} (\bar{x} - \mu_0)$.
17
Thus Roy's Union-Intersection principle states: we reject $H_0: \mu = \mu_0$ if $T^2 = n\,(\bar{x} - \mu_0)' S^{-1} (\bar{x} - \mu_0) > K$, and we accept $H_0$ otherwise. The quantity $T^2 = n\,(\bar{x} - \mu_0)' S^{-1} (\bar{x} - \mu_0)$ is called Hotelling's $T^2$ statistic.
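As a numerical illustration of the union-intersection argument (a sketch only, using simulated data), one can check that $t_a^2$ never exceeds $T^2$ over randomly chosen directions $a$, and that the direction $a = S^{-1}(\bar{x} - \mu_0)$ attains the maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 4
X = rng.normal(loc=60, scale=5, size=(n, p))   # simulated scores, illustrative only
mu0 = np.full(p, 60.0)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                    # sample covariance matrix (divisor n - 1)
d = xbar - mu0

T2 = n * d @ np.linalg.solve(S, d)             # Hotelling's T^2

def t2_a(a):
    """Squared univariate t statistic for the linear combination a'x."""
    return n * (a @ d) ** 2 / (a @ S @ a)

# t_a^2 for random directions never exceeds T^2 ...
rand_max = max(t2_a(rng.normal(size=p)) for _ in range(10000))
# ... and the direction a = S^{-1}(xbar - mu0) attains the maximum
a_star = np.linalg.solve(S, d)
print(T2, rand_max, t2_a(a_star))
```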
18
Choosing the critical value for Hotelling's $T^2$ statistic: since we reject $H_0$ for large values of $T^2$, we need to find the sampling distribution of $T^2$ when $H_0$ is true. It turns out that if $H_0$ is true then $F = \dfrac{n-p}{(n-1)p}\, T^2$ has an $F$ distribution with $\nu_1 = p$ and $\nu_2 = n - p$ degrees of freedom.
19
Thus Hotelling's $T^2$ test: we reject $H_0: \mu = \mu_0$ if $T^2 > \dfrac{(n-1)p}{n-p}\, F_\alpha(p, n-p)$, or equivalently if $F = \dfrac{n-p}{(n-1)p}\, T^2 > F_\alpha(p, n-p)$.
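A minimal sketch of the complete one-sample test (simulated data standing in for a real sample; SciPy is assumed for the F quantiles):

```python
import numpy as np
from scipy import stats

def hotelling_one_sample(X, mu0, alpha=0.05):
    """One-sample Hotelling T^2 test of H0: mu = mu0."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    d = xbar - mu0
    T2 = n * d @ np.linalg.solve(S, d)
    F = (n - p) / ((n - 1) * p) * T2            # ~ F(p, n - p) under H0
    p_value = stats.f.sf(F, p, n - p)
    T2_crit = (n - 1) * p / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
    return T2, T2_crit, p_value

rng = np.random.default_rng(1)
X = rng.normal(loc=60, scale=5, size=(10, 4))   # placeholder for a 10 x 4 data matrix
T2, T2_crit, p_value = hotelling_one_sample(X, np.full(4, 60.0))
print(f"T^2 = {T2:.2f}, critical value = {T2_crit:.2f}, p = {p_value:.3f}")
```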
20
Another derivation of Hotelling's $T^2$ statistic. Another method of developing statistical tests is the likelihood ratio method. Suppose that the data vector $x$ has joint density $f(x; \theta)$, and that the parameter vector $\theta$ belongs to the set $\Omega$. Let $\omega$ denote a subset of $\Omega$. Finally, we want to test $H_0: \theta \in \omega$ vs $H_A: \theta \notin \omega$.
21
The likelihood ratio test rejects $H_0$ if $\lambda = \dfrac{\max_{\theta \in \omega} L(\theta)}{\max_{\theta \in \Omega} L(\theta)} \le \lambda_\alpha$, where $L(\theta) = f(x; \theta)$ is the likelihood function and $\lambda_\alpha$ is chosen so that the test has size $\alpha$.
22
The situation: Let $x_1, \ldots, x_n$ denote a sample of $n$ from the $p$-variate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$. Suppose we want to test $H_0: \mu = \mu_0$ vs $H_A: \mu \neq \mu_0$.
23
The likelihood function is: $L(\mu, \Sigma) = \prod_{i=1}^{n} (2\pi)^{-p/2}\, |\Sigma|^{-1/2} \exp\!\left[-\tfrac{1}{2}(x_i - \mu)'\Sigma^{-1}(x_i - \mu)\right]$, and the log-likelihood function is: $l(\mu, \Sigma) = -\tfrac{np}{2}\ln(2\pi) - \tfrac{n}{2}\ln|\Sigma| - \tfrac{1}{2}\sum_{i=1}^{n}(x_i - \mu)'\Sigma^{-1}(x_i - \mu)$.
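As a quick sanity check of the log-likelihood expression (a sketch with simulated data and arbitrary parameter values), the formula can be compared against scipy.stats.multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n, p = 10, 3
X = rng.normal(size=(n, p))
mu = np.zeros(p)
Sigma = np.eye(p)

# Log-likelihood from the formula
diff = X - mu
quad = np.einsum('ij,jk,ik->', diff, np.linalg.inv(Sigma), diff)
loglik_formula = (-0.5 * n * p * np.log(2 * np.pi)
                  - 0.5 * n * np.log(np.linalg.det(Sigma))
                  - 0.5 * quad)

# Log-likelihood from SciPy (sum of the per-observation log densities)
loglik_scipy = multivariate_normal(mean=mu, cov=Sigma).logpdf(X).sum()

print(loglik_formula, loglik_scipy)   # the two values agree
```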
24
The maximum likelihood estimators of $\mu$ and $\Sigma$ are $\hat{\mu} = \bar{x}$ and $\hat{\Sigma} = \dfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})'$.
25
The maximum likelihood estimators of $\mu$ and $\Sigma$ when $H_0: \mu = \mu_0$ is true are: $\hat{\mu} = \mu_0$ and $\hat{\Sigma}_0 = \dfrac{1}{n}\sum_{i=1}^{n}(x_i - \mu_0)(x_i - \mu_0)'$.
26
The likelihood function is evaluated at the maximum likelihood estimators; now $\sum_{i=1}^{n}(x_i - \bar{x})'\hat{\Sigma}^{-1}(x_i - \bar{x}) = \operatorname{tr}\!\left[\hat{\Sigma}^{-1}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})'\right] = \operatorname{tr}\!\left[\hat{\Sigma}^{-1}\, n\hat{\Sigma}\right] = np$.
27
Thus $\max_{\Omega} L = L(\hat{\mu}, \hat{\Sigma}) = (2\pi)^{-np/2}\, |\hat{\Sigma}|^{-n/2}\, e^{-np/2}$, and similarly $\max_{\omega} L = L(\mu_0, \hat{\Sigma}_0) = (2\pi)^{-np/2}\, |\hat{\Sigma}_0|^{-n/2}\, e^{-np/2}$,
28
and therefore the likelihood ratio is $\lambda = \dfrac{\max_{\omega} L}{\max_{\Omega} L} = \left(\dfrac{|\hat{\Sigma}|}{|\hat{\Sigma}_0|}\right)^{n/2}$.
29
Note: Let $A = \sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})' = (n-1)S$, so that $\hat{\Sigma} = A/n$,
30
and $n\hat{\Sigma}_0 = \sum_{i=1}^{n}(x_i - \mu_0)(x_i - \mu_0)'$. Now $\sum_{i=1}^{n}(x_i - \mu_0)(x_i - \mu_0)' = \sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})' + n(\bar{x} - \mu_0)(\bar{x} - \mu_0)' = A + n(\bar{x} - \mu_0)(\bar{x} - \mu_0)'$.
31
Also $\dfrac{|\hat{\Sigma}_0|}{|\hat{\Sigma}|} = \dfrac{|A + n(\bar{x} - \mu_0)(\bar{x} - \mu_0)'|}{|A|} = 1 + n(\bar{x} - \mu_0)'A^{-1}(\bar{x} - \mu_0)$, using the identity $|A + uv'| = |A|\,(1 + v'A^{-1}u)$.
32
Thus $\lambda^{2/n} = \dfrac{|\hat{\Sigma}|}{|\hat{\Sigma}_0|} = \dfrac{1}{1 + n(\bar{x} - \mu_0)'A^{-1}(\bar{x} - \mu_0)}$,
33
using $A = (n-1)S$, this becomes $\lambda^{2/n} = \dfrac{1}{1 + T^2/(n-1)}$, where $T^2 = n(\bar{x} - \mu_0)'S^{-1}(\bar{x} - \mu_0)$.
34
Then the likelihood ratio test is to reject $H_0$ if $\lambda < \lambda_\alpha$, i.e. if $\lambda^{2/n} = \dfrac{1}{1 + T^2/(n-1)} < c$, i.e. if $T^2$ is large. Thus this is the same as Hotelling's $T^2$ test if the critical values are chosen so that both tests have size $\alpha$.
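A small numerical check of this equivalence (a sketch with simulated data): compute $\lambda^{2/n} = |\hat{\Sigma}|/|\hat{\Sigma}_0|$ directly from the two maximum likelihood estimates and compare it with $(1 + T^2/(n-1))^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 12, 3
X = rng.normal(loc=1.0, scale=2.0, size=(n, p))
mu0 = np.zeros(p)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)
Sigma_hat  = (X - xbar).T @ (X - xbar) / n       # MLE of Sigma under the full model
Sigma_hat0 = (X - mu0).T @ (X - mu0) / n         # MLE of Sigma under H0: mu = mu0

lam_2n = np.linalg.det(Sigma_hat) / np.linalg.det(Sigma_hat0)   # lambda^(2/n)

d = xbar - mu0
T2 = n * d @ np.linalg.solve(S, d)

print(lam_2n, 1.0 / (1.0 + T2 / (n - 1)))        # the two values agree
```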
35
Example: For $n = 10$ students we measure scores on a Math proficiency test ($x_1$), a Science proficiency test ($x_2$), an English proficiency test ($x_3$), and a French proficiency test ($x_4$). The average score for each of the tests in previous years was 60. Has this changed?
36
The data
37
Summary Statistics
38
Inference for the mean vector
39
Univariate Inference. Let $x_1, x_2, \ldots, x_n$ denote a sample of $n$ from the normal distribution with mean $\mu$ and variance $\sigma^2$. Suppose we want to test $H_0: \mu = \mu_0$ vs $H_A: \mu \neq \mu_0$. The appropriate test is the t test. The test statistic: $t = \dfrac{\bar{x} - \mu_0}{s/\sqrt{n}}$. Reject $H_0$ if $|t| > t_{\alpha/2}$ (with $n - 1$ degrees of freedom).
40
Hotelling's $T^2$ statistic and test: $T^2 = n(\bar{x} - \mu_0)'S^{-1}(\bar{x} - \mu_0)$; we reject $H_0: \mu = \mu_0$ if $T^2 > \dfrac{(n-1)p}{n-p}\, F_\alpha(p, n-p)$.
41
Example: For $n = 10$ students we measure scores on a Math proficiency test ($x_1$), a Science proficiency test ($x_2$), an English proficiency test ($x_3$), and a French proficiency test ($x_4$). The average score for each of the tests in previous years was 60. Has this changed?
42
The data
43
Summary Statistics
44
The two sample problem
45
Univariate Inference. Let $x_1, x_2, \ldots, x_n$ denote a sample of $n$ from the normal distribution with mean $\mu_x$ and variance $\sigma^2$. Let $y_1, y_2, \ldots, y_m$ denote a sample of $m$ from the normal distribution with mean $\mu_y$ and variance $\sigma^2$. Suppose we want to test $H_0: \mu_x = \mu_y$ vs $H_A: \mu_x \neq \mu_y$.
46
The appropriate test is the t test. The test statistic: $t = \dfrac{\bar{x} - \bar{y}}{s_p\sqrt{\tfrac{1}{n} + \tfrac{1}{m}}}$, where $s_p^2 = \dfrac{(n-1)s_x^2 + (m-1)s_y^2}{n+m-2}$ is the pooled variance. Reject $H_0$ if $|t| > t_{\alpha/2}$, d.f. = $n + m - 2$.
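A minimal sketch of the pooled two-sample t test (the sample values are made up for illustration; SciPy is assumed):

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3])   # illustrative sample from population X
y = np.array([4.5, 4.7, 4.4, 4.9, 4.6])         # illustrative sample from population Y
alpha = 0.05

n, m = len(x), len(y)
sp2 = ((n - 1) * x.var(ddof=1) + (m - 1) * y.var(ddof=1)) / (n + m - 2)   # pooled variance
t_stat = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / n + 1 / m))
t_crit = stats.t.ppf(1 - alpha / 2, df=n + m - 2)

print(f"t = {t_stat:.3f}, critical value = {t_crit:.3f}, reject = {abs(t_stat) > t_crit}")
# scipy.stats.ttest_ind(x, y, equal_var=True) gives the same statistic
```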
47
The Multivariate Test. Let $x_1, \ldots, x_n$ denote a sample of $n$ from the $p$-variate normal distribution with mean vector $\mu_1$ and covariance matrix $\Sigma$, and let $y_1, \ldots, y_m$ denote a sample of $m$ from the $p$-variate normal distribution with mean vector $\mu_2$ and the same covariance matrix $\Sigma$. Suppose we want to test $H_0: \mu_1 = \mu_2$ vs $H_A: \mu_1 \neq \mu_2$.
48
Hotelling's $T^2$ statistic for the two-sample problem: $T^2 = \dfrac{nm}{n+m}\,(\bar{x} - \bar{y})'\, S_{pooled}^{-1}\,(\bar{x} - \bar{y})$, where $S_{pooled} = \dfrac{(n-1)S_x + (m-1)S_y}{n+m-2}$. If $H_0$ is true, then $F = \dfrac{n+m-p-1}{(n+m-2)p}\, T^2$ has an $F$ distribution with $\nu_1 = p$ and $\nu_2 = n+m-p-1$ degrees of freedom.
49
Thus Hotelling's $T^2$ test: we reject $H_0: \mu_1 = \mu_2$ if $T^2 > \dfrac{(n+m-2)p}{n+m-p-1}\, F_\alpha(p, n+m-p-1)$.
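A minimal sketch of the two-sample test (simulated data in place of real samples):

```python
import numpy as np
from scipy import stats

def hotelling_two_sample(X, Y, alpha=0.05):
    """Two-sample Hotelling T^2 test of H0: mu_1 = mu_2 (equal covariances assumed)."""
    n, p = X.shape
    m = Y.shape[0]
    S_pooled = ((n - 1) * np.cov(X, rowvar=False)
                + (m - 1) * np.cov(Y, rowvar=False)) / (n + m - 2)
    d = X.mean(axis=0) - Y.mean(axis=0)
    T2 = (n * m) / (n + m) * d @ np.linalg.solve(S_pooled, d)
    F = (n + m - p - 1) / ((n + m - 2) * p) * T2     # ~ F(p, n + m - p - 1) under H0
    p_value = stats.f.sf(F, p, n + m - p - 1)
    T2_crit = (n + m - 2) * p / (n + m - p - 1) * stats.f.ppf(1 - alpha, p, n + m - p - 1)
    return T2, T2_crit, p_value

rng = np.random.default_rng(4)
X = rng.normal(loc=0.0, size=(21, 4))    # simulated sample from population 1
Y = rng.normal(loc=0.5, size=(25, 4))    # simulated sample from population 2
print(hotelling_two_sample(X, Y))
```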
50
Example 2. Annual financial data are collected for firms approximately 2 years prior to bankruptcy and for financially sound firms at about the same point in time. The data on the four variables $x_1$ = CF/TD = (cash flow)/(total debt), $x_2$ = NI/TA = (net income)/(total assets), $x_3$ = CA/CL = (current assets)/(current liabilities), and $x_4$ = CA/NS = (current assets)/(net sales) are given in the following table.
51
The data are given in the following table:
52
Hotelling's $T^2$ test: a graphical explanation
53
Hotelling's $T^2$ statistic for the two-sample problem,
54
$T^2 = \dfrac{nm}{n+m}\,(\bar{x} - \bar{y})'\, S_{pooled}^{-1}\,(\bar{x} - \bar{y})$, is the test statistic for testing: $H_0: \mu_1 = \mu_2$ vs $H_A: \mu_1 \neq \mu_2$.
55
Figure: Hotelling's $T^2$ test, showing two populations A and B in the $(X_1, X_2)$ plane.
56
Figure: univariate test for $X_1$, showing populations A and B in the $(X_1, X_2)$ plane.
57
Figure: univariate test for $X_2$, showing populations A and B in the $(X_1, X_2)$ plane.
58
Figure: univariate test for $a_1 X_1 + a_2 X_2$, showing populations A and B in the $(X_1, X_2)$ plane.
59
Mahalanobis distance: a graphical explanation
60
Euclidean distance: $d_E(x, y) = \sqrt{(x - y)'(x - y)} = \sqrt{\sum_{i=1}^{p}(x_i - y_i)^2}$.
61
Mahalanobis distance: $d_\Sigma(x, y) = \sqrt{(x - y)'\,\Sigma^{-1}\,(x - y)}$, where $\Sigma$ is a covariance matrix.
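A short sketch contrasting the two distances (all numbers are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])
Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])          # illustrative covariance matrix

diff = x - y
d_euclid = np.sqrt(diff @ diff)
d_mahal = np.sqrt(diff @ np.linalg.solve(Sigma, diff))

print(d_euclid, d_mahal)
# scipy.spatial.distance.mahalanobis(x, y, np.linalg.inv(Sigma)) gives the same value
```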
62
Hotelling's $T^2$ statistic for the two-sample problem can be written $T^2 = \dfrac{nm}{n+m}\, d_{S_{pooled}}^2(\bar{x}, \bar{y})$, i.e. as the squared Mahalanobis distance between the two sample mean vectors (computed with $\Sigma = S_{pooled}$), scaled by $\dfrac{nm}{n+m}$.
63
Figure: Case I, two populations A and B in the $(X_1, X_2)$ plane.
64
Figure: Case II, two populations A and B in the $(X_1, X_2)$ plane.
65
In Case I the Mahalanobis distance between the mean vectors is larger than in Case II, even though the Euclidean distance is smaller. In Case I there is more separation between the two bivariate normal distributions.
66
Discrimination and Classification
67
Discrimination. Situation: We have two or more populations $\pi_1$, $\pi_2$, etc. (possibly $p$-variate normal). The populations are known (or we have data from each population). We have data for a new case (population unknown) and we want to identify the population to which the new case belongs.
68
Examples
69
The Basic Problem. Suppose that the data from a new case, $x_1, \ldots, x_p$, has joint density function either $\pi_1$: $f(x_1, \ldots, x_p)$ or $\pi_2$: $g(x_1, \ldots, x_p)$. We want to make the decision $D_1$: classify the case in $\pi_1$ ($f$ is the correct distribution), or $D_2$: classify the case in $\pi_2$ ($g$ is the correct distribution).
70
The Two Types of Errors.
1. Misclassifying the case in $\pi_1$ when it actually lies in $\pi_2$. Let P[1|2] = P[$D_1$ | $\pi_2$] = probability of this type of error.
2. Misclassifying the case in $\pi_2$ when it actually lies in $\pi_1$. Let P[2|1] = P[$D_2$ | $\pi_1$] = probability of this type of error.
This is similar to Type I and Type II errors in hypothesis testing.
71
A discrimination scheme is defined by splitting $p$-dimensional space into two regions. Note:
1. $C_1$ = the region where we make the decision $D_1$ (the decision to classify the case in $\pi_1$).
2. $C_2$ = the region where we make the decision $D_2$ (the decision to classify the case in $\pi_2$).
72
There can be several approaches to determining the regions $C_1$ and $C_2$, all concerned with taking into account the probabilities of misclassification P[2|1] and P[1|2].
1. Set up the regions $C_1$ and $C_2$ so that one of the probabilities of misclassification, say P[2|1], is at some low acceptable value $\alpha$, and accept whatever level the other probability of misclassification, P[1|2] = $\beta$, takes.
73
2. Set up the regions $C_1$ and $C_2$ so that the total probability of misclassification, P[Misclassification] = P[1] P[2|1] + P[2] P[1|2], is minimized, where P[1] = P[the case belongs to $\pi_1$] and P[2] = P[the case belongs to $\pi_2$].
74
3. Set up the regions $C_1$ and $C_2$ so that the total expected cost of misclassification, E[Cost of Misclassification] = $c_{2|1}$ P[1] P[2|1] + $c_{1|2}$ P[2] P[1|2], is minimized, where P[1] = P[the case belongs to $\pi_1$], P[2] = P[the case belongs to $\pi_2$], $c_{2|1}$ = the cost of misclassifying the case in $\pi_2$ when the case belongs to $\pi_1$, and $c_{1|2}$ = the cost of misclassifying the case in $\pi_1$ when the case belongs to $\pi_2$.
75
4. Set up the regions $C_1$ and $C_2$ so that the two types of error are equal: P[2|1] = P[1|2].
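To make criteria 2 and 3 concrete, here is a hedged sketch for two univariate normal populations where the classification regions are defined by a single cutoff $c$ (all distributions, priors, and costs are illustrative choices, not values from the lecture):

```python
import numpy as np
from scipy import stats

# Illustrative populations: pi_1 ~ N(0, 1), pi_2 ~ N(2, 1); classify into pi_1 if x < c
mu1, mu2, sigma = 0.0, 2.0, 1.0
p1, p2 = 0.7, 0.3                  # prior probabilities P[1], P[2]
c21, c12 = 1.0, 5.0                # misclassification costs c(2|1), c(1|2); set both to 1 for criterion 2

def expected_cost(c):
    P2_given_1 = stats.norm.sf(c, mu1, sigma)   # P[2|1]: case from pi_1 falls in C2 = {x >= c}
    P1_given_2 = stats.norm.cdf(c, mu2, sigma)  # P[1|2]: case from pi_2 falls in C1 = {x < c}
    return c21 * p1 * P2_given_1 + c12 * p2 * P1_given_2

grid = np.linspace(-2, 4, 2001)
costs = np.array([expected_cost(c) for c in grid])
best = grid[np.argmin(costs)]
print(f"cutoff minimizing expected cost: {best:.3f}, cost = {costs.min():.4f}")
```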
76
Computer security example: $\pi_1$ = valid users, $\pi_2$ = imposters. P[1] = P[valid user]; P[2] = P[imposter]. P[2|1] = P[identifying a valid user as an imposter]; P[1|2] = P[identifying an imposter as a valid user]. $c_{2|1}$ = the cost of identifying the user as an imposter when the user is a valid user; $c_{1|2}$ = the cost of identifying the user as a valid user when the user is an imposter.
77
This problem can be viewed as a hypothesis testing problem: $H_0$: $\pi_1$ is the correct population vs $H_A$: $\pi_2$ is the correct population. Then P[2|1] = $\alpha$, P[1|2] = $\beta$, and Power = $1 - \beta$.
78
The Neyman-Pearson Lemma. Suppose that the data $x_1, \ldots, x_n$ has joint density function $f(x_1, \ldots, x_n; \theta)$, where $\theta$ is either $\theta_1$ or $\theta_2$. Let $g(x_1, \ldots, x_n) = f(x_1, \ldots, x_n; \theta_1)$ and $h(x_1, \ldots, x_n) = f(x_1, \ldots, x_n; \theta_2)$. We want to test $H_0$: $\theta = \theta_1$ ($g$ is the correct distribution) against $H_A$: $\theta = \theta_2$ ($h$ is the correct distribution).
79
The Neyman-Pearson Lemma states that the Uniformly Most Powerful (UMP) test of size $\alpha$ is to reject $H_0$ if $\dfrac{h(x_1, \ldots, x_n)}{g(x_1, \ldots, x_n)} \ge k$ and accept $H_0$ if $\dfrac{h(x_1, \ldots, x_n)}{g(x_1, \ldots, x_n)} < k$, where $k$ is chosen so that the test is of size $\alpha$.
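A small sketch of the lemma in action for two simple hypotheses about a normal mean (the particular values of $\theta_1$, $\theta_2$, $n$, and $\alpha$ are illustrative):

```python
import numpy as np
from scipy import stats

# Simple hypotheses about the mean of N(theta, 1), based on n observations
theta1, theta2, n, alpha = 0.0, 0.5, 25, 0.05

# For theta2 > theta1 the likelihood ratio h/g is increasing in xbar,
# so "reject H0 when h/g >= k" is the same as "reject H0 when xbar >= c".
c = stats.norm.ppf(1 - alpha, loc=theta1, scale=1 / np.sqrt(n))   # size-alpha cutoff
power = stats.norm.sf(c, loc=theta2, scale=1 / np.sqrt(n))        # power of the UMP test
print(f"cutoff = {c:.3f}, power = {power:.3f}")

# Monte Carlo check of the size under H0 (a sketch, not a proof)
rng = np.random.default_rng(5)
xbars = rng.normal(theta1, 1.0, size=(100000, n)).mean(axis=1)
print((xbars >= c).mean())    # approximately alpha
```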
80
Proof: Let $C^*$ denote the critical region of the likelihood ratio test, $C^* = \{x : h(x) \ge k\,g(x)\}$, and let $C$ be the critical region of any other test of size $\alpha$, so that $\int_{C} g\,dx = \int_{C^*} g\,dx = \alpha$. Note: we want to show that $\int_{C^*} h\,dx \ge \int_{C} h\,dx$, i.e. that the likelihood ratio test is at least as powerful.
81
Since both tests have size $\alpha$, removing the common region $C \cap C^*$ from each gives $\int_{C^* \cap \bar{C}} g\,dx = \int_{C \cap \bar{C}^*} g\,dx$. On $C^* \cap \bar{C}$ we have $h \ge k g$, while on $C \cap \bar{C}^*$ (outside $C^*$) we have $h < k g$; hence $\int_{C^* \cap \bar{C}} h\,dx \ge k \int_{C^* \cap \bar{C}} g\,dx = k \int_{C \cap \bar{C}^*} g\,dx \ge \int_{C \cap \bar{C}^*} h\,dx$. Thus $\int_{C^*} h\,dx \ge \int_{C} h\,dx$ when we add the common quantity $\int_{C \cap C^*} h\,dx$ to both sides. Q.E.D.
85
Fisher's Linear Discriminant Function. Suppose that $x = (x_1, \ldots, x_p)'$ is data from a $p$-variate normal distribution with mean vector either $\mu_1$ (population $\pi_1$) or $\mu_2$ (population $\pi_2$). The covariance matrix $\Sigma$ is the same for both populations $\pi_1$ and $\pi_2$.
86
The Neyman-Pearson Lemma states that we should classify into populations $\pi_1$ and $\pi_2$ using the likelihood ratio $\dfrac{f_1(x)}{f_2(x)}$, where $f_i$ is the $p$-variate normal density with mean $\mu_i$ and covariance $\Sigma$. That is, make the decision $D_1$: the population is $\pi_1$, if $\dfrac{f_1(x)}{f_2(x)} \ge k$.
87
Taking logarithms, this is equivalent to $\left(x - \tfrac{1}{2}(\mu_1 + \mu_2)\right)'\Sigma^{-1}(\mu_1 - \mu_2) \ge \ln k$. Finally, we make the decision $D_1$: the population is $\pi_1$, if $a'x \ge K$, where $a = \Sigma^{-1}(\mu_1 - \mu_2)$ and $K = \ln k + \tfrac{1}{2}(\mu_1 + \mu_2)'\Sigma^{-1}(\mu_1 - \mu_2)$.
88
The function $\ell(x) = a'x = (\mu_1 - \mu_2)'\Sigma^{-1} x$ is called Fisher's linear discriminant function.
89
In the case where the population parameters are unknown but estimated from data, Fisher's linear discriminant function uses the sample estimates: $\hat{a} = S_{pooled}^{-1}(\bar{x}_1 - \bar{x}_2)$, giving $\hat{\ell}(x) = \hat{a}'x$.
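A hedged sketch of the sample version (the training data are simulated, and the cutoff uses $k = 1$, i.e. equal priors and costs):

```python
import numpy as np

rng = np.random.default_rng(6)
cov = [[1.0, 0.3], [0.3, 1.0]]
X1 = rng.multivariate_normal([2.0, 1.0], cov, size=30)   # training data from pi_1 (simulated)
X2 = rng.multivariate_normal([0.0, 0.0], cov, size=30)   # training data from pi_2 (simulated)

n1, n2 = len(X1), len(X2)
xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False)
            + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)

a = np.linalg.solve(S_pooled, xbar1 - xbar2)     # estimated discriminant coefficients
K = 0.5 * a @ (xbar1 + xbar2)                    # cutoff with k = 1 (ln k = 0)

def classify(x):
    """Assign a new case to pi_1 if the discriminant score a'x is at least K, else pi_2."""
    return 1 if a @ x >= K else 2

print(classify(np.array([1.8, 0.9])), classify(np.array([-0.4, 0.1])))
```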
93
Example 2. Annual financial data are collected for firms approximately 2 years prior to bankruptcy and for financially sound firms at about the same point in time. The data on the four variables $x_1$ = CF/TD = (cash flow)/(total debt), $x_2$ = NI/TA = (net income)/(total assets), $x_3$ = CA/CL = (current assets)/(current liabilities), and $x_4$ = CA/NS = (current assets)/(net sales) are given in the following table.
94
The data are given in the following table: