Inference for the mean vector


1 Inference for the mean vector

2 Univariate Inference Let x1, x2, … , xn denote a sample of n from the normal distribution with mean μ and variance σ². Suppose we want to test H0: μ = μ0 vs HA: μ ≠ μ0. The appropriate test is the t test, with test statistic t = √n (x̄ − μ0)/s, where x̄ is the sample mean and s the sample standard deviation. Reject H0 if |t| > tα/2, the two-sided critical value on n − 1 degrees of freedom.

3 The multivariate Test Let x1, x2, … , xn denote a sample of n observation vectors from the p-variate normal distribution with mean vector μ and covariance matrix Σ. Suppose we want to test H0: μ = μ0 vs HA: μ ≠ μ0.

4 Roy's Union-Intersection Principle
This is a general procedure for developing a multivariate test from the corresponding univariate test. Convert the multivariate problem to a univariate problem by considering an arbitrary linear combination u = a′x of the observation vector x.

5 Perform the test for the arbitrary linear combination u = a′x of the observation vector.
Repeat this for all possible choices of a. Reject the multivariate hypothesis if H0 is rejected for any one of the choices of a. Accept the multivariate hypothesis if H0 is accepted for all choices of a. Set the type I error rate for the individual tests so that the type I error rate for the multivariate test is α.

6 Application of Roy's principle to the following situation
Let x1, x2, … , xn denote a sample of n from the p-variate normal distribution with mean vector μ and covariance matrix Σ. Suppose we want to test H0: μ = μ0 vs HA: μ ≠ μ0. For a fixed vector a, let ui = a′xi, i = 1, …, n. Then u1, …, un is a sample of n from the normal distribution with mean a′μ and variance a′Σa.

7 To test H0: a′μ = a′μ0 vs HA: a′μ ≠ a′μ0 we would use the univariate test statistic t(a) = √n (ū − a′μ0)/su,

8 and ū = a′x̄ and su² = a′Sa, where x̄ = (1/n) Σi xi is the sample mean vector and S = [1/(n − 1)] Σi (xi − x̄)(xi − x̄)′ is the sample covariance matrix.

9 Thus t(a) = √n a′(x̄ − μ0)/√(a′Sa). We will reject if |t(a)| > tα/2.

10 Using Roy's Union-Intersection principle:
We will reject H0 if |t(a)| > tα/2 for at least one choice of a. We accept H0 if |t(a)| ≤ tα/2 for every choice of a.

11 i.e. we reject H0 if maxa t²(a) > t²α/2; we accept H0 if maxa t²(a) ≤ t²α/2.

12 Consider the problem of finding:
maxa t²(a) = maxa n[a′(x̄ − μ0)]² / (a′Sa), where the maximum is taken over all non-zero vectors a.

13 Thus, maximizing over a (the maximum is attained at a ∝ S⁻¹(x̄ − μ0)), maxa t²(a) = n(x̄ − μ0)′S⁻¹(x̄ − μ0).

14 Thus Roy's Union-Intersection principle states:
We reject H0 if T² = maxa t²(a) = n(x̄ − μ0)′S⁻¹(x̄ − μ0) exceeds a critical value chosen so that the overall type I error rate is α; we accept H0 otherwise. T² = n(x̄ − μ0)′S⁻¹(x̄ − μ0) is called Hotelling's T² statistic.
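A small numerical check of the union-intersection argument (the data below are randomly generated for illustration): t²(a) never exceeds T², and choosing a proportional to S⁻¹(x̄ − μ0) attains the maximum.

```python
# Roy's union-intersection idea: max over a of t^2(a) equals Hotelling's T^2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(15, 3)) + [0.5, 0.0, -0.3]     # 15 observations, p = 3 (made up)
mu0 = np.zeros(3)

n, p = X.shape
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                          # sample covariance (divisor n - 1)
d = xbar - mu0

def t2(a):
    """Squared univariate t statistic for the linear combination a'x."""
    return n * (a @ d) ** 2 / (a @ S @ a)

T2 = n * d @ np.linalg.solve(S, d)                   # Hotelling's T^2
a_star = np.linalg.solve(S, d)                       # maximizing direction
worst_random = max(t2(rng.normal(size=p)) for _ in range(1000))

print(T2, t2(a_star))                 # these two agree
print(worst_random <= T2 + 1e-9)      # every random a gives t^2(a) <= T^2
```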

15 Choosing the critical value for Hotelling's T² statistic
Since we reject H0 when T² is large, we need to find the sampling distribution of T² when H0 is true. It turns out that if H0 is true then [(n − p)/(p(n − 1))] T² has an F distribution with ν1 = p and ν2 = n − p.

16 Thus Hotelling's T² test
We reject H0 if T² > [p(n − 1)/(n − p)] Fα(p, n − p), or equivalently if [(n − p)/(p(n − 1))] T² > Fα(p, n − p).
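A sketch of the full one-sample test in Python; the helper name hotelling_t2_one_sample and the 10 × 4 data matrix are illustrative choices, not fixed by the slides.

```python
# One-sample Hotelling T^2 test with the F-distribution critical value.
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(X, mu0, alpha=0.05):
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                     # sample covariance matrix
    d = xbar - mu0
    T2 = n * d @ np.linalg.solve(S, d)              # Hotelling's T^2
    F = (n - p) / (p * (n - 1)) * T2                # ~ F(p, n - p) under H0
    p_value = stats.f.sf(F, p, n - p)
    crit = p * (n - 1) / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
    return T2, F, p_value, crit                     # reject H0 if T2 > crit

X = np.random.default_rng(1).normal(60, 5, size=(10, 4))   # made-up 10 x 4 scores
print(hotelling_t2_one_sample(X, mu0=np.full(4, 60.0)))
```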

17 Another derivation of Hotelling's T² statistic
Another method of developing statistical tests is the likelihood ratio method. Suppose that the data vector x has joint density f(x; θ). Suppose that the parameter vector θ belongs to the set Ω, and let ω denote a subset of Ω. Finally, we want to test H0: θ ∈ ω vs HA: θ ∉ ω.

18 The likelihood ratio test rejects H0 if λ = maxθ∈ω L(θ) / maxθ∈Ω L(θ) ≤ λα, where λα is chosen so that the test has type I error rate α.

19 The situation Let x1, x2, … , xn denote a sample of n from the p-variate normal distribution with mean vector μ and covariance matrix Σ. Suppose we want to test H0: μ = μ0 vs HA: μ ≠ μ0.

20 The likelihood function is:
L(μ, Σ) = (2π)^(−np/2) |Σ|^(−n/2) exp[−½ Σi (xi − μ)′Σ⁻¹(xi − μ)], and the log-likelihood function is: l(μ, Σ) = −(np/2) ln(2π) − (n/2) ln|Σ| − ½ Σi (xi − μ)′Σ⁻¹(xi − μ).

21 The maximum likelihood estimators of μ and Σ
are μ̂ = x̄ and Σ̂ = (1/n) Σi (xi − x̄)(xi − x̄)′.

22 The maximum likelihood estimators of μ and Σ
when H0 is true are: μ̂0 = μ0 and Σ̂0 = (1/n) Σi (xi − μ0)(xi − μ0)′.

23 The likelihood function evaluated at the unrestricted MLEs is:
L(μ̂, Σ̂) = (2π)^(−np/2) |Σ̂|^(−n/2) exp[−½ Σi (xi − x̄)′Σ̂⁻¹(xi − x̄)]. Now Σi (xi − x̄)′Σ̂⁻¹(xi − x̄) = tr[Σ̂⁻¹ Σi (xi − x̄)(xi − x̄)′] = tr[Σ̂⁻¹ (nΣ̂)] = np.

24 Thus L(μ̂, Σ̂) = (2π)^(−np/2) |Σ̂|^(−n/2) e^(−np/2). Similarly, when H0 is true, L(μ0, Σ̂0) = (2π)^(−np/2) |Σ̂0|^(−n/2) e^(−np/2).

25 and the likelihood ratio is λ = L(μ0, Σ̂0) / L(μ̂, Σ̂) = (|Σ̂| / |Σ̂0|)^(n/2).

26 Note: Let A = Σi (xi − x̄)(xi − x̄)′ = nΣ̂ = (n − 1)S.

27 and Σi (xi − μ0)(xi − μ0)′ = Σi (xi − x̄)(xi − x̄)′ + n(x̄ − μ0)(x̄ − μ0)′. Now nΣ̂0 = A + n(x̄ − μ0)(x̄ − μ0)′.

28 Also, for a non-singular matrix A and vector b, |A + bb′| = |A| (1 + b′A⁻¹b).

29 Thus |nΣ̂0| = |A + n(x̄ − μ0)(x̄ − μ0)′| = |A| [1 + n(x̄ − μ0)′A⁻¹(x̄ − μ0)].

30 Thus, using A = (n − 1)S, |Σ̂0|/|Σ̂| = 1 + n(x̄ − μ0)′[(n − 1)S]⁻¹(x̄ − μ0) = 1 + T²/(n − 1).

31 Then λ^(2/n) = |Σ̂|/|Σ̂0| = 1/[1 + T²/(n − 1)], a decreasing function of T². Thus to reject H0 if λ < λα is the same as to reject H0 when T² is large. This is the same as Hotelling's T² test if the critical value for T² is taken to be [p(n − 1)/(n − p)] Fα(p, n − p).

32 Example For n = 10 students we measure scores on
Math proficiency test (x1), Science proficiency test (x2), English proficiency test (x3), and French proficiency test (x4). The average score for each of the tests in previous years was 60. Has this changed?

33 The data

34 Summary Statistics
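The slides' data and summary tables are not reproduced in this transcript, so the sketch below uses invented placeholder scores; only the 10 × 4 shape and the hypothesized mean of 60 come from the example.

```python
# Summary statistics and the T^2 test for a 10 x 4 score matrix.
# The numbers below are invented placeholders, not the data from the slides.
import numpy as np
from scipy import stats

scores = np.array([                      # columns: Math, Science, English, French
    [65, 62, 58, 55], [70, 66, 61, 59], [58, 60, 63, 62], [72, 68, 64, 60],
    [61, 59, 57, 58], [66, 63, 60, 61], [59, 57, 62, 64], [68, 65, 59, 57],
    [63, 61, 60, 62], [67, 64, 58, 56],
])
n, p = scores.shape
xbar = scores.mean(axis=0)               # mean vector
S = np.cov(scores, rowvar=False)         # sample covariance matrix

d = xbar - 60.0                          # H0: every mean equals 60
T2 = n * d @ np.linalg.solve(S, d)
F = (n - p) / (p * (n - 1)) * T2
print(xbar.round(1), T2.round(2), stats.f.sf(F, p, n - p).round(4))
```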

35 Simultaneous Inference for means
Recall (using Roy's Union-Intersection Principle): H0 is rejected when T² = maxa t²(a) > K², where K² = [p(n − 1)/(n − p)] Fα(p, n − p).

36 Now, replacing μ0 by the true mean μ, P(n(x̄ − μ)′S⁻¹(x̄ − μ) ≤ K²) = 1 − α, and n(x̄ − μ)′S⁻¹(x̄ − μ) ≤ K² holds if and only if n[a′(x̄ − μ)]²/(a′Sa) ≤ K² for every vector a.

37 Thus, with probability 1 − α, |a′x̄ − a′μ| ≤ K √(a′Sa/n) simultaneously for all a, and the set of intervals a′x̄ ± K √(a′Sa/n) form a set of (1 − α)100% simultaneous confidence intervals for a′μ.

38 Recall that choosing a = ei (the i-th unit vector) gives a′μ = μi. Thus the set of (1 − α)100% simultaneous confidence intervals for the individual means μi is x̄i ± K √(sii/n), i = 1, …, p, where sii is the i-th diagonal element of S.
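A minimal sketch of these simultaneous intervals (randomly generated data; the radius uses the same K² = p(n − 1)/(n − p) Fα(p, n − p) as above):

```python
# T^2-based simultaneous confidence intervals for the component means.
import numpy as np
from scipy import stats

X = np.random.default_rng(2).normal(60, 5, size=(10, 4))    # made-up data
n, p = X.shape
alpha = 0.05

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)
K = np.sqrt(p * (n - 1) / (n - p) * stats.f.ppf(1 - alpha, p, n - p))

for i in range(p):
    half_width = K * np.sqrt(S[i, i] / n)     # a = e_i picks out the i-th mean
    print(f"mu_{i + 1}: {xbar[i] - half_width:.2f} to {xbar[i] + half_width:.2f}")
```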

39 The two sample problem

40 Univariate Inference Let x1, x2, … , xn denote a sample of n from the normal distribution with mean μx and variance σ². Let y1, y2, … , ym denote a sample of m from the normal distribution with mean μy and variance σ². Suppose we want to test H0: μx = μy vs HA: μx ≠ μy.

41 The appropriate test is the t test:
The test statistic: t = (x̄ − ȳ) / [sp √(1/n + 1/m)], where sp² = [(n − 1)sx² + (m − 1)sy²]/(n + m − 2) is the pooled variance. Reject H0 if |t| > tα/2 with d.f. = n + m − 2.
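A quick sketch of the pooled two-sample t test (made-up samples; scipy's ttest_ind pools the variances by default):

```python
# Pooled-variance two-sample t test, d.f. = n + m - 2.
import numpy as np
from scipy import stats

x = np.array([61.2, 58.9, 63.5, 60.1, 59.4, 62.8])        # made-up sample 1
y = np.array([55.7, 57.3, 54.9, 58.2, 56.6, 55.1, 57.8])  # made-up sample 2

t_stat, p_value = stats.ttest_ind(x, y)                    # equal-variance (pooled) test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```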

42 The multivariate Test Let x1, x2, … , xn denote a sample of n from the p-variate normal distribution with mean vector μx and covariance matrix Σ. Let y1, y2, … , ym denote a sample of m from the p-variate normal distribution with mean vector μy and covariance matrix Σ. Suppose we want to test H0: μx = μy vs HA: μx ≠ μy.

43 Hotelling's T² statistic for the two sample problem
T² = [nm/(n + m)] (x̄ − ȳ)′ Sp⁻¹ (x̄ − ȳ), where Sp = [(n − 1)Sx + (m − 1)Sy]/(n + m − 2) is the pooled sample covariance matrix. If H0 is true then [(n + m − p − 1)/(p(n + m − 2))] T² has an F distribution with ν1 = p and ν2 = n + m − p − 1.

44 Thus Hotelling's T² test
We reject H0 if T² > [p(n + m − 2)/(n + m − p − 1)] Fα(p, n + m − p − 1).
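A sketch of the two-sample T² test with a pooled covariance matrix (the helper name and the generated samples are illustrative):

```python
# Two-sample Hotelling T^2 test with pooled covariance.
import numpy as np
from scipy import stats

def hotelling_t2_two_sample(X, Y):
    n, p = X.shape
    m, _ = Y.shape
    xbar, ybar = X.mean(axis=0), Y.mean(axis=0)
    Sp = ((n - 1) * np.cov(X, rowvar=False) +
          (m - 1) * np.cov(Y, rowvar=False)) / (n + m - 2)    # pooled covariance
    d = xbar - ybar
    T2 = (n * m) / (n + m) * d @ np.linalg.solve(Sp, d)
    F = (n + m - p - 1) / (p * (n + m - 2)) * T2              # ~ F(p, n + m - p - 1)
    return T2, F, stats.f.sf(F, p, n + m - p - 1)

rng = np.random.default_rng(3)
X = rng.normal([60, 62, 58], 5, size=(12, 3))                 # made-up sample 1
Y = rng.normal([55, 60, 59], 5, size=(10, 3))                 # made-up sample 2
print(hotelling_t2_two_sample(X, Y))
```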

45 Simultaneous inference for the two-sample problem
The two-sample Hotelling's T² statistic can also be derived by Roy's Union-Intersection principle, so the same argument yields simultaneous confidence intervals.

46 Thus, with probability 1 − α, [nm/(n + m)] [x̄ − ȳ − (μx − μy)]′ Sp⁻¹ [x̄ − ȳ − (μx − μy)] ≤ K², where K² = [p(n + m − 2)/(n + m − p − 1)] Fα(p, n + m − p − 1).

47 Thus, for every vector a, [nm/(n + m)] [a′(x̄ − ȳ) − a′(μx − μy)]² / (a′Sp a) ≤ K².

48 Thus |a′(x̄ − ȳ) − a′(μx − μy)| ≤ K √(a′Sp a (1/n + 1/m)). Hence, simultaneously for all a,

49 the intervals a′(x̄ − ȳ) ± K √(a′Sp a (1/n + 1/m)) form 1 − α simultaneous confidence intervals for a′(μx − μy).

50 Hotelling's T² test: a graphical explanation

51 Hotelling's T² statistic for the two sample problem
T² = [nm/(n + m)] (x̄ − ȳ)′ Sp⁻¹ (x̄ − ȳ)

52 is the test statistic for testing H0: μx = μy vs HA: μx ≠ μy.

53 Hotelling's T² test (figure: samples from populations A and B plotted in the X1–X2 plane)

54 Univariate test for X1 (figure: the same two populations compared on the X1 axis alone)

55 Univariate test for X2 (figure: the same two populations compared on the X2 axis alone)

56 Univariate test for a1X1 + a2X2
(figure: the same two populations compared along the linear combination a1X1 + a2X2)

57 Mahalanobis distance: a graphical explanation

58 Euclidean distance: d(x̄, ȳ) = √[(x̄ − ȳ)′(x̄ − ȳ)]

59 Mahalanobis distance: dS(x̄, ȳ) = √[(x̄ − ȳ)′S⁻¹(x̄ − ȳ)], where S is a covariance matrix
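A short sketch contrasting the two distances (the mean vectors and covariance matrix below are made-up numbers):

```python
# Euclidean vs. Mahalanobis distance between two mean vectors.
import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

xbar_a = np.array([2.0, 1.0])
xbar_b = np.array([4.0, 3.0])
S = np.array([[1.0, 0.9],              # strong positive correlation
              [0.9, 1.0]])

d_euclid = euclidean(xbar_a, xbar_b)
d_mahal = mahalanobis(xbar_a, xbar_b, np.linalg.inv(S))   # takes the inverse covariance

print(f"Euclidean:   {d_euclid:.3f}")
print(f"Mahalanobis: {d_mahal:.3f}")   # the same Euclidean gap changes once S is accounted for
```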

60 Hotelling's T² statistic for the two sample problem: T² = [nm/(n + m)] (x̄ − ȳ)′ Sp⁻¹ (x̄ − ȳ) = [nm/(n + m)] dSp²(x̄, ȳ), i.e. a multiple of the squared Mahalanobis distance between the two sample mean vectors.

61 Case I (figure: populations A and B in the X1–X2 plane)

62 Case II (figure: populations A and B in the X1–X2 plane)

63 In Case I the Mahalanobis distance between the mean vectors is larger than in Case II, even though the Euclidean distance is smaller. In Case I there is more separation between the two bivariate normal distributions relative to their spread.

