Sessions 09 & 10: Hypothesis Testing
Course: A0392 - Statistik Ekonomi (Economic Statistics)
Year: 2010
Outline
Hypothesis tests for a mean and a proportion
Hypothesis tests for the difference between two means
Hypothesis tests for the difference between two proportions
Introduction
When the sample size is small, the estimation and testing procedures of large samples are not appropriate. There are equivalent small-sample estimation and test procedures for:
μ, the mean of a normal population
μ1 − μ2, the difference between two population means
σ², the variance of a normal population
the ratio of two population variances.
The Sampling Distribution of the Sample Mean
When we take a sample from a normal population, the sample mean x̄ has a normal distribution for any sample size n, and z = (x̄ − μ) / (σ/√n) has a standard normal distribution. But if σ is unknown and we must use s to estimate it, the resulting statistic t = (x̄ − μ) / (s/√n) is not normal.
Student's t Distribution
Fortunately, this statistic does have a sampling distribution that is well known to statisticians, called Student's t distribution with n − 1 degrees of freedom. We can use this distribution to create estimation and testing procedures for the population mean μ.
Properties of Student's t
Mound-shaped and symmetric about 0.
More variable than z, with "heavier tails."
Its shape depends on the sample size n, or equivalently on the degrees of freedom, n − 1. As n increases, the variability of t decreases because the estimate s is based on more and more information, and the shapes of the t and z distributions become almost identical.
Using the t-Table
Table 4 gives the values of t that cut off certain areas in the tail of the t distribution. Index the degrees of freedom df and the appropriate tail area a to find t_a, the value of t with area a to its right.
Example: for a random sample of size n = 10, find the value of t that cuts off .025 in the right tail. Row: df = n − 1 = 9; column subscript: a = .025; t.025 = 2.262.
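If you want to check the table lookup in software, here is a minimal sketch assuming Python with SciPy installed (the language and library are not part of the original slides):

```python
# Reproduce the Table 4 lookup: the t value with area .025 to its right, df = 9.
from scipy import stats

t_crit = stats.t.ppf(1 - 0.025, df=9)  # inverse CDF at cumulative probability .975
print(round(t_crit, 3))                # 2.262
```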
Small-Sample Inference for a Population Mean
The basic procedures are the same as those used for large samples. For a test of the hypothesis H0: μ = μ0, use the test statistic t = (x̄ − μ0) / (s/√n), which has a Student's t distribution with n − 1 degrees of freedom when H0 is true.
Example
A sprinkler system is designed so that the average time for the sprinklers to activate after being turned on is no more than 15 seconds. A test of six systems gave the following times: 17, 31, 12, 17, 13, 25. Is the system working as specified? Test using α = .05.
Example (continued)
Data: 17, 31, 12, 17, 13, 25. First, calculate the sample mean and standard deviation, using your calculator or the formulas in Chapter 2: x̄ = 19.17 and s = 7.39.
Example (continued)
Data: 17, 31, 12, 17, 13, 25. Calculate the test statistic and find the rejection region for α = .05: t = (x̄ − 15) / (s/√n) = (19.17 − 15) / (7.39/√6) ≈ 1.38. Rejection region: reject H0 if t > 2.015 (the value of t.05 with 5 degrees of freedom). If the test statistic falls in the rejection region, its p-value will be less than α = .05.
Conclusion
Data: 17, 31, 12, 17, 13, 25. Compare the observed test statistic to the rejection region and draw conclusions. For our example, t = 1.38 does not fall in the rejection region, so H0 is not rejected. There is insufficient evidence to indicate that the average activation time is greater than 15 seconds.
Approximating the p-value
Using Table 4, you can only approximate the p-value for the test. Since the observed value t = 1.38 is smaller than t.10 = 1.476 (with 5 degrees of freedom), the p-value is greater than .10.
The Exact p-value
You can get the exact p-value using some calculators or a computer. Typical one-sample t output:

One-Sample T: Times
Test of mu = 15 vs mu > 15
Variable   N   Mean    StDev   SE Mean
Times      6   19.17   7.39    3.02
Variable   95.0% Lower Bound   T      P
Times      13.09               1.38   0.113

The p-value is .113, which is greater than .10, as we approximated using Table 4.
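The same result can be reproduced in software. A minimal sketch, assuming Python with SciPy 1.6 or later (the data are from the slides; the library calls are not):

```python
# One-sample t test of H0: mu = 15 vs Ha: mu > 15 for the sprinkler times.
from scipy import stats

times = [17, 31, 12, 17, 13, 25]
t_stat, p_value = stats.ttest_1samp(times, popmean=15, alternative='greater')
print(round(t_stat, 2), round(p_value, 3))  # 1.38 0.113, matching the output above
```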
Testing the Difference between Two Means
To test H0: μ1 − μ2 = D0 versus Ha: one of three alternatives (μ1 − μ2 ≠ D0, μ1 − μ2 > D0, or μ1 − μ2 < D0), where D0 is some hypothesized difference, usually 0.
Testing the Difference between Two Means (continued)
The test statistic used in Chapter 9 does not have either a z or a t distribution, and cannot be used for small-sample inference. We need to make one more assumption: that the population variances, although unknown, are equal.
Testing the Difference between Two Means (continued)
Instead of estimating each population variance separately, we estimate the common variance with the pooled estimator
s² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2).
The resulting test statistic,
t = [(x̄1 − x̄2) − D0] / √[s²(1/n1 + 1/n2)],
has a t distribution with n1 + n2 − 2 = (n1 − 1) + (n2 − 1) degrees of freedom.
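As an illustration of these formulas only (not part of the original slides), here is a short plain-Python sketch; the helper name pooled_t is hypothetical:

```python
# Pooled-variance two-sample t statistic computed from summary statistics.
from math import sqrt

def pooled_t(xbar1, s1, n1, xbar2, s2, n2, d0=0.0):
    """Two-sample t statistic assuming equal population variances."""
    # Pooled estimate of the common variance, with n1 + n2 - 2 degrees of freedom
    s2_pooled = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    t = (xbar1 - xbar2 - d0) / sqrt(s2_pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```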
Example
Two training procedures are compared by measuring the time that it takes trainees to assemble a device. A different group of trainees is taught using each method. Is there a difference between the two methods? Use α = .01.

Time to Assemble
                 Method 1   Method 2
Sample size      10         12
Sample mean      35         31
Sample std dev   4.9        4.5
Example (continued)
Solve this problem by approximating the p-value using Table 4, working from the summary statistics in the table above.
Example (continued)
df = n1 + n2 − 2 = 10 + 12 − 2 = 20. From Table 4 with 20 degrees of freedom, .025 < ½(p-value) < .05, so .05 < p-value < .10. Since the p-value is greater than α = .01, H0 is not rejected. There is insufficient evidence to indicate a difference in the population means.
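A minimal sketch of the same calculation, assuming Python with SciPy; ttest_ind_from_stats works directly from the summary statistics in the table:

```python
# Pooled-variance two-sample t test from summary statistics (Method 1 vs Method 2).
from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=35, std1=4.9, nobs1=10,
    mean2=31, std2=4.5, nobs2=12,
    equal_var=True)                         # equal-variance (pooled) test, df = 20
print(round(t_stat, 2), round(p_value, 2))  # about 1.99 and 0.06, so .05 < p-value < .10
```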
Testing the Difference between Two Means (unequal variances)
If the population variances cannot be assumed equal, the test statistic
t = [(x̄1 − x̄2) − D0] / √(s1²/n1 + s2²/n2)
has an approximate t distribution, with degrees of freedom given by Satterthwaite's approximation:
df ≈ (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ].
This is most easily done by computer.
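For comparison, a sketch of the unequal-variance version under the same assumptions (Python with SciPy); setting equal_var=False applies the Satterthwaite approximation:

```python
# Welch's (unequal-variance) two-sample t test from the same summary statistics.
from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=35, std1=4.9, nobs1=10,
    mean2=31, std2=4.5, nobs2=12,
    equal_var=False)        # approximate df from Satterthwaite's formula
print(round(t_stat, 2))     # about 1.98; the conclusion at alpha = .01 is unchanged
```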
The Paired-Difference Test
Sometimes the assumption of independent samples is intentionally violated, resulting in a matched-pairs or paired-difference test. By designing the experiment in this way, we can eliminate unwanted variability by analyzing only the differences
di = x1i − x2i
to see if there is a difference in the two population means, μ1 − μ2.
Example
One Type A and one Type B tire are randomly assigned to each of the rear wheels of five cars. Compare the average tire wear for Types A and B using a test of hypothesis.

Car      1     2     3     4     5
Type A   10.6  9.8   12.3  9.7   8.8
Type B   10.2  9.4   11.8  9.1   8.3

But the samples are not independent: the pairs of responses are linked because measurements are taken on the same car.
The Paired-Difference Test (continued)
For the n = 5 differences di, compute the sample mean d̄ and standard deviation sd, and use the test statistic
t = (d̄ − 0) / (sd/√n),
which has a t distribution with n − 1 degrees of freedom when H0: μ1 − μ2 = 0 is true.
Example (continued)

Car         1     2     3     4     5
Type A      10.6  9.8   12.3  9.7   8.8
Type B      10.2  9.4   11.8  9.1   8.3
Difference  .4    .4    .5    .6    .5

Here d̄ = .48 and sd ≈ .084, so t = .48 / (.084/√5) ≈ 12.8.
Example (continued)
Rejection region: reject H0 if t > 2.776 or t < −2.776 (t.025 with 4 degrees of freedom). Conclusion: since t = 12.8 falls in the rejection region, H0 is rejected. There is a difference in the average tire wear for the two types of tires.
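A minimal sketch of the paired-difference test in software, again assuming Python with SciPy; ttest_rel analyzes the paired measurements directly:

```python
# Paired t test for the tire-wear data; equivalent to a one-sample t test on the differences.
from scipy import stats

type_a = [10.6, 9.8, 12.3, 9.7, 8.8]
type_b = [10.2, 9.4, 11.8, 9.1, 8.3]
t_stat, p_value = stats.ttest_rel(type_a, type_b)
print(round(t_stat, 1), p_value)  # t = 12.8 with a p-value far below .05
```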
Key Concepts
The test statistic is a value computed from the sample data, and it is used in making the decision about whether to reject the null hypothesis. Test statistic for proportions:
z = (p̂ − p) / √(pq/n)
29
Solution: The preceding example showed that the given claim results in the following null and alternative hypotheses: H 0 : p = 0.5 and H 1 : p > 0.5. Because we work under the assumption that the null hypothesis is true with p = 0.5, we get the following test statistic: n pq z = p – p = 0.56 - 0.5 (0.5)(0.5) 880 = 3.56
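A minimal sketch of this calculation in plain Python (the variable names are mine; the numbers are those in the solution above):

```python
# One-proportion z statistic: sample proportion 0.56, n = 880, null value p = 0.5.
from math import sqrt

p_hat, p0, n = 0.56, 0.5, 880
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
print(round(z, 2))  # 3.56
```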
Inferences about Two Proportions
Assumptions:
1. We have proportions from two independent simple random samples.
2. For both samples, the conditions np ≥ 5 and nq ≥ 5 are satisfied.
Notation for Two Proportions
For population 1, we let:
p1 = population proportion
n1 = size of the sample
x1 = number of successes in the sample
p̂1 = x1/n1 (the sample proportion)
q̂1 = 1 − p̂1
The corresponding meanings are attached to p2, n2, x2, p̂2, and q̂2, which come from population 2.
Pooled Estimate of p1 and p2
The pooled estimate of p1 and p2 is denoted by p̄:
p̄ = (x1 + x2) / (n1 + n2),  q̄ = 1 − p̄
Test Statistic for Two Proportions
For H0: p1 = p2 (or H0: p1 ≤ p2, or H0: p1 ≥ p2) versus H1: p1 ≠ p2 (or H1: p1 > p2, or H1: p1 < p2):
z = [(p̂1 − p̂2) − (p1 − p2)] / √(p̄q̄/n1 + p̄q̄/n2)
Test Statistic for Two Proportions (continued)
where p1 − p2 = 0 (assumed in the null hypothesis),
p̂1 = x1/n1,  p̂2 = x2/n2,
p̄ = (x1 + x2)/(n1 + n2), and q̄ = 1 − p̄.
Example
For the sample data listed in Table 8-1, use a 0.05 significance level to test the claim that the proportion of black drivers stopped by the police is greater than the proportion of white drivers who are stopped.
From Table 8-1: n1 = 200, x1 = 24, p̂1 = x1/n1 = 24/200 = 0.120, and n2 = 1400, x2 = 147, p̂2 = x2/n2 = 147/1400 = 0.105.
The hypotheses are H0: p1 = p2 and H1: p1 > p2.
The pooled estimates are p̄ = (x1 + x2)/(n1 + n2) = (24 + 147)/(200 + 1400) = 0.106875 and q̄ = 1 − 0.106875 = 0.893125.
The test statistic is
z = [(p̂1 − p̂2) − 0] / √(p̄q̄/n1 + p̄q̄/n2) = (0.120 − 0.105) / √[(0.106875)(0.893125)/200 + (0.106875)(0.893125)/1400] = 0.64
This is a right-tailed test, so the P-value is the area to the right of the test statistic z = 0.64, which is 0.2611. Because the P-value of 0.2611 is greater than the significance level α = 0.05, we fail to reject the null hypothesis.
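A minimal sketch of the whole two-proportion calculation in plain Python (standard library only; the variable names are mine):

```python
# Two-proportion z test with a pooled standard error, right-tailed.
from math import sqrt
from statistics import NormalDist

x1, n1, x2, n2 = 24, 200, 147, 1400
p1_hat, p2_hat = x1 / n1, x2 / n2
p_bar = (x1 + x2) / (n1 + n2)          # pooled proportion
q_bar = 1 - p_bar
z = (p1_hat - p2_hat) / sqrt(p_bar * q_bar / n1 + p_bar * q_bar / n2)
p_value = 1 - NormalDist().cdf(z)      # area to the right of z
print(round(z, 2), round(p_value, 2))  # 0.64 and about 0.26 (0.2611 from the z table)
```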
Because we fail to reject the null hypothesis, we conclude that there is not sufficient evidence to support the claim that the proportion of black drivers stopped by police is greater than that for white drivers. This does not mean that racial profiling has been disproved; the evidence might be strong enough with more data.
HAPPY STUDYING, WISHING YOU EVERY SUCCESS