Chapter 9 Hypothesis Testing Understandable Statistics Ninth Edition


1 Chapter 9 Hypothesis Testing Understandable Statistics Ninth Edition
By Brase and Brase Prepared by Yixun Shi Bloomsburg University of Pennsylvania

2 Methods for Drawing Inference
We can draw inferences about a population parameter in two ways: estimation (Chapter 8) and hypothesis testing (Chapter 9).

3 Hypothesis Testing In essence, hypothesis testing is the process of making decisions about the value of a population parameter.

4 Establishing the Hypotheses
Null Hypothesis: a hypothesis about the parameter in question that often denotes a theoretical value, a historical value, or a production specification. Denoted by H0. Alternate Hypothesis: a hypothesis that differs from the null hypothesis, such that if we reject the null hypothesis, we accept the alternate hypothesis. Denoted by H1 (in other sources, HA).

5 Hypotheses Restated

6 Types of Tests The null hypothesis is always a statement of equality:
H0: μ = k, where k is a specified value. The alternate hypothesis states that the parameter (μ or p) is less than, greater than, or not equal to the specified value.

7 Types of Tests Left-Tailed Tests: H1: μ < k or H1: p < k
Right-Tailed Tests: H1: μ > k or H1: p > k
Two-Tailed Tests: H1: μ ≠ k or H1: p ≠ k

8 Hypothesis Testing Procedure
Select appropriate hypotheses. Draw a random sample. Calculate the test statistic. Assess the compatibility of the test statistic with H0. Make a conclusion in the context of the problem.

9 Hypothesis Test of μ: x is Normal, σ is Known

10 P-Value The P-value is the probability, computed assuming H0 is true, of obtaining sample results at least as extreme as those observed; it is sometimes informally called the probability of chance.
A low P-value is a good indication that the test results are not due to chance alone.

11 P-Value for Left-Tailed Test
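For a left-tailed test with observed standard normal statistic z, the P-value is the area under the standard normal curve to the left of z:

P\text{-value} = P(Z \le z)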

12 P-Value for Right-Tailed Test
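For a right-tailed test, the P-value is the area to the right of the observed z:

P\text{-value} = P(Z \ge z)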

13 P-Value for Two-Tailed Test
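For a two-tailed test, the P-value is the total area in both tails beyond |z|:

P\text{-value} = 2\,P(Z \ge |z|)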

14 Types of Errors in Statistical Testing
Since we are making decisions with incomplete information (sample data), we can reach the wrong conclusion! Type I Error: rejecting the null hypothesis when the null hypothesis is true. Type II Error: accepting the null hypothesis when the null hypothesis is false.

15 Errors in Statistical Testing
Unfortunately, we usually will not know when we have made an error!! We can only talk about the probability of making an error. Decreasing the probability of making a type I error will increase the probability of making a type II error (and vice versa). We can only decrease the probability of both types of errors by increasing the sample size (obtain more information), but this may not be feasible in practice.

16 Type I and Type II Errors

17 Level of Significance Good practice is to specify in advance the level of type I error we are willing to risk. The probability of type I error is the level of significance for the test, denoted by α (alpha).

18 Type II Error The probability of making a type II error is denoted by β (Beta). 1 – β is called the power of the test. 1 – β is the probability of rejecting H0 when H0 is false (a correct decision).
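In symbols, a standard restatement of these definitions:

\alpha = P(\text{type I error}) = P(\text{reject } H_0 \mid H_0 \text{ true}), \qquad \beta = P(\text{type II error}) = P(\text{fail to reject } H_0 \mid H_0 \text{ false}), \qquad \text{power} = 1 - \beta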

19 The Probabilities Associated with Testing

20 Concluding a Statistical Test
For our purposes, significant is defined as follows: At our predetermined α level of risk, the evidence against H0 is sufficient to discredit H0. Thus we adopt H1.

21 Statistical Testing Comments
In most statistical applications, α = 0.05 or α = 0.01 is used. When we “accept” the null hypothesis, we are not proving the null hypothesis to be true; we are only saying that the sample evidence is not strong enough to justify rejecting it. Some statisticians prefer to say “fail to reject the null” rather than “accept the null.”

22 Interpretation of Testing Terms

23 Testing µ When σ is Known
State the null hypothesis, alternate hypothesis, and level of significance. If x is normally distributed, any sample size will suffice. If not, n ≥ 30 is required. Calculate:
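In the usual form for a known σ, the statistic to calculate is

z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}}

where μ is the value stated in H0 and n is the sample size.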

24 Testing µ When σ is Known
Use the standard normal table and the type of test (one or two-tailed) to determine the P-value. Make a statistical conclusion: If P-value ≤ α, reject H0. If P-value > α, do not reject H0. Make a context-specific conclusion.
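A minimal sketch of this procedure in Python; the hypotheses, data summary, and α below are hypothetical and purely illustrative:

```python
from math import sqrt
from scipy import stats

# Hypothetical right-tailed test: H0: mu = 50 vs H1: mu > 50, sigma known.
x_bar, mu0, sigma, n, alpha = 52.3, 50.0, 6.0, 36, 0.05

z = (x_bar - mu0) / (sigma / sqrt(n))   # standard normal test statistic
p_value = stats.norm.sf(z)              # right-tail area; use cdf(z) for a left tail,
                                        # or 2 * stats.norm.sf(abs(z)) for two tails
print(f"z = {z:.2f}, P-value = {p_value:.4f}")
print("Reject H0" if p_value <= alpha else "Do not reject H0")
```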

25 Testing µ When σ is Unknown
State the null hypothesis, alternate hypothesis, and level of significance. If x is normally distributed (or mound-shaped), any sample size will suffice. If not, n ≥ 30 is required. Calculate:
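In the usual form when σ is unknown, the statistic to calculate is

t = \frac{\bar{x} - \mu}{s / \sqrt{n}}, \qquad df = n - 1

where s is the sample standard deviation.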

26 Testing µ When σ is Unknown
Use the Student’s t table and the type of test (one or two-tailed) to determine (or estimate) the P-value. Make a statistical conclusion: If P-value ≤ α, reject H0. If P-value > α, do not reject H0. Make a context-specific conclusion.

27 Using Table 6 to Estimate P-Values
Suppose we calculate t = 2.22 for a one-tailed test from a sample of size 6. Thus, df = n – 1 = 5. We obtain: 0.025 < P-value < 0.050.
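Software can also give the tail area directly; a quick check of the bracketed estimate, assuming SciPy is available:

```python
from scipy import stats

# One-tailed P-value for t = 2.22 with df = 5.
p_value = stats.t.sf(2.22, df=5)
print(round(p_value, 4))   # falls between 0.025 and 0.050, as Table 6 indicates
```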

28 Testing µ Using the Critical Value Method
The values of the test statistic that will result in rejection of the null hypothesis are called the critical region of the distribution. When we use a predetermined significance level α, the critical value method and the P-value method are logically equivalent.

29 Critical Regions for H0: µ = k

30 Critical Regions for H0: µ = k

31 Critical Regions for H0: µ = k
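Stated in terms of the standard normal statistic, the usual rejection criteria are: for a left-tailed test, reject H0 if z \le -z_{\alpha}; for a right-tailed test, reject H0 if z \ge z_{\alpha}; for a two-tailed test, reject H0 if |z| \ge z_{\alpha/2}, where z_{\alpha} denotes the value cutting off an upper-tail area of α under the standard normal curve.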

32 Testing µ When σ is Known (Critical Region Method)
State the null hypothesis, alternate hypothesis, and level of significance. If x is normally distributed, any sample size will suffice. If not, n ≥ 30 is required. Calculate the sample test statistic z, just as in the P-value method.

33 Testing µ When σ is Known (Critical Region Method)
Show the critical region and critical value(s) on a graph (determined by the alternate hypothesis and α). Conclude in favor of the alternate hypothesis if z is in the critical region. State a conclusion within the context of the problem.

34 Left-Tailed Tests

35 Right-Tailed Tests

36 Two-Tailed Tests

37 Testing a Proportion p Test assumptions: r is a binomial variable;
n is the number of independent trials; p is the probability of success on each trial; np > 5 and n(1 − p) > 5.

38 Types of Proportion Tests

39 The Distribution of the Sample Proportion

40 Converting the Sample Proportion to z

41 Testing p State the null hypothesis, alternate hypothesis, and level of significance. Check np > 5 and nq > 5 (recall q = 1 – p). Compute the sample test statistic, where p is the value specified in H0:
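In the usual form, with \hat{p} = r/n the sample proportion,

z = \frac{\hat{p} - p}{\sqrt{pq/n}}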

42 Testing p Use the standard normal table and the type of test (one or two-tailed) to determine the P-value. Make a statistical conclusion: If P-value ≤ α, reject H0. If P-value > α, do not reject H0. Make a context-specific conclusion.

43 Using the Critical Value Method for p
As when testing for means, we can use the critical value method when testing for p. Use the critical value graphs exactly as when testing µ.

44 Critical Thinking: Issues Related to Hypothesis Testing
Central question: is the value of the sample test statistic too far away from the value of the population parameter proposed in H0 to have occurred by chance alone? The P-value gives the probability of such a result occurring by chance alone when H0 is true.

45 Critical Thinking: Issues Related to Hypothesis Testing
If the P-value is very close to α, then we might attempt to clarify the results by increasing the sample size or by controlling the experiment to reduce the standard deviation. How reliable are the study and the measurements in the sample? Consider the source of the data and the reliability of the organization doing the study.

46 Tests Involving Paired Differences
Data pairs occur naturally in many settings, such as before-and-after measurements on the same subject following a treatment. Be sure to have a definite and uniform method for creating pairs of data points.

47 Advantages to Using Paired Data
Reduces the danger of extraneous or uncontrollable variables. Theoretically reduces measurement variability. Increases the accuracy of statistical conclusions.

48 Testing for Differences
We take the difference between each pair of data points, denoted by d. We then test the average difference against the Student’s t distribution.

49 Hypotheses for Differences
H0: µd = 0
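Depending on the question asked, the alternate hypothesis is H1: µd < 0, H1: µd > 0, or H1: µd ≠ 0 (left-tailed, right-tailed, or two-tailed, respectively).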

50 Sample Test Statistic for Differences
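In the usual form, the sample test statistic is

t = \frac{\bar{d} - \mu_d}{s_d / \sqrt{n}}, \qquad df = n - 1,

with \bar{d} and s_d the mean and standard deviation of the n sample differences.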

51 Finding the P-Value Just as in the test for µ when σ is unknown, use Table 6 to estimate the P-Value of the test.

52 Testing d State the null hypothesis, alternate hypothesis, and level of significance. If you can assume d is normal (mound-shaped), any sample size will do. If not, make sure n ≥ 30. Calculate: df = n-1

53 Testing d Use the Student’s t table and the type of test (one or two-tailed) to determine (or estimate) the P-value. Make a statistical conclusion: If P-value ≤ α, reject H0. If P-value > α, do not reject H0. Make a context-specific conclusion.

54 Testing the Differences Between Independent Samples
Many practical applications involve testing the difference between population means or population proportions.

55 Testing µ1 - µ2 when σ1, σ2 are Known
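For independent samples with known standard deviations, the usual test statistic is

z = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}

where H0: µ1 = µ2 makes µ1 − µ2 = 0 in the numerator.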

56 Hypotheses for Testing µ1 - µ2 when σ1, σ2 are Known

57 Testing µ1 - µ2 when σ1, σ2 are Known

58 Testing µ1 - µ2 when σ1, σ2 are Known

59 Testing µ1 - µ2 when σ1, σ2 are Known

60 Testing µ1 - µ2 when σ1, σ2 are Unknown
Just as in the one-sample test for the mean, we resort to the Student’s t distribution and proceed in a similar fashion. Remark: in practice, the population standard deviations will be unknown in most cases.
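The test statistic then takes the same form with the sample standard deviations in place of σ1 and σ2 (a standard formulation; conventions for the degrees of freedom vary, using either Satterthwaite’s approximation or, more conservatively, the smaller of n1 − 1 and n2 − 1):

t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}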

61 Testing µ1 - µ2 when σ1, σ2 are Unknown

62 Testing µ1 - µ2 when σ1, σ2 are Unknown

63 Testing µ1 - µ2 when σ1, σ2 are Unknown
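A minimal sketch in Python on hypothetical data; SciPy’s ttest_ind with equal_var=False uses the Satterthwaite/Welch degrees of freedom, which is what most software reports:

```python
from scipy import stats

# Hypothetical independent samples (illustrative values only).
sample1 = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5, 14.4]
sample2 = [13.1, 12.8, 14.0, 13.5, 12.9, 13.7, 13.3, 14.1]

# Two-sample t test without assuming equal population variances.
t_stat, p_value = stats.ttest_ind(sample1, sample2, equal_var=False)
print(f"t = {t_stat:.3f}, two-tailed P-value = {p_value:.4f}")
```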

64 Deciding Which Test to Use for H0: µ1 - µ2

65 Testing for a Difference in Proportions, p1 – p2
Suppose we have two independent binomial experiments. We would like to test if the two population proportions are equal.

66 Testing for a Difference in Proportions

67 Testing for a Difference in Proportions

68 Testing for a Difference in Proportions
The test statistic is as follows:
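In the usual pooled form, with \hat{p}_1 = r_1/n_1, \hat{p}_2 = r_2/n_2, pooled estimate \bar{p} = (r_1 + r_2)/(n_1 + n_2), and \bar{q} = 1 - \bar{p}:

z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\bar{p}\,\bar{q}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}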

69 The Test Procedure

70 The Test Procedure

71 The Test Procedure

72 The Test Procedure
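A minimal sketch of the two-proportion test in Python; the counts below are hypothetical and purely illustrative:

```python
from math import sqrt
from scipy import stats

# Hypothetical counts: r successes out of n trials in each group.
r1, n1 = 45, 100
r2, n2 = 33, 100

p1_hat, p2_hat = r1 / n1, r2 / n2
p_bar = (r1 + r2) / (n1 + n2)          # pooled proportion under H0: p1 = p2
q_bar = 1 - p_bar

z = (p1_hat - p2_hat) / sqrt(p_bar * q_bar * (1 / n1 + 1 / n2))
p_value = 2 * stats.norm.sf(abs(z))    # two-tailed P-value
print(f"z = {z:.2f}, P-value = {p_value:.4f}")
```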

73 Critical Regions For Tests of Differences
Recall, our emphasis is on the P-Value method. Most scientific studies use this technique. Also, for a fixed α-level test, the methods are equivalent and lead to identical results.

