Inference for Distributions 7.2 Comparing Two Means © 2012 W.H. Freeman and Company

Objectives (7.2 Comparing two means)
• Two-sample z statistic
• Two-sample t procedures
• Two-sample t significance test
• Two-sample t confidence interval
• Robustness
• Details of the two-sample t procedures

Comparing two samples
We often compare two treatments used on independent samples. Which is it: does the difference between the two treatments reflect a true difference in population means (A), or is it due only to variation from the random sampling (B)?
Independent samples: subjects in one sample are completely unrelated to subjects in the other sample.
[Diagram: (A) two distinct populations, each with its own sample; (B) a single population from which both samples are drawn.]

Two-sample example: Sulfate concentration (in ppm) at upstream and downstream wells (two groups, Upstream and Downstream; data values not reproduced here).
• What question about sulfate concentration is to be answered?
• Are these two samples independent or dependent (paired)?
• What is the null hypothesis?
• Could we construct a confidence interval? Of what?
• Note the structure of the dataset needed for JMP to use two-sample methods (see the sketch below).
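The note about dataset structure refers to the "stacked" (long) layout that two-sample procedures in JMP expect: one column of measurements plus one column identifying the group. A minimal pandas sketch of that layout; the sulfate values shown are hypothetical placeholders, not the slide's data.

```python
import pandas as pd

# Long ("stacked") layout: one row per well, one column for the measurement
# and one column identifying the group -- the layout two-sample methods expect.
sulfate = pd.DataFrame({
    "location": ["Upstream"] * 3 + ["Downstream"] * 3,
    "sulfate_ppm": [38.2, 41.0, 39.5, 55.1, 60.3, 57.8],  # hypothetical placeholder values
})

# Summary statistics by group
print(sulfate.groupby("location")["sulfate_ppm"].agg(["mean", "std", "count"]))
```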

Two-sample z statistic
We have two independent SRSs (simple random samples), possibly coming from two distinct populations with (µ₁, σ₁) and (µ₂, σ₂). We use x̄₁ and x̄₂ to estimate the unknown µ₁ and µ₂.
When both populations are normal, the sampling distribution of (x̄₁ − x̄₂) is also normal, with standard deviation
√(σ₁²/n₁ + σ₂²/n₂).
Then the two-sample z statistic
z = ((x̄₁ − x̄₂) − (µ₁ − µ₂)) / √(σ₁²/n₁ + σ₂²/n₂)
has the standard normal N(0, 1) sampling distribution.
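As a small illustration (not from the slides), here is a sketch that computes the two-sample z statistic when both population standard deviations are treated as known; all numbers are hypothetical.

```python
from math import sqrt
from scipy import stats

# Hypothetical summary values; sigma1 and sigma2 are assumed-known population SDs.
xbar1, sigma1, n1 = 52.0, 8.0, 40
xbar2, sigma2, n2 = 48.5, 7.0, 45

sd_diff = sqrt(sigma1**2 / n1 + sigma2**2 / n2)   # SD of (xbar1 - xbar2)
z = (xbar1 - xbar2) / sd_diff                     # test statistic under H0: mu1 = mu2
p_two_sided = 2 * stats.norm.sf(abs(z))           # reference distribution N(0, 1)
print(z, p_two_sided)
```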

Two independent samples t distribution
We have two independent SRSs (simple random samples), possibly coming from two distinct populations with (µ₁, σ₁) and (µ₂, σ₂) unknown. We use (x̄₁, s₁) and (x̄₂, s₂) to estimate (µ₁, σ₁) and (µ₂, σ₂), respectively.
To compare the means, both populations should be normally distributed. However, in practice, it is enough that the two distributions have similar shapes and that the sample data contain no strong outliers.

df 1-21-2 The two-sample t statistic follows approximately the t distribution with a standard error SE (spread) reflecting variation from both samples: Conservatively, the degrees of freedom is equal to the smallest of (n 1 − 1, n 2 − 1).

Two-sample t significance test
The null hypothesis is that the two population means µ₁ and µ₂ are equal, and thus that their difference is zero:
H₀: µ₁ = µ₂, i.e., µ₁ − µ₂ = 0,
with either a one-sided or a two-sided alternative hypothesis.
We find how many standard errors (SE) away from (µ₁ − µ₂) the difference (x̄₁ − x̄₂) falls by standardizing with t. Because in a two-sample test H₀ states (µ₁ − µ₂) = 0, we simply use
t = (x̄₁ − x̄₂) / SE = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂),
with df = smallest(n₁ − 1, n₂ − 1).
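In practice, software carries out this test. A hedged sketch using SciPy's unequal-variance (Welch) two-sample t test on simulated data; note that SciPy reports the more exact Satterthwaite degrees of freedom rather than the conservative smallest(n₁ − 1, n₂ − 1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(loc=100, scale=15, size=25)   # simulated sample 1
group2 = rng.normal(loc=92, scale=12, size=22)    # simulated sample 2

# equal_var=False gives the unequal-variance (Welch) two-sample t test
t_stat, p_two_sided = stats.ttest_ind(group1, group2, equal_var=False)
print(t_stat, p_two_sided)
```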

Does smoking damage the lungs of children exposed to parental smoking?
Forced vital capacity (FVC) is the volume (in milliliters) of air that an individual can exhale in 6 seconds. FVC was obtained for a sample of children not exposed to parental smoking and for a group of children exposed to parental smoking.
We want to know whether parental smoking decreases children's lung capacity as measured by the FVC test. Is the mean FVC lower in the population of children exposed to parental smoking?
[Summary table, values not reproduced here: rows Yes / No for parental smoking; columns mean FVC, s, n.]

H₀: µ_smoke = µ_no, i.e., (µ_smoke − µ_no) = 0
Hₐ: µ_smoke < µ_no, i.e., (µ_smoke − µ_no) < 0 (one-sided)
The difference in sample averages follows approximately a t distribution with 29 d.f. We calculate the t statistic from the summary table on the previous slide. In Table D, for df = 29 we find |t| > … ⇒ p < … (one-sided).
This is a very significant difference, so we reject H₀: lung capacity is significantly impaired in children of smoking parents.
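The calculation above can be reproduced from summary statistics alone. A minimal SciPy sketch, where the means, standard deviations, and sample sizes are hypothetical placeholders (the slide's actual values are not reproduced here):

```python
from scipy import stats

# Hypothetical placeholder summary statistics (not the study's actual values)
mean_smoke, s_smoke, n_smoke = 85.0, 12.0, 30
mean_no,    s_no,    n_no    = 93.0, 14.0, 30

# Welch (unequal-variance) two-sample t test from summary statistics
t_stat, p_two_sided = stats.ttest_ind_from_stats(
    mean_smoke, s_smoke, n_smoke,
    mean_no,    s_no,    n_no,
    equal_var=False,
)
p_one_sided = p_two_sided / 2   # valid here: H_a is mu_smoke < mu_no and t_stat is negative
print(t_stat, p_one_sided)
```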

Two-sample t confidence interval
Because we have two independent samples, we use the difference between the two sample averages (x̄₁ − x̄₂) to estimate (µ₁ − µ₂).
C is the area between −t* and t* under the t distribution curve.
• We find t* in the row of Table D for df = smallest(n₁ − 1, n₂ − 1) and the column for confidence level C.
• The margin of error m is m = t* × SE = t* √(s₁²/n₁ + s₂²/n₂), and the interval is (x̄₁ − x̄₂) ± m.
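A minimal sketch of this confidence interval using the conservative degrees of freedom; the summary numbers are hypothetical.

```python
from math import sqrt
from scipy import stats

# Hypothetical summary statistics
xbar1, s1, n1 = 51.5, 11.0, 21
xbar2, s2, n2 = 41.5, 17.1, 23

df = min(n1 - 1, n2 - 1)              # conservative df
t_star = stats.t.ppf(0.975, df)       # critical value for 95% confidence
SE = sqrt(s1**2 / n1 + s2**2 / n2)
m = t_star * SE                       # margin of error
diff = xbar1 - xbar2
print((diff - m, diff + m))           # confidence interval for mu1 - mu2
```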

Common mistake!!!
A common mistake is to calculate a one-sample confidence interval for µ₁ and then check whether µ₂ falls within that confidence interval, or vice versa. This is WRONG because the variability in the sampling distribution for two independent samples is more complex and must take into account variability coming from both samples. Hence the more complex formula for the standard error.

Can directed reading activities in the classroom help improve reading ability? A class of 21 third-graders participates in these activities for 8 weeks, while a control classroom of 23 third-graders follows the same curriculum without the activities. After 8 weeks, all children take a reading test (scores in a table not reproduced here).
95% confidence interval for (µ₁ − µ₂), with df = 20 conservatively and t* = 2.086: with 95% confidence, (µ₁ − µ₂) falls within 9.96 ± 8.99, or about 1.0 to 18.9 points.
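As a quick check of the arithmetic quoted on this slide (only the slide's own numbers are used):

```python
from scipy import stats

t_star = stats.t.ppf(0.975, 20)   # ~2.086, the conservative critical value used above
estimate, margin = 9.96, 8.99     # point estimate and margin of error quoted on the slide
# Endpoints ~ (0.97, 18.95), i.e. about 1.0 to 18.9 as stated
print(round(t_star, 3), (estimate - margin, estimate + margin))
```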

Robustness
The two-sample t procedures are more robust than the one-sample t procedures. They are most robust when both sample sizes are equal and both sample distributions are similar, but even when we deviate from this, two-sample tests tend to remain quite robust.
• When planning a two-sample study, choose equal sample sizes if you can.
• As a guideline, a combined sample size (n₁ + n₂) of 40 or more will allow you to work with even the most skewed distributions.

Details of the two-sample t procedures
The true value of the degrees of freedom for a two-sample t distribution is quite lengthy to calculate. That's why we use an approximate value, df = smallest(n₁ − 1, n₂ − 1), which errs on the conservative side (it is often smaller than the exact value). Computer software, though, gives the exact degrees of freedom (or the rounded value) for your sample data.
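For reference, the "exact" degrees of freedom reported by software comes from the Welch–Satterthwaite approximation. A sketch of that formula, with hypothetical sample summaries:

```python
def welch_satterthwaite_df(s1, n1, s2, n2):
    """Approximate df used by software for the unequal-variance two-sample t."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Hypothetical sample standard deviations and sizes;
# the result is typically larger than the conservative min(n1-1, n2-1).
print(welch_satterthwaite_df(11.0, 21, 17.1, 23))
```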

Pooled two-sample procedures
There are two versions of the two-sample t test: one assuming equal variance for the two populations (the "pooled two-sample test") and one not assuming equal variance (the "unequal-variance" test, as we have studied). They have slightly different formulas and degrees of freedom.
[Figure: two normally distributed populations with unequal variances.]
The pooled (equal-variance) two-sample t test was often used before computers because it has exactly the t distribution with n₁ + n₂ − 2 degrees of freedom. However, the assumption of equal variance is hard to check, and thus the unequal-variance test is safer.

When both populations have the same standard deviation σ, the pooled estimator of σ² is
s_p² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2).
The sampling distribution of (x̄₁ − x̄₂) then has exactly the t distribution with (n₁ + n₂ − 2) degrees of freedom.
A level C confidence interval for µ₁ − µ₂ is (with area C between −t* and t*)
(x̄₁ − x̄₂) ± t* s_p √(1/n₁ + 1/n₂).
To test the hypothesis H₀: µ₁ = µ₂ against a one-sided or a two-sided alternative, compute the pooled two-sample t statistic
t = (x̄₁ − x̄₂) / (s_p √(1/n₁ + 1/n₂))
and refer it to the t(n₁ + n₂ − 2) distribution.
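A minimal sketch of the pooled computation with hypothetical summary statistics (on raw data, SciPy's stats.ttest_ind with equal_var=True performs the equivalent test):

```python
from math import sqrt
from scipy import stats

# Hypothetical summary statistics
xbar1, s1, n1 = 105.0, 12.0, 18
xbar2, s2, n2 = 98.0, 13.0, 22

# Pooled estimate of the common variance
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
sp = sqrt(sp2)

df = n1 + n2 - 2
SE = sp * sqrt(1 / n1 + 1 / n2)
t_stat = (xbar1 - xbar2) / SE                    # pooled two-sample t under H0: mu1 = mu2
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)

t_star = stats.t.ppf(0.975, df)                  # 95% CI for mu1 - mu2
ci = ((xbar1 - xbar2) - t_star * SE, (xbar1 - xbar2) + t_star * SE)
print(t_stat, p_two_sided, ci)
```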

Which type of test? One sample, paired samples, or two samples?
• Comparing vitamin content of bread immediately after baking vs. 3 days later (the same loaves are used on day one and 3 days later).
• Comparing vitamin content of bread immediately after baking vs. 3 days later (tests made on independent loaves).
• Average fuel efficiency for 2005 vehicles is 21 miles per gallon. Is average fuel efficiency higher in the new generation of "green vehicles"?
• Is blood pressure altered by use of an oral contraceptive? Comparing a group of women not using an oral contraceptive with a group taking it.
• Review insurance records for dollar amount paid after fire damage in houses equipped with a fire extinguisher vs. houses without one. Was there a difference in the average dollar amount paid?

Homework
• Read section 7.2 and carefully work through the examples.
• Work on: #7.56, 57, 61-63, 69, 70, 81, 82, 85.
• Use JMP where you can so that it will compute the d.f. Otherwise, use the "second approximation" in the book: smaller(n₁ − 1, n₂ − 1).