
1 HW: Web Project proposal due next Thursday. See the web page for more detail.

2 Inference from Small Samples (Chapter 10) Data from a manufacturer of children's pajamas. The manufacturer wants to develop materials that take longer to burn, and runs an experiment to compare four types of fabric. (They considered other factors too, but we'll only consider the fabrics. Source: Matt Wand)

3 Fabric Data: Tried to light 4 samples of each of 4 different (unoccupied!) pajama fabrics on fire. A higher burn time means less flammable. [Plot: burn time vs. fabric, for fabrics 1-4.]
Fabric 1: mean = 16.85, std dev = 0.94
Fabric 2: mean = 10.95, std dev = 1.237
Fabric 3: mean = 10.50, std dev = 1.137
Fabric 4: mean = 11.00, std dev = 1.299

4 Confidence Intervals? Suppose we want to make confidence intervals for the mean "burn time" of each fabric type. Can I use x̄ +/- z_{α/2} s/sqrt(n) for each one? Why or why not?

5 Answer: t_{n-1} is the "t distribution" with n-1 degrees of freedom (df). The sample size (n = 4) is too small to justify the central-limit-theorem-based normal approximation. More precisely:
– If x_i is normal, then (x̄ – μ)/[σ/sqrt(n)] is normal for any n.
– If x_i is normal, then (x̄ – μ)/[s/sqrt(n)] is approximately normal for n > 30.
– New: Suppose x_i is approximately normal (and an independent sample). Then (x̄ – μ)/[s/sqrt(n)] ~ t_{n-1}, with df = (number of data points used to estimate x̄) - 1.
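The deck itself uses Minitab and t tables; purely as an illustration (Python with numpy/scipy assumed, not part of the original slides), here is a minimal simulation sketch of the "New" result: with only n = 4 normal observations, the mean standardized by the estimated s has heavier tails than a standard normal, matching the t_3 distribution.

```python
# Minimal sketch: draw many samples of size n = 4 from a normal
# distribution and standardize the sample mean with the *estimated*
# std dev s.  The result follows a t distribution with n-1 = 3 df,
# which has heavier tails than the standard normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 1.2, 4, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)              # sample std dev (divides by n-1)
t_stat = (xbar - mu) / (s / np.sqrt(n))

# The two tail probabilities should agree with each other, and both are
# well above the 0.05 you would get from a standard normal.
print("simulated P(|T| > 1.96):", np.mean(np.abs(t_stat) > 1.96))
print("t_3 tail probability:   ", 2 * stats.t.sf(1.96, df=n - 1))
```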

6 What are degrees of freedom? Think of them as a parameter. The t-distribution has one parameter: df. The normal distribution has 2 parameters: mean and variance.

7 "Student" t-distribution (like a normal distribution, but with "heavier tails"). [Plot: t dist'n with 3 df vs. normal dist'n.] As df increases, t_{n-1} approaches the normal dist'n; the two are indistinguishable for n > 30 or so. Idea: estimating the std dev leads to "more variability", and more variability means a higher chance of an "extreme" observation.
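A small sketch (Python/scipy assumed, not from the original slides) of the "indistinguishable for n > 30" point: the t critical value shrinks toward the normal 1.96 as df grows.

```python
# Sketch: 97.5th percentile of t_{df} for increasing df.  The cutoff
# shrinks toward the normal value of 1.96, which is why the t and
# normal distributions are hard to tell apart for n > 30 or so.
from scipy import stats

for df in (3, 5, 10, 30, 100):
    print(f"df={df:3d}: t_0.025 cutoff = {stats.t.ppf(0.975, df):.3f}")
print(f"normal:  z_0.025 cutoff = {stats.norm.ppf(0.975):.3f}")
```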

8 t-based confidence intervals A (1-α)-level confidence interval for a mean: x̄ +/- t_{α/2,n-1} s/sqrt(n), where t_{α/2,n-1} is the number such that Pr(T > t_{α/2,n-1}) = α/2 for T ~ t_{n-1}. (See the t table opposite the normal table inside the book cover...)

9 Back to the burn time example

Fabric   x̄       s       t_{0.025,3}   95% CI
1        16.85   0.940   3.182         (15.35, 18.35)
2        10.95   1.237   3.182         (8.98, 12.91)
3        10.50   1.137   3.182         (8.69, 12.31)
4        11.00   1.299   3.182         (8.93, 13.07)
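For reference only (Python/scipy assumed; not part of the original deck), a minimal sketch that reproduces these intervals from the summary statistics:

```python
# Sketch reproducing the 95% CIs in the table above from the summary
# statistics (n = 4 samples per fabric, so df = 3).
import numpy as np
from scipy import stats

n = 4
fabrics = {1: (16.85, 0.940), 2: (10.95, 1.237),
           3: (10.50, 1.137), 4: (11.00, 1.299)}

t_crit = stats.t.ppf(0.975, df=n - 1)        # t_{0.025,3} = 3.182
for fab, (xbar, s) in fabrics.items():
    half = t_crit * s / np.sqrt(n)
    print(f"Fabric {fab}: ({xbar - half:.2f}, {xbar + half:.2f})")
```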

10 t-based hypothesis test for a single mean Mechanics: replace the z_{α/2} cutoff with t_{α/2,n-1}. Ex: fabric 1 burn time data. H_0: mean is 15. H_A: mean isn't 15. Test stat: |(16.85-15)/(0.94/sqrt(4))| = 3.94. Reject at α = 5% since 3.94 > t_{0.025,3} = 3.182. P-value = 2*Pr(T > 3.94) where T ~ t_3. This is between 2% and 5% since t_{0.025,3} = 3.182 and t_{0.01,3} = 4.541. (P-value = 2*0.0146 = 0.029 from software.) See Minitab: Basic Statistics: 1-sample t test. Idea: t-based tests are harder to pass than large-sample normal-based tests. Why does that make sense?
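The slide does this test in Minitab; purely as an illustration (Python/scipy assumed, not from the original deck), the same calculation from the summary statistics looks like this:

```python
# Sketch of the one-sample t test above, computed from the summary
# statistics for fabric 1 (mean 16.85, std dev 0.94, n = 4).
import numpy as np
from scipy import stats

xbar, s, n, mu0 = 16.85, 0.94, 4, 15.0
t = (xbar - mu0) / (s / np.sqrt(n))          # test statistic, about 3.94
p = 2 * stats.t.sf(abs(t), df=n - 1)         # two-sided p-value, about 0.029
print(t, p)
# With the raw burn times you could use stats.ttest_1samp(data, popmean=15).
```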

11 Comparison of 2 means Example:
– Is the mean burn time of fabric 2 different from the mean burn time of fabric 3?
– Why can't we answer this with the hypothesis test H_0: mean of fabric 2 = 10.5, H_A: mean of fabric 2 ≠ 10.5? (10.5 is x̄ for fabric 3.)
– What's the appropriate hypothesis test?

12 H_0: mean fab 2 – mean fab 3 = 0. H_A: mean fab 2 – mean fab 3 ≠ 0. Let's do this with a confidence interval (α = 0.05). The 95% large-sample CI would be: (x̄_2 – x̄_3) +/- z_{α/2} sqrt[s²_2/n_2 + s²_3/n_3]. We can't use this because it will be "too narrow" (i.e., it claims to be a 95% CI but it's actually more like an 89% one...).
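As a rough check of the "too narrow" claim (Python/scipy assumed, not part of the original deck): if the standardized difference really follows a t distribution with n_2 + n_3 - 2 = 6 df, the nominal 95% z interval covers only about 90% of the time, in the same spirit as the slide's "89%" figure.

```python
# Rough sketch: actual coverage of a nominal 95% z interval when the
# standardized difference of means really has a t distribution with
# n2 + n3 - 2 = 6 degrees of freedom (n2 = n3 = 4 here).
from scipy import stats

coverage = 1 - 2 * stats.t.sf(1.96, df=6)
print(round(coverage, 3))    # about 0.90, well short of the claimed 0.95
```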

13 The CI is based on the small-sample distribution of the difference between means. That distribution is different depending on whether the variances of the two samples are approximately equal or not. Small-sample CI:
– If var(fabric 2) is approximately equal to var(fabric 3), then just replace z_{α/2} with t_{α/2,n2+n3-2}, where df = n2+n3-2 = (n2-1)+(n3-1). This is called "pooling" the variances. Rule of thumb: pooling is OK if 1/3 < (s²_3/s²_2) < 3.
– If not, then use software. (Software adjusts the degrees of freedom for an "approximate" confidence interval.) This is more conservative.
Read section 10.4.

14 Two-sample T for f2 vs f3
     N   Mean  StDev  SE Mean
f2   4  10.95   1.24     0.62
f3   4  10.50   1.14     0.57
Difference = mu f2 - mu f3
Estimate for difference: 0.450
95% CI for difference: (-1.606, 2.506)
T-Test of difference = 0 (vs not =): T-Value = 0.54  P-Value = 0.611  DF = 6
Both use Pooled StDev = 1.19
Minitab: Stat: Basic Statistics: 2-sample t
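The output above is from Minitab; as a rough equivalent (Python/scipy assumed, not part of the original deck), the same pooled two-sample t test can be reproduced from the summary statistics:

```python
# Sketch replicating the Minitab output above from the summary
# statistics, using scipy's two-sample t test from stats
# (pooled variance when equal_var=True).
from scipy import stats

res = stats.ttest_ind_from_stats(mean1=10.95, std1=1.237, nobs1=4,
                                 mean2=10.50, std2=1.137, nobs2=4,
                                 equal_var=True)
print(res.statistic, res.pvalue)   # about 0.54 and 0.61, as in the output
# equal_var=False would give the Welch (unequal-variance) version,
# with software-adjusted degrees of freedom.
```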

15 Hypothesis test: comparison of 2 means As in the 1-mean case, replace z_{α/2} with the appropriate t-based cutoff value. When σ²_1 is approximately equal to σ²_2, the test statistic is t = |x̄_1 – x̄_2| / sqrt(s²_1/n_1 + s²_2/n_2). Reject if t > t_{α/2,n1+n2-2}. P-value = 2*Pr(T > t) where T ~ t_{n1+n2-2}. For unequal variances, software adjusts the df on the cutoff.

16 "Paired t-test" In the previous comparison of two means, the data from sample 1 and sample 2 were unrelated (the fabric 2 and fabric 3 observations are independent). Consider the following experiment:
– "Separated identical twins" (adoption) experiments: 15 sets of twins; 1 twin raised in the city and 1 raised in the country; measure the IQ of each twin. We want to compare the average IQ of people raised in cities versus people raised in the country. Since twins share a common genetic makeup, the IQs within a pair of twins probably are not independent.

17 Data: One Way of Looking At It
[Plot: IQ vs. twin-pair number (1-15), with separate symbols for city and country. City mean = 110.47; country mean = 106.33.]
The twins are "linked" by these numbers:
       country  city
 [1,]      117   118
 [2,]      153   156
 [3,]       73    71
 [4,]       64    65
 [5,]       95   109
 [6,]      120   123
 [7,]       94    88
 [8,]      106   121
 [9,]       90    95
[10,]       96   110
[11,]       67    66
[12,]      102   112
[13,]      111   110
[14,]      127   133
[15,]      180   180

18
       country  city  diff
 [1,]      117   118    -1
 [2,]      153   156    -3
 [3,]       73    71     2
 [4,]       64    65    -1
 [5,]       95   109   -14
 [6,]      120   123    -3
 [7,]       94    88     6
 [8,]      106   121   -15
 [9,]       90    95    -5
[10,]       96   110   -14
[11,]       67    66     1
[12,]      102   112   -10
[13,]      111   110     1
[14,]      127   133    -6
[15,]      180   180     0
[Plot: difference (country - city) vs. pair index.]
Mean difference = -4.13 (country – city). Of course, mean difference = mean(country) - mean(city). If we want to test "difference = 0", we need the variance of the differences.

19 Paired t-test One twin's observation is dependent on the other twin's observation, but the differences are independent across twins. So we estimate var(differences) with the sample variance of the differences; this is not the same as var(city) + var(country). As a result, we can do an ordinary one-sample t-test on the differences. This is called a "paired t-test". When data naturally come in pairs and the pairs are related, a "paired t-test" is appropriate.
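A minimal sketch of this idea (Python with numpy/scipy assumed, not part of the original deck), applied to the twin data from the previous slides:

```python
# Sketch: the paired analysis is just a one-sample t test on the
# within-pair differences, using the sample std dev of those
# differences (not var(city) + var(country)).
import numpy as np
from scipy import stats

country = np.array([117, 153, 73, 64, 95, 120, 94, 106, 90, 96,
                    67, 102, 111, 127, 180])
city    = np.array([118, 156, 71, 65, 109, 123, 88, 121, 95, 110,
                    66, 112, 110, 133, 180])

diff = country - city
print("mean difference:", diff.mean())            # about -4.13
print("std dev of differences:", diff.std(ddof=1))  # about 6.46
print(stats.ttest_1samp(diff, popmean=0))         # t about -2.48, p about 0.027
```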

20 "Paired t-test" Minitab: Basic Statistics: Paired t-test:
Paired T for Country - City
             N    Mean   StDev  SE Mean
Country     15  106.33   31.03     8.01
City        15  110.47   31.73     8.19
Difference  15   -4.13    6.46     1.67
95% CI for mean difference: (-7.71, -0.56)
T-Test of mean difference = 0 (vs not = 0): T-Value = -2.48  P-Value = 0.027
Compare this to a 2-sample t-test.

21 Compare "Paired t-test" vs. "2-sample t-test"
Paired T for Country - City
             N    Mean   StDev  SE Mean
Country     15  106.33   31.03     8.01
City        15  110.47   31.73     8.19
Difference  15   -4.13    6.46     1.67
95% CI for mean difference: (-7.71, -0.56)
T-Test of mean difference = 0 (vs not = 0): T-Value = -2.48  P-Value = 0.027

Two-sample T for Country vs City
           N   Mean  StDev  SE Mean
Country   15  106.3   31.0      8.0
City      15  110.5   31.7      8.2
Difference = mu Country - mu City
Estimate for difference: -4.1
95% CI for difference: (-27.6, 19.3)
T-Test of difference = 0 (vs not =): T-Value = -0.36  P-Value = 0.721  DF = 28
Both use Pooled StDev = 31.4
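For comparison (Python/scipy assumed, not part of the original deck), the same twin data run through both tests shows the contrast in the two outputs above:

```python
# Sketch: the same twin data analyzed both ways.  The paired test uses
# the small spread of the within-pair differences; the two-sample test
# uses the much larger person-to-person spread, so it finds no evidence
# of a difference.
from scipy import stats

country = [117, 153, 73, 64, 95, 120, 94, 106, 90, 96, 67, 102, 111, 127, 180]
city    = [118, 156, 71, 65, 109, 123, 88, 121, 95, 110, 66, 112, 110, 133, 180]

paired = stats.ttest_rel(country, city)
pooled = stats.ttest_ind(country, city)     # pooled 2-sample t test
print("paired:   t = %.2f, p = %.3f" % (paired.statistic, paired.pvalue))
print("2-sample: t = %.2f, p = %.3f" % (pooled.statistic, pooled.pvalue))
```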

22 Compare "Paired t-test" vs. "2-sample t-test" The estimate of the difference is the same,
– but the standard error is very different: Paired: std error of the mean difference = 6.46/sqrt(15) = 1.67. 2-sample: sqrt[(31.0²/15) + (31.7²/15)] = 11.46.
– The "cutoff" (df) is different too: t_{0.025,14} for paired; t_{0.025,28} for 2-sample.

24 Where we've been We can use data to address the following questions:
1. Question: Is a mean = some number?
   a. Answer: If n > 30, large-sample "Z" test and confidence interval for means (chapters 8 and 9).
   b. Answer: If n <= 30 and the data are approximately normal, "t" test and confidence intervals for means (chapter 10).
2. Question: Is a proportion = some percentage?
   Answer: If n > 30, large-sample "Z" test and confidence interval for proportions (chapters 8 and 9). If n <= 30, the t-test is not appropriate.

25 Where we've been
3. Question: Is the difference between two means = some number?
   a. Answer: If n > 30 and the samples are independent (not paired), large-sample "Z" test and confidence interval for means (chapters 8 and 9).
   b. Answer: If n <= 30 and the samples are independent (not paired), small-sample "t" test and confidence interval for means (chapter 10).
   c. Answer: If the samples are dependent (paired), paired t-test (chapter 10).
4. Question: Is the difference between two proportions = some percentage?
   Answer: If n > 30 and the samples are independent, "Z" test for proportions (chapters 8 and 9). (No t-test...)

