HW: see web. Project proposal due next Thursday; see web for more detail.
Inference from Small Samples (Chapter 10). Data from a manufacturer of children's pajamas. They want to develop materials that take longer to burn, so they run an experiment to compare four types of fabric. (They considered other factors too, but we'll only consider the fabrics. Source: Matt Wand)
Fabric data: tried to light 4 samples of each of 4 different (unoccupied!) pajama fabrics on fire and recorded the burn time. A higher number means less flammable. [Figure: burn time by fabric]
Fabric 1: mean = 16.85, std dev = 0.94
Fabric 2: mean = 10.95, std dev = 1.237
Fabric 3: mean = 10.50, std dev = 1.137
Fabric 4: mean = 11.00, std dev = 1.299
Confidence intervals? Suppose we want to make a confidence interval for the mean burn time of each fabric type. Can I use x̄ +/- z_{α/2} s/sqrt(n) for each one? Why or why not?
Answer: the sample size (n = 4) is too small to justify the central limit theorem based normal approximation; instead we use t_{n-1}, the "t distribution" with n-1 degrees of freedom (df). More precisely:
–If x_i is normal, then (x̄ - μ)/[σ/sqrt(n)] is normal for any n.
–If x_i is normal, then (x̄ - μ)/[s/sqrt(n)] is approximately normal for n > 30.
–New: suppose x_i is approximately normal (and an independent sample). Then (x̄ - μ)/[s/sqrt(n)] ~ t_{n-1}, where the degrees of freedom are (number of data points used to estimate x̄) - 1.
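One way to convince yourself of the last bullet (not from the slides; a minimal simulation sketch in Python/scipy, with the seed, the true mean 10, and the std dev 2 chosen arbitrarily for illustration): draw many normal samples of size n = 4, form (x̄ - μ)/(s/sqrt(n)), and check how often it lands beyond the normal cutoff versus the t_3 cutoff.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 4, 100_000
x = rng.normal(loc=10, scale=2, size=(reps, n))                  # normal data, true mean mu = 10
tstat = (x.mean(axis=1) - 10) / (x.std(axis=1, ddof=1) / np.sqrt(n))

# Fraction beyond the normal 0.025 cutoff (1.96): roughly 0.14, not the nominal 0.05
print((np.abs(tstat) > stats.norm.ppf(0.975)).mean())
# Fraction beyond the t_{n-1} cutoff (3.182): about 0.05, as advertised
print((np.abs(tstat) > stats.t.ppf(0.975, df=n - 1)).mean())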
What are degrees of freedom? Think of them as a parameter: the t-distribution has one parameter (df), while the normal distribution has 2 parameters (mean and variance).
"Student" t-distribution: like a normal distribution, but with "heavier tails". [Figure: t distribution with 3 df vs. normal distribution] As df increases, t_{n-1} becomes the normal distribution; the two are indistinguishable for n > 30 or so. Idea: estimating the std dev leads to "more variability", and more variability = higher chance of an "extreme" observation.
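To see the heavier tails shrink as df grows, compare the t cutoff t_{0.025,df} to the normal cutoff 1.96 (an aside, not from the slides; a small scipy check):

from scipy import stats

for df in (3, 5, 10, 30, 100):
    print(df, round(stats.t.ppf(0.975, df), 3))    # 3.182, 2.571, 2.228, 2.042, 1.984
print("normal:", round(stats.norm.ppf(0.975), 3))  # 1.960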
t-based confidence intervals. A (1 - α) level confidence interval for a mean is x̄ +/- t_{α/2,n-1} s/sqrt(n), where t_{α/2,n-1} is the number such that Pr(T > t_{α/2,n-1}) = α/2 for T ~ t_{n-1} (see the t table opposite the normal table inside the book cover…).
Back to the burn time example (n = 4, t_{0.025,3} = 3.182):
Fabric 1: x̄ = 16.85, s = 0.94, 95% CI = (15.35, 18.35)
Fabric 2: x̄ = 10.95, s = 1.237, 95% CI = (8.98, 12.91)
Fabric 3: x̄ = 10.50, s = 1.137, 95% CI = (8.69, 12.31)
Fabric 4: x̄ = 11.00, s = 1.299, 95% CI = (8.93, 13.07)
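Each interval is x̄ +/- t_{0.025,3} s/sqrt(4). The course uses Minitab, but as a cross-check, a short Python/scipy sketch reproduces this table (up to rounding in the quoted summary statistics):

import numpy as np
from scipy import stats

n = 4
t_cut = stats.t.ppf(0.975, df=n - 1)   # t_{0.025,3} = 3.182
summaries = {"Fabric 1": (16.85, 0.94), "Fabric 2": (10.95, 1.237),
             "Fabric 3": (10.50, 1.137), "Fabric 4": (11.00, 1.299)}
for fabric, (xbar, s) in summaries.items():
    half = t_cut * s / np.sqrt(n)      # half-width of the 95% CI
    print(fabric, round(xbar - half, 2), round(xbar + half, 2))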
t-based hypothesis test for a single mean. Mechanics: replace the z_{α/2} cutoff with t_{α/2,n-1}. Example: fabric 1 burn time data. H_0: mean is 15; H_A: mean isn't 15. Test stat: |(16.85 - 15)/(0.94/sqrt(4))| = 3.94. Reject at α = 5% since 3.94 > t_{0.025,3} = 3.182. P-value = 2*Pr(T > 3.94) where T ~ t_3. This is between 2% and 5% since t_{0.025,3} = 3.182 and t_{0.01,3} = 4.541 (p-value = 2*0.0146 = 0.029 from software). See Minitab: Basic Statistics: 1-sample t test. Idea: t-based tests are harder to pass than the large-sample normal-based test. Why does that make sense?
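The same test from the summary statistics in Python/scipy (an aside; the four raw burn times aren't listed on the slide, so this works from x̄ = 16.85, s = 0.94, n = 4 rather than calling a one-sample t routine on raw data):

import numpy as np
from scipy import stats

xbar, s, n, mu0 = 16.85, 0.94, 4, 15
t = abs(xbar - mu0) / (s / np.sqrt(n))      # about 3.94
p = 2 * stats.t.sf(t, df=n - 1)             # about 0.029, matching 2*0.0146
print(t, p, stats.t.ppf(0.975, df=n - 1))   # reject at 5% since 3.94 > 3.182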
Comparison of 2 means. Example:
–Is the mean burn time of fabric 2 different from the mean burn time of fabric 3?
–Why can't we answer this with the hypothesis test H_0: mean of fabric 2 = 10.5 vs. H_A: mean of fabric 2 doesn't = 10.5? (Note: 10.5 is x̄ for fabric 3.)
–What's the appropriate hypothesis test?
H_0: mean fab 2 - mean fab 3 = 0; H_A: mean fab 2 - mean fab 3 not = 0. Let's do this with a confidence interval (α = 0.05). The 95% large-sample CI would be (x̄_2 - x̄_3) +/- z_{α/2} sqrt[s_2²/n_2 + s_3²/n_3]. We can't use this because it will be "too narrow" (i.e., it claims to be a 95% CI but is actually more like an 89% one…).
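Where does a number like 89% come from? Roughly: with n_2 = n_3 = 4 and similar variances, the standardized difference behaves like a t with 6 df, so an interval built with the normal cutoff 1.96 covers with probability about Pr(|T_6| <= 1.96), which is closer to 90% than 95%. This is only a back-of-the-envelope version of the slide's claim (a scipy sketch, not an exact coverage calculation):

from scipy import stats

# Probability that a t_6 variable lands inside +/- 1.96 (the normal 95% cutoff)
coverage = 1 - 2 * stats.t.sf(stats.norm.ppf(0.975), df=6)
print(round(coverage, 3))   # about 0.90, not 0.95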
The CI is based on the small-sample distribution of the difference between means. That distribution is different depending on whether the variances of the two populations are approximately equal or not. Small-sample CI:
–If var(fabric 2) is approximately = var(fabric 3), then just replace z_{α/2} with t_{α/2,n2+n3-2}, where df = n2 + n3 - 2 = (n2 - 1) + (n3 - 1). This is called "pooling" the variances.
–If not, then use software. (Software adjusts the degrees of freedom for an "approximate" confidence interval; this is the more conservative choice.)
Rule of thumb: pooling is OK if 1/3 < s_3²/s_2² < 3. Read section 10.4.
Two-sample T for f2 vs f3
     N   Mean  StDev  SE Mean
f2   4  10.95   1.24     0.62
f3   4  10.50   1.14     0.57

Difference = mu f2 - mu f3
Estimate for difference: 0.450
95% CI for difference: (-1.606, 2.506)
T-Test of difference = 0 (vs not =): T-Value = 0.54  P-Value = 0.61  DF = 6
Both use Pooled StDev = 1.19

Minitab: Stat: Basic Statistics: 2-sample t
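A Python/scipy sketch (an aside to the Minitab output) that reproduces these numbers from the fabric 2 and fabric 3 summary statistics; scipy.stats.ttest_ind_from_stats with equal_var=True does the pooled test, and equal_var=False gives the unequal-variance version in which the df are adjusted:

import numpy as np
from scipy import stats

t, p = stats.ttest_ind_from_stats(mean1=10.95, std1=1.237, nobs1=4,
                                  mean2=10.50, std2=1.137, nobs2=4,
                                  equal_var=True)             # pooled, df = 4 + 4 - 2 = 6
print(round(float(t), 2), round(float(p), 2))                 # 0.54 and a large p-value

# 95% CI for the difference using the pooled std dev (about 1.19)
sp = np.sqrt(((4 - 1) * 1.237**2 + (4 - 1) * 1.137**2) / (4 + 4 - 2))
half = stats.t.ppf(0.975, df=6) * sp * np.sqrt(1/4 + 1/4)
print(round(0.45 - half, 3), round(0.45 + half, 3))           # (-1.606, 2.506)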
Hypothesis test: comparison of 2 means. As in the one-mean case, replace z_{α/2} with the appropriate t-based cutoff value. When σ_1² approximately = σ_2², the test statistic is t = |x̄_1 - x̄_2| / sqrt[s_1²/n_1 + s_2²/n_2]. Reject if t > t_{α/2,n1+n2-2}. P-value = 2*Pr(T > t) where T ~ t_{n1+n2-2}. For unequal variances, software adjusts the df on the cutoff.
"Paired t-test". In the previous comparison of two means, the data from sample 1 and sample 2 were unrelated (the fabric 2 and fabric 3 observations are independent). Consider the following experiment:
–"Separated identical twins" (adoption) experiment:
  15 sets of twins
  1 twin raised in the city and 1 raised in the country
  measure the IQ of each twin
  want to compare the average IQ of people raised in cities versus people raised in the country
  since twins share a common genetic makeup, the IQs within a pair of twins probably are not independent
Data: one way of looking at it. [Table: IQ for each of the 15 twin pairs, one row per pair and one column per twin ("country", "city"), together with the city mean and the country mean.] The two observations in a row are "linked": they come from the same pair of twins.
[Table: the same 15 twin pairs with a third column diff = country - city; plot of the differences against pair index.] Mean difference = mean(country - city). Of course, mean difference = mean(country) - mean(city). If we want to test "difference = 0", we need the variance of the differences.
Paired t-test. One twin's observation is dependent on the other twin's observation, but the differences are independent across twins. So we estimate var(differences) with the sample variance of the differences; this is not the same as var(city) + var(country). As a result, we can do an ordinary one-sample t-test on the differences. This is called a "paired t-test". When data naturally come in pairs and the pairs are related, a paired t-test is appropriate.
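In code this is literally a one-sample t-test on the differences. The twin IQ values aren't reproduced in this handout, so the arrays below are made-up placeholder numbers purely to show the mechanics (a Python/scipy sketch, not the course's Minitab workflow):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical IQ scores for 15 twin pairs; correlated within a pair by construction
city = rng.normal(loc=100, scale=15, size=15)
country = city + rng.normal(loc=-4, scale=6, size=15)

paired = stats.ttest_rel(country, city)                     # paired t-test
one_sample = stats.ttest_1samp(country - city, popmean=0)   # one-sample t on the differences
print(paired.statistic, paired.pvalue)
print(one_sample.statistic, one_sample.pvalue)              # identical to the paired results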
"Paired t-test". Minitab: Basic Statistics: Paired t.

Paired T for Country - City (N = 15 pairs)
Difference:  Mean = -4.13, StDev = 6.46, SE Mean = 1.67
95% CI for mean difference: (-7.71, -0.56)
T-Test of mean difference = 0 (vs not = 0): T-Value = -2.48  P-Value = 0.026

Compare this to a 2-sample t-test.
Compare "paired t-test" vs. "2-sample t-test" (same data, N = 15 in each group):

Paired T for Country - City
Difference:  Mean = -4.13, StDev = 6.46, SE Mean = 1.67
95% CI for mean difference: (-7.71, -0.56)
T-Test of mean difference = 0 (vs not = 0): T-Value = -2.48  P-Value = 0.026

Two-sample T for Country vs City
Difference = mu Country - mu City
Estimate for difference: -4.13
95% CI for difference: (-27.6, 19.3)
T-Test of difference = 0 (vs not =): T-Value = -0.36  P-Value = 0.72  DF = 28
Both use Pooled StDev = 31.4
Compare "paired t-test" vs. "2-sample t-test". The estimate of the difference is the same,
–but the variance estimate is very different:
  Paired: std dev of the estimated difference = s_diff/sqrt(15) = 6.46/sqrt(15) = 1.67
  2-sample: sqrt[(31.0²/15) + (31.7²/15)] = 11.5
–and the "cutoff" (df) is different too: t_{0.025,14} for paired, t_{0.025,28} for 2-sample.
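Plugging in the numbers (a small Python check; the std dev of the differences, about 6.46, is backed out from the paired confidence interval above rather than read directly off a slide):

import numpy as np
from scipy import stats

n = 15
se_paired = 6.46 / np.sqrt(n)                       # ~1.67: uses the twin pairing
se_2sample = np.sqrt(31.0**2 / n + 31.7**2 / n)     # ~11.5: treats the samples as if independent
print(round(se_paired, 2), round(se_2sample, 2))

print(round(stats.t.ppf(0.975, df=n - 1), 3),       # 2.145: paired cutoff, 14 df
      round(stats.t.ppf(0.975, df=2 * n - 2), 3))   # 2.048: 2-sample cutoff, 28 df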
Where we've been. We can use data to address the following questions:
1. Question: Is a mean = some number?
   a. Answer: If n > 30, large-sample "Z" test and confidence interval for means (chapters 8 and 9).
   b. Answer: If n <= 30 and the data are approximately normal, "t" test and confidence interval for means (chapter 10).
2. Question: Is a proportion = some percentage?
   Answer: If n > 30, large-sample "Z" test and confidence interval for proportions (chapters 8 and 9). If n <= 30, the t-test is not appropriate.
Where we've been (continued).
3. Question: Is a difference between two means = some number?
   a. Answer: If n > 30 and the samples are independent (not paired), large-sample "Z" test and confidence interval for means (chapters 8 and 9).
   b. Answer: If n <= 30, the data are approximately normal, and the samples are independent (not paired), "t" test and confidence interval for means (chapter 10).
   c. Answer: If the samples are dependent (paired), paired t-test (chapter 10).
4. Question: Is a difference between two proportions = some percentage?
   Answer: If n > 30 and the samples are independent, "Z" test for proportions (chapters 8 and 9). (No t-test…)