Power and Sample Size

Presentation transcript:

Power and Sample Size

IF the null hypothesis H₀: μ = μ₀ is true, then we should expect a random sample mean to lie in its "acceptance region" with probability 1 − α, the "confidence level." That is, P(Accept H₀ | H₀ is true) = 1 − α. Therefore, we should expect a random sample mean to lie in its "rejection region" with probability α, the "significance level." That is, P(Reject H₀ | H₀ is true) = α.

[Figure: the "null distribution" of the sample mean under H₀: μ = μ₀, with an acceptance region of area 1 − α and a two-sided rejection region of area α/2 in each tail (a "Type 1 error"); the upper cutoff is μ₀ + z_{α/2}(σ/√n).]

Power and Sample Size

IF the null hypothesis H₀: μ = μ₀ is false, then the "power" to correctly reject it in favor of a particular alternative Hₐ: μ = μ₁ is P(Reject H₀ | H₀ is false) = 1 − β. Thus, P(Accept H₀ | H₀ is false) = β, the probability of a "Type 2 error."

[Figure: the null distribution (H₀: μ = μ₀) with its acceptance and rejection regions, overlaid with the "alternative distribution" (Hₐ: μ = μ₁); the area 1 − β of the alternative distribution lying in the rejection region is the power. The rejection cutoff can be written either as μ₀ + z_{α/2}(σ/√n) from the null distribution or as μ₁ − z_β(σ/√n) from the alternative distribution.]

Set them equal to each other, and solve for n…
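These two conditional probabilities are easy to check by simulation. The following R sketch is not from the slides; it uses made-up illustrative values (μ₀ = 0, μ₁ = 0.5, σ = 1, n = 25, α = .05) and estimates P(Reject H₀ | H₀ true) and P(Reject H₀ | H₀ false) for the two-sided z-test by repeated sampling:

# Monte Carlo check of the Type 1 error rate and the power of a two-sided z-test.
# All values below are illustrative only.
set.seed(1)
alpha <- 0.05; mu0 <- 0; mu1 <- 0.5; sigma <- 1; n <- 25; reps <- 10000
zcrit <- qnorm(1 - alpha / 2)

reject <- function(mu) {
  xbar <- mean(rnorm(n, mean = mu, sd = sigma))
  abs((xbar - mu0) / (sigma / sqrt(n))) > zcrit   # TRUE if H0 is rejected
}

mean(replicate(reps, reject(mu0)))   # ~ 0.05, the Type 1 error rate alpha
mean(replicate(reps, reject(mu1)))   # ~ 0.70, the power 1 - beta for this alternative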

Given:
- X ~ N(μ, σ): normally distributed population random variable, with unknown mean but known standard deviation
- H₀: μ = μ₀ (null hypothesis value)
- Hₐ: μ = μ₁ (alternative hypothesis, specific value)
- α: significance level (or equivalently, confidence level 1 − α)
- 1 − β: power (or equivalently, Type 2 error rate β)

Then the minimum required sample size is n ≥ [(z_{α/2} + z_β) / Δ]², where Δ = |μ₁ − μ₀| / σ.

Example: σ = 1.5 yrs, μ₀ = 25.4 yrs, α = .05, so z_{.025} = 1.96. Suppose it is suspected that currently, μ₁ = 26 yrs, and we want 90% power of correctly rejecting H₀ in favor of Hₐ, if it is false: 1 − β = .90, so β = .10 and z_{.10} = 1.28. Then Δ = |26 − 25.4| / 1.5 = 0.4, so the minimum sample size required is n ≥ [(1.96 + 1.28) / 0.4]² ≈ 65.6, i.e., n ≥ 66. Want more power!
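This calculation is easy to reproduce in R. The helper below is only an illustrative sketch (the function name sample_size_one_mean is not from the slides); it implements the formula above for the two-sided, σ-known case:

# Minimum n to detect mu1 vs. mu0 at two-sided level alpha with power 1 - beta,
# when sigma is known (one-sample z-test).
sample_size_one_mean <- function(mu0, mu1, sigma, alpha = 0.05, power = 0.90) {
  delta   <- abs(mu1 - mu0) / sigma          # standardized effect size
  z_alpha <- qnorm(1 - alpha / 2)            # z_{alpha/2}
  z_beta  <- qnorm(power)                    # z_{beta}
  ceiling(((z_alpha + z_beta) / delta)^2)    # round up to the next whole subject
}

sample_size_one_mean(25.4, 26,   1.5, power = 0.90)   # 66
sample_size_one_mean(25.4, 26,   1.5, power = 0.95)   # 82
sample_size_one_mean(25.4, 25.7, 1.5, power = 0.95)   # 325

The next two slides rerun the same formula with higher power and with a smaller suspected difference, as in the last two calls above.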

Given (as before): X ~ N(μ, σ) with σ known; H₀: μ = μ₀; Hₐ: μ = μ₁; significance level α; power 1 − β. The minimum required sample size is n ≥ [(z_{α/2} + z_β) / Δ]², with Δ = |μ₁ − μ₀| / σ.

Example: σ = 1.5 yrs, μ₀ = 25.4 yrs, α = .05, z_{.025} = 1.96. Suppose it is suspected that currently, μ₁ = 26 yrs, so Δ = |26 − 25.4| / 1.5 = 0.4.
- Want 90% power of correctly rejecting H₀ in favor of Hₐ, if it is false: 1 − β = .90, β = .10, z_{.10} = 1.28, so the minimum sample size required is n ≥ 66.
- Want 95% power of correctly rejecting H₀ in favor of Hₐ, if it is false: 1 − β = .95, β = .05, z_{.05} = 1.645, so the minimum sample size required is n ≥ 82.

Change μ₁…

 = |26 – 25.4| / 1.5 = 0.4  = |25.7 – 25.4| / 1.5 = 0.2 Suppose it is suspected that currently, μ 1 = 26 yrs.Suppose it is suspected that currently, μ 1 = 25.7 yrs. n  82 N(0, 1) X ~ N( μ, σ )Normally-distributed population random variable, with unknown mean, but known standard deviation H 0 : μ = μ 0 Null Hypothesis value H A : μ = μ 1 Alternative Hypothesis specific value  significance level (or equivalently, confidence level 1 –  ) 1 –  power (or equivalently, Type 2 error rate  ) Then the minimum required sample size is: Given: Example: σ = 1.5 yrs, μ 0 = 25.4 yrs,  =.05 1    zz  z.025 = 1.96 So… minimum sample size required is Want 95% power of correctly rejecting H 0 in favor of H A, if it is false  1 –  =.95   =.05  z.05 = n  325

Given (as before): X ~ N(μ, σ) with σ known; H₀: μ = μ₀; Hₐ: μ = μ₁; significance level α; power 1 − β.

Example: σ = 1.5 yrs, μ₀ = 25.4 yrs, α = .05, z_{.025} = 1.96. Suppose it is suspected that currently, μ₁ = 25.7 yrs, so Δ = 0.2. With n = 400, how much power exists to correctly reject H₀ in favor of Hₐ, if it is false? Inverting the sample size formula, Power = 1 − β = Φ(Δ√n − z_{α/2}) = Φ(0.2 × 20 − 1.96) = Φ(2.04) ≈ .98, i.e., 98%.
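A hedged R sketch of this power calculation (the helper name power_one_mean is illustrative, not from the slides); the built-in power.t.test gives essentially the same answer using the t distribution:

# Power of the two-sided one-sample z-test for a given n, when sigma is known.
power_one_mean <- function(n, mu0, mu1, sigma, alpha = 0.05) {
  delta <- abs(mu1 - mu0) / sigma
  pnorm(delta * sqrt(n) - qnorm(1 - alpha / 2))
}

power_one_mean(400, 25.4, 25.7, 1.5)   # about 0.979, i.e., ~98% power

# Built-in (t-based) equivalent from base R's stats package:
power.t.test(n = 400, delta = 0.3, sd = 1.5, sig.level = 0.05,
             type = "one.sample", alternative = "two.sided")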

Given: X ~ N(μ, σ), a normally distributed population random variable with unknown mean; H₀: μ = μ₀ (null hypothesis); Hₐ: μ ≠ μ₀ (alternative hypothesis, 2-sided); α significance level (or equivalently, confidence level 1 − α); n sample size; random sample x₁, x₂, …, xₙ.

In practice, however, it is far more common that the true population standard deviation σ is unknown, so we must estimate it from the sample! From the sample we obtain the sample mean x̄ and the sample standard deviation s; recall that these yield the (estimated) "standard error" s.e. = s/√n, with which to test the null hypothesis (via CI, AR, p-value).

PROBLEM! But this introduces additional variability from one sample to another…

Given (as before): X ~ N(μ, σ) with unknown mean and unknown standard deviation; H₀: μ = μ₀; Hₐ: μ ≠ μ₀ (2-sided); significance level α; sample x₁, x₂, …, xₙ of size n, yielding sample mean x̄, sample standard deviation s, and estimated standard error s.e. = s/√n, with which to test the null hypothesis (via CI, AR, p-value).

PROBLEM! Estimating σ by s introduces additional variability from one sample to another…

SOLUTION: The standardized statistic T = (x̄ − μ₀) / (s/√n) follows a different sampling distribution from before.

William S. Gosset (1876–1937). The t-distribution is actually a family of distributions, indexed by the degrees of freedom, labeled t_df. As the sample size n gets large, t_df converges to the standard normal distribution Z ~ N(0, 1). So the T-test is especially useful when n < 30.

[Figure: density curves of t₁, t₂, t₃, t₁₀, and Z ~ N(0, 1), with t_df approaching the standard normal as df increases.]

The t-distribution family, continued: as the degrees of freedom increase, t_df approaches Z ~ N(0, 1).

[Figure: density of t₄ compared with Z ~ N(0, 1).]

Critical values can be looked up in the t-table in the Lecture Notes Appendix… or obtained in R, e.g., for 4 degrees of freedom:

qt(.025, 4, lower.tail = F)
[1] 2.776445

Because any t-distribution has heavier tails than the Z-distribution, it follows that for the same right-tailed area, the t-score is larger than the corresponding z-score.

[Figure: density of t₄ compared with Z ~ N(0, 1), showing the heavier tails of t₄.]
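A quick numerical check of this claim (a sketch, not from the slides): compare the upper 2.5% critical values of t and Z across several degrees of freedom.

# Upper 2.5% critical values: t_{df, .025} vs. z_{.025}.
# The t critical value always exceeds 1.96 and approaches it as df grows.
df <- c(4, 10, 30, 100, 1000)
round(qt(.025, df, lower.tail = FALSE), 3)   # 2.776 2.228 2.042 1.984 1.962
qnorm(.025, lower.tail = FALSE)              # 1.959964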

Example: X = Age at first birth ~ N(μ, σ). Given: H₀: μ = 25.4 yrs (null hypothesis); Hₐ: μ ≠ 25.4 yrs (alternative hypothesis).

Previously… σ = 1.5 yrs, n = 400: statistically significant at α = .05.

Now suppose that σ is unknown, and n < 30: say n = 16, x̄ = 25.9 yrs, s = 1.22 yrs. Then the standard error (estimate) is s/√n = 1.22/√16 = 0.305 yrs, and the α/2 = .025 critical value is t₁₅, .₀₂₅.

Lecture Notes Appendix… (t-table lookup: t₁₅, .₀₂₅ = 2.131)

Example (continued): X = Age at first birth ~ N(μ, σ); H₀: μ = 25.4 yrs vs. Hₐ: μ ≠ 25.4 yrs; σ unknown, n = 16 < 30, x̄ = 25.9 yrs, s = 1.22 yrs.

Standard error (estimate) = 0.305 yrs; α/2 = .025 critical value = t₁₅, .₀₂₅ = 2.131.

95% margin of error = (2.131)(0.305 yrs) = 0.65 yrs.
95% Confidence Interval = (25.9 − 0.65, 25.9 + 0.65) = (25.25, 26.55) yrs.

Test statistic: t = (25.9 − 25.4) / 0.305 = 1.639; p-value = 2 P(t₁₅ ≥ 1.639).

Lecture Notes Appendix… (t-table lookup for the p-value: with 15 df, the one-tailed area beyond t = 1.639 lies between .05 and .10)

Example (continued): X = Age at first birth ~ N(μ, σ); H₀: μ = 25.4 yrs vs. Hₐ: μ ≠ 25.4 yrs; σ unknown, n = 16 < 30, x̄ = 25.9 yrs, s = 1.22 yrs. Standard error (estimate) = 0.305 yrs; critical value t₁₅, .₀₂₅ = 2.131; 95% margin of error = (2.131)(0.305 yrs) = 0.65 yrs; 95% Confidence Interval = (25.9 − 0.65, 25.9 + 0.65) = (25.25, 26.55) yrs; test statistic t = 1.639.

p-value = 2 × (between .05 and .10) = between .10 and .20, i.e., > .05. (Note: The R command 2 * pt(1.639, 15, lower.tail = F) gives the exact p-value as .122.)

The 95% CI does contain the null value μ = 25.4, and the p-value is > .05, so the result is not statistically significant; small n gives low power! (Previously… with σ = 1.5 yrs and n = 400, the result was statistically significant at α = .05.)
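The slide's hand calculation can be reproduced in R directly from the summary statistics (a sketch; with the raw data one would simply call t.test(x, mu = 25.4)):

# One-sample t-test of H0: mu = 25.4 vs. HA: mu != 25.4, from summary statistics.
n    <- 16
xbar <- 25.9
s    <- 1.22
mu0  <- 25.4

se    <- s / sqrt(n)                            # 0.305
tstat <- (xbar - mu0) / se                      # 1.639
tcrit <- qt(.975, df = n - 1)                   # 2.131
c(lower = xbar - tcrit * se,
  upper = xbar + tcrit * se)                    # 95% CI: (25.25, 26.55)
2 * pt(tstat, df = n - 1, lower.tail = FALSE)   # p-value, about .122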

To summarize… Assuming X ~ N(μ, σ), test H₀: μ = μ₀ vs. Hₐ: μ ≠ μ₀, at level α (Lecture Notes Appendix A3.3; click for details on this section):
- If the population variance σ² is known, then use it with the Z-distribution, for any n.
- If the population variance σ² is unknown, then estimate it by the sample variance s², and use either the T-distribution (more accurate) or the Z-distribution (easier) if n ≥ 30, but the T-distribution only if n < 30.
ALSO SEE PAGE …


Assuming X ~ N(μ, σ)… How do we check that this assumption is reasonable, when all we have is a sample? And what do we do if it's not, or we can't tell? IF our data approximate a bell curve, then their quantiles should "line up" with those of N(0, 1).

[Figure: standard normal density, Z ~ N(0, 1).]

Plotting the sample quantiles against the corresponding quantiles of N(0, 1) gives a "Q-Q plot," also called a normal scores plot or normal probability plot; if the data are approximately normal, the points should line up along a straight line.

[Figure: Q-Q plot of sample quantiles vs. N(0, 1) quantiles.]

In R, the command qqnorm(mysample) produces this Q-Q plot (normal scores plot / normal probability plot). (R uses a slight variation to generate the theoretical quantiles…)

Formal statistical tests of normality also exist; see notes. And what do we do if the data are not normal, or we can't tell?

One remedy: use a mathematical "transformation" of the data (e.g., log, square root, …). For example, in R:

x = rchisq(1000, 15)
hist(x)
y = log(x)
hist(y)

[Figure: histograms of x and of y = log(x).]

If log(X) is normally distributed, X is said to be "log-normal."

The same Q-Q check can be applied before and after the transformation. In R:

qqnorm(x, pch = 19, cex = .5)
qqline(x)
qqnorm(y, pch = 19, cex = .5)
qqline(y)

[Figure: normal Q-Q plots of x and of y = log(x), each with the reference line added by qqline.]

Another remedy: use a "nonparametric test" (e.g., the Sign Test or the Wilcoxon Signed Rank Test; the related two-sample Wilcoxon Rank Sum Test is also known as the Mann-Whitney Test). These tests make no assumptions about the underlying population distribution! They are based on "ranks" of the ordered data (tedious by hand…), and they have less power than the Z-test or T-test (when those are appropriate)… but not bad. In R, see ?wilcox.test for details. SEE LECTURE NOTES, PAGE …, FOR FLOWCHART OF METHODS.
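A brief illustration of these tests in R (a sketch only; the data vector ages below is hypothetical, reusing the age-at-first-birth null value from the earlier example):

# Hypothetical sample of ages at first birth; test H0: median = 25.4
# without assuming normality (Wilcoxon signed rank test).
ages <- c(24.1, 26.3, 25.8, 27.0, 23.9, 26.5, 25.2, 28.1, 24.7, 27.2)
wilcox.test(ages, mu = 25.4, alternative = "two.sided")

# The sign test is even simpler: count the values above the hypothesized median
# and compare that count against a Binomial(n, 0.5) distribution.
binom.test(sum(ages > 25.4), length(ages), p = 0.5)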

See… _Statistical_Inference/HYPOTHESIS_TESTING_SUMMARY.pdf