
1 SAMPLE SIZE AND POWER
Adapted from slides attributed to Patrick S. Romano, MD, MPH, Professor of Medicine and Pediatrics

2 Sample Size Planning
• Forces specification of outcome variables, clinically meaningful effect sizes, and planned statistical procedures
• Leads to a specific recruitment goal
• Encourages development of appropriate timelines and budgets
• Discourages the performance of small, inconclusive studies

3 Sample Size Planning (cont)
To estimate sample size requirements, you need to:
1. Specify your null hypothesis and alternative hypothesis (1-tailed or 2-tailed).
2. Select the appropriate statistical test, based on the types of dependent and independent variables you are using.

4 Sample Size Planning (cont)
Estimating sample size requirements (cont):
3. Determine the minimum effect size (relative or absolute risk difference) that you would like to be able to detect.
4. For continuous outcomes, estimate the standard deviation. For dichotomous outcomes, estimate the baseline risk or incidence/prevalence of the event.
5. Set limits for Type I (α) and Type II (β) error.
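Putting the five steps together, here is a minimal sketch in Python using statsmodels (all inputs are hypothetical: a two-sample t-test, a minimum detectable difference of 5 units, an estimated SD of 15, α = 0.05 two-tailed, and β = 0.20):

```python
# Required per-group n for a two-sample t-test (hypothetical inputs).
from statsmodels.stats.power import TTestIndPower

min_difference = 5.0   # smallest clinically meaningful difference (step 3)
sd = 15.0              # estimated standard deviation of the outcome (step 4)
alpha = 0.05           # Type I error limit, two-tailed (step 5)
power = 0.80           # 1 - beta, with beta = 0.20 (step 5)

effect_size = min_difference / sd  # standardized effect (Cohen's d)
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power,
    ratio=1.0, alternative='two-sided')
print(f"{n_per_group:.0f} subjects per group")  # about 143 per group
```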

5 Type I and II Errors
When the conclusions of your study, based on your data, differ from the truth in your intended sample (e.g., lack of internal validity):
– Type I error (α): concluding that a difference exists (i.e., rejecting the null hypothesis) when there actually is no difference (i.e., the null hypothesis is true).
– If you set Type I error at .05, then all results with p < .05 will be labeled statistically significant.
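The α = .05 convention is easy to verify by simulation (a sketch, not from the slides): when the null hypothesis is true by construction, a test run at α = 0.05 flags roughly 5% of samples as significant.

```python
# Simulate the Type I error rate: both groups are drawn from the same
# distribution, so every "significant" result is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha, trials, n = 0.05, 10_000, 50
false_positives = 0
for _ in range(trials):
    a = rng.normal(0, 1, n)  # null is true: identical populations
    b = rng.normal(0, 1, n)
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
print(false_positives / trials)  # close to 0.05 by construction
```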

6 Type I and II Errors (cont)
Type II error (β): concluding that a difference does not exist (i.e., failing to reject the null hypothesis) when there actually is a difference.
Power (1 − β): the probability of finding a difference (i.e., rejecting the null hypothesis) when a difference exists.
You can fix your sample size (post hoc) and estimate power, or fix power (a priori) and estimate your requisite sample size.
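Both directions of that trade-off can be computed with the same routine; in statsmodels, solve_power solves for whichever argument is left unspecified (the effect size of 0.5 is illustrative):

```python
# One function, two directions: omit power to estimate power for a
# fixed n, or omit nobs1 to estimate the n needed for a fixed power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Fixed n (60 per group): what power do we have for d = 0.5?
power = analysis.solve_power(effect_size=0.5, nobs1=60, alpha=0.05)
print(f"power with 60 per group: {power:.2f}")    # about 0.77

# Fixed power (0.80): how many subjects per group do we need?
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"n per group for 80% power: {n:.0f}")      # about 64
```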

7–8 [no text transcribed; these slides likely contained figures]

9 Sample Size Software
Commonly used programs:
• EpiInfo, for cohort and unmatched case-control studies: http://www.cdc.gov/epiinfo/
• Southwest Oncology Group, for clinical trials: http://www.swogstat.org/statoolsout.html
• UCLA, with options for Poisson analysis, correlation coefficients, Fisher's exact test, and more: http://calculators.stat.ucla.edu/powercalc/
Commercial products offer great flexibility (see also SAS, SPSS, Stata):
• http://www.power-analysis.com/
• http://www.statsol.ie/nquery/nquery.htm
• http://www.ncss.com/pass.html

10 How to Get By with a Smaller Sample
1. Double-check your assumptions:
– Type I (α) error rate (0.10 instead of 0.05?)
– Type II (β) error rate (0.30 instead of 0.20?)
2. Double-check your hypothesis:
– One-tailed instead of two-tailed?
3. Double-check your desired effect size:
– Is the intervention likely to have an effect of this size?
– Can I settle for being able to find a larger effect?
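To see how much each relaxation buys, a sketch (the effect size d = 0.4 is hypothetical; all numbers illustrative):

```python
# Required n per group under the baseline assumptions, then under
# each relaxed assumption from the checklist above (hypothetical d).
from statsmodels.stats.power import TTestIndPower

solve = TTestIndPower().solve_power
base = dict(effect_size=0.4, alpha=0.05, power=0.80,
            alternative='two-sided')

print(solve(**base))                               # baseline: ~100/group
print(solve(**{**base, 'alpha': 0.10}))            # looser alpha: smaller n
print(solve(**{**base, 'power': 0.70}))            # looser beta: smaller n
print(solve(**{**base, 'alternative': 'larger'}))  # one-tailed: smaller n
print(solve(**{**base, 'effect_size': 0.5}))       # larger effect: smaller n
```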

11 How to Get By with a Smaller Sample (cont)
4. Use a more frequent dichotomous outcome variable:
– Less serious outcome (don't sacrifice clinical face validity)
– Composite outcome (watch for undesired heterogeneity)
– Lengthen follow-up period
– Select higher-risk study subjects
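The payoff of a more frequent outcome shows up directly in a two-proportion calculation (a sketch with assumed risks: the same 25% relative risk reduction at two baseline event rates):

```python
# Same relative effect, two baseline risks: the more frequent outcome
# needs several times fewer subjects (illustrative numbers).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

solver = NormalIndPower()
for baseline in (0.05, 0.20):
    treated = baseline * 0.75                     # 25% relative risk reduction
    h = proportion_effectsize(baseline, treated)  # Cohen's h
    n = solver.solve_power(effect_size=h, alpha=0.05, power=0.80)
    print(f"baseline risk {baseline:.0%}: about {n:.0f} per group")
```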

12 How to Get By with a Smaller Sample (cont)
5. To decrease variability (increase reliability) of continuous outcome measures, use:
– A continuous variable instead of a dichotomous variable, if possible, but don't give up clinical face validity to gain statistical power (e.g., mean birthweight).
– More precise, reliable equipment or data collection techniques; consider repeated/multiple measurements.
– Paired (pre-post or matched) measurements to reduce variability; consider a crossover design.
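The paired-design gain can be quantified (a sketch; the within-pair correlation ρ = 0.6 is assumed): difference scores have SD σ√(2(1 − ρ)), so the standardized effect for a paired test is d/√(2(1 − ρ)).

```python
# Independent-groups vs. paired design for the same raw effect
# (d = 0.4) when pre/post measurements correlate at rho = 0.6.
import math
from statsmodels.stats.power import TTestIndPower, TTestPower

d, rho = 0.4, 0.6  # both values are assumptions for illustration
n_indep = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)

d_paired = d / math.sqrt(2 * (1 - rho))  # effect size on difference scores
n_paired = TTestPower().solve_power(effect_size=d_paired, alpha=0.05,
                                    power=0.80)
print(f"independent: {n_indep:.0f} per group; paired: {n_paired:.0f} pairs")
```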

13 How to Get By with a Smaller Sample (cont)
6. Increase your sample size cheaply:
– Consider unequal group sizes (2–3 controls per case) to increase efficiency in a case-control study
7. Compromise or get more money
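Power routines typically expose this as a ratio argument; a sketch of the trade-off (the effect size 0.3 is illustrative) shows why the slide stops at 2–3 controls per case:

```python
# Trading extra controls for fewer cases: vary the control:case ratio.
# Returns shrink quickly; beyond 3-4 controls per case the gain is small.
from statsmodels.stats.power import NormalIndPower

solver = NormalIndPower()
for ratio in (1, 2, 3, 4):
    n_cases = solver.solve_power(effect_size=0.3, alpha=0.05,
                                 power=0.80, ratio=ratio)
    print(f"{ratio} control(s) per case: {n_cases:.0f} cases, "
          f"{ratio * n_cases:.0f} controls")
```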

14 Common Mistakes Related to Sample Size
• Failure to discuss sample size or power in published papers (especially studies with negative results)
• Unrealistic assumptions related to disease incidence or prevalence, or effect size
• Failure to explore sample size requirements for a range of possible values of key variables (sensitivity analysis)
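The sensitivity analysis in the last bullet can be as simple as a grid over plausible assumptions (a sketch with made-up values):

```python
# Sensitivity analysis: required n per group across a grid of
# plausible effect sizes and two power targets (hypothetical values).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
print("d      80% power   90% power")
for d in (0.2, 0.3, 0.4, 0.5):
    ns = [solver.solve_power(effect_size=d, alpha=0.05, power=p)
          for p in (0.80, 0.90)]
    print(f"{d:.1f}    {ns[0]:9.0f}   {ns[1]:9.0f}")
```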

15 Common Mistakes Related to Sample Size (cont)
• Treating Type I (<0.05) and Type II (<0.2) threshold error rates as if they were divinely inspired
• Failure to account for attrition by boosting planned sample size
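The usual attrition adjustment divides the analyzable n by the expected retention fraction (a sketch; the 15% dropout rate is assumed):

```python
# Inflate the recruitment target so the analyzable sample still meets
# the power requirement after dropout (15% attrition is an assumption).
import math

n_required = 143   # analyzable subjects per group, from the power calculation
attrition = 0.15   # expected dropout fraction
n_enroll = math.ceil(n_required / (1 - attrition))
print(n_enroll)    # 169 per group to enroll
```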

