Study Size Planning for Observational Comparative Effectiveness Research
Prepared for: Agency for Healthcare Research and Quality (AHRQ)
www.ahrq.gov

Presentation transcript:

Study Size Planning for Observational Comparative Effectiveness Research Prepared for: Agency for Healthcare Research and Quality (AHRQ)

Outline of Material

This presentation will:
• Describe all relevant assumptions and decisions.
• Specify the type of hypothesis, the clinically important inferiority margin or minimum clinically important excess/difference, and the level for the confidence interval.
• Specify the statistical software and command, or the formula, used to calculate the expected confidence interval.
• Specify the expected precision (or statistical power) for any subgroup analyses.
• Specify the expected precision (or statistical power) as sensitivity analyses in special situations.

Introduction

• Study feasibility relies on whether the projected number of accrued patients is adequate to address the scientific aims of the study.
• Many journal editorial boards endorse reporting a rationale for the study size; however, this rationale is often missing from study protocols and proposals.
• Interpreting study findings in terms of statistical significance in relation to the null hypothesis implies a prespecified hypothesis and adequate statistical power.
• Without the context of a numeric rationale for the study size, readers may misinterpret the results.

Study Size and Power Calculations in Randomized Controlled Trials (1 of 3)

• Reporting a study size rationale in the study protocol is often required by institutional review boards before data collection can begin.
• The rationale for study size depends on calculations of the study size needed to achieve a specified level of statistical power.
• Statistical power is defined as the probability of rejecting the null hypothesis when an alternative hypothesis is true.
• Software packages and online tools can assist with these calculations.
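The definition of statistical power above can be made concrete with a short calculation. The sketch below is my own illustration, not part of the original slides (the function name and example numbers are invented); it approximates the power of a two-sided test comparing two event risks, using only the Python standard library and the usual normal approximation.

```python
import math
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test comparing event risks
    p1 and p2, with n_per_group patients per arm (normal approximation;
    illustrative sketch only)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, e.g. 1.96 for alpha = 0.05
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group)
    # Probability that the test statistic exceeds the critical value
    # when the true risk difference is |p1 - p2|.
    return z.cdf(abs(p1 - p2) / se - z_alpha)

# Power grows with study size: with risks of 10% vs. 8%,
# 2,000 patients per arm give roughly 60% power.
print(round(power_two_proportions(0.10, 0.08, 2000), 2))
```

Running the function over a grid of candidate sizes is one simple way to see how large a study must be before the desired power is reached.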

Study Size and Power Calculations in Randomized Controlled Trials (2 of 3)

• Specify the clinically meaningful or minimum detectable difference.
  - Identify the size of the smallest potential treatment effect that would be of clinical relevance.
  - Calculate the study size, assuming that value represents the true treatment effect.
• Specify a measure of data variability.
  - For continuous outcomes, make assumptions about the standard deviation.
  - For event outcomes (e.g., death), estimate the assumed event rate in the control group.
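For a continuous outcome, the standard deviation assumption feeds directly into the classic two-sample size formula. The sketch below is my own illustration (not from the slides; the example difference and SD are invented) of the per-group size needed to detect a given mean difference:

```python
import math
from statistics import NormalDist

def n_per_group_means(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size to detect a true mean difference `delta`
    between two groups sharing standard deviation `sd`, two-sided test
    (normal-approximation formula; illustrative sketch only)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # e.g. 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detecting a 5-point mean difference when the SD is 10 needs
# roughly 63 patients per group at 80% power.
print(n_per_group_means(delta=5, sd=10))
```

Note how the required size scales with the square of sd/delta: halving the detectable difference quadruples the study size, which is why the minimum clinically relevant effect must be chosen carefully.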

Study Size and Power Calculations in Randomized Controlled Trials (3 of 3)

• The needed study size depends on the chosen type I error rate (α) and the required statistical power.
• Use a conventional statistical significance cutoff of α = 0.05 and a standard required power of 80 percent.
• Consider potential reductions in the number of recruited patients available for analysis.

[Table: an example of adequately reported consideration of study size under several potential scenarios that vary the baseline risk of the outcome, the minimum clinically relevant treatment effect, and the required power. Columns: Scenario, Effect of Interest, Therapy 1 Risk, Therapy 2 Risk, Desired Power, Needed Study Size, Needed Recruitment. The numeric entries are not legible in this transcript.]
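A scenario table of this kind can be reproduced with the standard two-proportion sample size formula. The sketch below is my own illustration (the scenario risks, power levels, and attrition rate are invented, not the slide's actual numbers); it also inflates the analyzable size to a needed recruitment figure to cover anticipated attrition:

```python
import math
from statistics import NormalDist

def n_per_group_risks(p1, p2, alpha=0.05, power=0.80):
    """Per-group size for a two-sided comparison of event risks
    p1 vs. p2 (normal-approximation formula; illustrative only)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

def needed_recruitment(n_analyzed, attrition=0.10):
    """Inflate the analyzable size to cover loss to followup and other
    attrition; the attrition rate is an assumption to be justified."""
    return math.ceil(n_analyzed / (1 - attrition))

# Hypothetical scenarios: (therapy 1 risk, therapy 2 risk, desired power)
for p1, p2, power in [(0.10, 0.08, 0.80), (0.10, 0.08, 0.90), (0.20, 0.15, 0.80)]:
    n = n_per_group_risks(p1, p2, power=power)
    print(p1, p2, power, n, needed_recruitment(n))
```

Looping over scenarios like this makes it easy to report, as the slide recommends, how the needed size responds to the baseline risk, the effect of interest, and the required power.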

Considerations for Observational Comparative Effectiveness Research Study Size Planning

• Sample size and power calculations developed in the context of randomized controlled trials are relevant for observational studies, but their application may differ.
• The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines recommend explaining how the study size was arrived at.
• Funding agencies often ask for statistical power calculations, while journal editors ask for confidence intervals.

Considerations That Differ From Randomized Studies

• Confounding bias, measurement error, and other biases should concern investigators more than the expected precision when they consider the feasibility of an observational comparative effectiveness study.
• Controlling for confounding can also reduce the precision of estimated effects, as often seen in studies with propensity score matching.
• Retrospective studies often suffer from a higher frequency of missing data, which can limit precision and power.

Conclusions

• To ensure adequate study size and appropriate interpretation of results, provide a rationale for study size during the planning and reporting stages.
• Specify all definitions and assumptions, including the primary study outcome, the clinically important minimum effect size, the variability measure, and the type I and type II error rates.
• Consider loss to followup, reductions due to statistical methods that control for confounding, and missing data, to ensure the sample size is adequate to detect clinically meaningful differences.

Summary Checklist (1 of 2)

Guidance: Describe all relevant assumptions and decisions.
Key considerations:
• Report the primary outcome on which the study size or power estimate is based.
• Report the clinically important minimum effect size (e.g., hazard ratio ≥ 1.20).
• Report the type I error level.
• Report the statistical power or type II error level (for study size calculations) or the assumed sample size (for power calculations).
• Report the details of the sample size formulas and calculations, including correction for loss to followup, treatment discontinuation, and other forms of censoring.
• Report the expected absolute risk or rate for the reference or control cohort, including the expected number of events.

Guidance: Specify the type of hypothesis, the clinically important inferiority margin or minimum clinically important excess/difference, and the level of confidence for the interval (e.g., 95%).
Key considerations:
• Types of hypotheses include equivalence, noninferiority, and inferiority.

Summary Checklist (2 of 2)

Guidance: Specify the statistical software and command, or the formula, used to calculate the expected confidence interval.
Key considerations:
• Examples include Stata, Confidence Interval Analysis, and Power Analysis and Sample Size (PASS).

Guidance: Specify the expected precision (or statistical power) for any planned subgroup analyses.

Guidance: Specify the expected precision (or statistical power) as sensitivity analyses in special situations.
Key considerations: special situations include the following.
• The investigators anticipate strong confounding that will eliminate many patients from the analysis (e.g., when matching or trimming on propensity scores).
• The investigators anticipate a high frequency of missing data that cannot (or will not) be imputed, which would eliminate many patients from the analysis.
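The expected-confidence-interval item in the checklist can be prototyped without specialized software. The sketch below is my own illustration of precision planning (the cohort sizes and event counts are invented): given assumed cohort sizes and anticipated event counts, it computes the expected Wald confidence interval for a risk ratio on the log scale.

```python
import math
from statistics import NormalDist

def expected_risk_ratio_ci(events1, n1, events0, n0, level=0.95):
    """Expected confidence interval for a risk ratio, given assumed
    cohort sizes and anticipated event counts (Wald interval on the
    log risk ratio; illustrative planning sketch only)."""
    rr = (events1 / n1) / (events0 / n0)
    se_log_rr = math.sqrt(1 / events1 - 1 / n1 + 1 / events0 - 1 / n0)
    z = NormalDist().inv_cdf((1 + level) / 2)  # 1.96 for a 95% interval
    return rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)

# Planning example: 1,000 patients per cohort, anticipating 120 vs. 100
# events. The expected 95% CI around RR = 1.2 still crosses 1.0, so a
# study of this size would not be expected to exclude the null.
low, high = expected_risk_ratio_ci(120, 1000, 100, 1000)
print(round(low, 2), round(high, 2))
```

Reporting an expected interval like this addresses the journal-editor preference for precision over power noted earlier in the presentation.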