SMALL SAMPLE SIZE CLINICAL TRIALS
Christopher S. Coffey
Professor, Department of Biostatistics
Director, Clinical Trials Statistical and Data Management Center


SMALL SAMPLE SIZE CLINICAL TRIALS
Christopher S. Coffey
Professor, Department of Biostatistics
Director, Clinical Trials Statistical and Data Management Center
University of Iowa
April 22, 2016

OUTLINE

In this webinar, we will:
- Discuss the importance of adequate study planning for small clinical trials
- Describe some analytical approaches that have merit with small clinical trials
- Describe several proposed designs for small clinical trials

DISCLOSURES/OFF-LABEL STATEMENT

- The presenter has no commercial or financial interests, relationships, activities, or other conflicts of interest to disclose
- This presentation will not include information on unlabeled use of any commercial products or investigational use that is not yet approved for any purpose

NOTICE OF RECORDING

- This webinar is being recorded.
- In accordance with our open access mission, we will be posting a video and the slides to our website, and they will be made publicly available.
- You can talk to us using the “chat” function, and we will speak our responses.
- Also, please use audience response when prompted.

OVERVIEW

The wonderful land of Asymptopia.

OVERVIEW

QUESTION: “What is a small clinical trial?”
ANSWER: Depends on the context.
- A stroke researcher may think of a ‘small clinical trial’ as an early phase trial to develop a new compound.
- An ALS researcher may think of a ‘small clinical trial’ as a confirmatory phase III clinical trial that is limited in size.

We will address both types of studies.

OVERVIEW

There is no magic – we want the “right” answer.

Small study ≠ little version of large study. We must know what we are sacrificing:
- Less precision?
- Less definitive outcome?

OVERVIEW

Before addressing some possible designs of interest, it is useful to review some key recommendations from the Executive Summary in the National Academy of Sciences document.

OVERVIEW

Recommendation #1: Define the research question.

Before undertaking a small clinical trial it is particularly important that the research question be well defined and that the outcomes and conditions to be evaluated be selected in a manner that will most likely help clinicians make therapeutic decisions.

From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences.

OVERVIEW

Recommendation #2: Tailor the design.

Careful consideration of alternative statistical design and analysis methods should occur at all stages in the multistep process of planning a clinical trial. When designing a small clinical trial, it is particularly important that the statistical design and analysis methods be customized to address the clinical research question and study population.

From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences.

OVERVIEW

Recommendation #3: Clarify methods of reporting of results of clinical trials.

In reporting the results of a small clinical trial, with its inherent limitations, it is particularly important to carefully describe all sample characteristics and methods of data collection and analysis for synthesis of the data from the research.

From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences.

OVERVIEW

Recommendation #4: Perform corroborative statistical analyses.

Given the greater uncertainties inherent in small clinical trials, several alternative statistical analyses should be performed to evaluate the consistency and robustness of the results of a small clinical trial.

From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences.

OVERVIEW

Recommendation #5: Exercise caution in interpretation.

One should exercise caution in the interpretation of the results of small clinical trials before attempting to extrapolate or generalize those results.

From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences.

OVERVIEW

Recommendation #6: More research on alternative designs is needed.

Appropriate federal agencies should increase support for expanded theoretical and empirical research on the performances of alternative study designs and analysis methods that can be applied to small studies. Areas worthy of more study may include theory development, simulated and actual testing (including comparison of existing and newly developed or modified alternative designs and methods of analysis), simulation models, study of limitations of trials with different sample sizes, and modification of a trial during its conduct.

From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences.

OVERVIEW

Summary:
1) Define the research question
2) Tailor the design
3) Clarify methods when reporting trial results
4) Perform corroborative statistical analysis
5) Exercise caution in interpretation
6) More research on alternative designs is needed

OVERVIEW

Summary:
1) Define the research question
2) Tailor the design
3) Clarify methods for reporting trial results
4) Perform corroborative statistical analysis
5) Exercise caution in interpretation
6) More research on alternative designs is needed

So, why is this any different from other trials?

OVERVIEW

Three basic requirements for any clinical trial:
1) Trial should examine an important research question
2) Trial should use rigorous methodology to answer the question of interest
3) Trial must be based on ethical considerations and assure that risks to subjects are minimized

OVERVIEW

It may become necessary to relax one or more of these quality criteria when conducting a small trial. However, there is a big difference between the following two approaches:
- Retrospectively estimating the extent to which the requirements were relaxed
- Prospectively determining which requirements to relax and controlling the relaxed limits in the design

Researchers should aim for the latter approach!

OVERVIEW

Two general approaches:
- Use a methodological approach that enhances the efficiency of standard statistical properties
- Use an alternate/innovative design

ANALYTICAL APPROACHES

Use efficient outcome measures and measure precisely. In general, the ‘detectable effect’ for a study is related to the ratio of the variance to the sample size.

If the population we are studying is big (e.g., cardiology, breast cancer, etc.):
- Just increase N to reduce the detectable effect
- And a little sloppiness is not harmful

ANALYTICAL APPROACHES

BUT: If our population is small (e.g., genetic disease, rare cancers):
- Cannot increase N
- Only solution is to decrease the variance
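The tradeoff between N and variance can be sketched numerically. A minimal illustration (not from the slides; it uses the standard two-sample normal-approximation formula, with made-up numbers) of how the detectable effect shrinks with either more subjects or less measurement noise:

```python
from statistics import NormalDist

def detectable_effect(sigma, n_per_arm, alpha=0.05, power=0.80):
    """Smallest true mean difference detectable by a two-arm trial
    (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sigma * (2 / n_per_arm) ** 0.5

# Halving the outcome SD buys exactly as much as quadrupling N:
base = detectable_effect(sigma=10, n_per_arm=50)      # ~5.6
less_noise = detectable_effect(sigma=5, n_per_arm=50)
more_n = detectable_effect(sigma=10, n_per_arm=200)
```

In a small population, where `n_per_arm` is capped, the only lever left is sigma, e.g., through more precise measurement of a more efficient outcome.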

ANALYTICAL APPROACHES

Type of Outcome Measure
- Different types of outcome measures exhibit different levels of accuracy
- Using outcomes that provide higher accuracy generally increases statistical power

Continuous outcomes are most efficient
- Beware results that are statistically, but not clinically, significant

Binary outcomes are least efficient
- Sometimes the only outcome of real interest (elimination of disease, restoration of function, ...)

Time-to-event may be more efficient than binary

ANALYTICAL APPROACHES

Some examples:

ALS
- Continuous: Change in ALSFRS-R score
- Binary: 10% decrease in ALSFRS-R score
- Time-to-event: Time to 10% decrease in ALSFRS-R

Pain
- Continuous: Pain score
- Binary: Pain score > 4
- Time-to-event: Time to pain relief
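The efficiency ranking can be checked by simulation. A hedged sketch (hypothetical data and effect sizes, not from the talk): the same simulated trials are analyzed once on the continuous scale and once after dichotomizing at a cutoff, showing the power lost by going binary.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
Z = NormalDist()

def pvalue_continuous(x, y):
    # Two-sample z-test on the means (normal approximation to the t-test)
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    z = (mean(x) - mean(y)) / se
    return 2 * (1 - Z.cdf(abs(z)))

def pvalue_binary(x, y, cutoff=0.0):
    # Same data dichotomized at the cutoff, compared with a two-proportion z-test
    p1 = sum(v > cutoff for v in x) / len(x)
    p2 = sum(v > cutoff for v in y) / len(y)
    p = (p1 * len(x) + p2 * len(y)) / (len(x) + len(y))
    se = (p * (1 - p) * (1 / len(x) + 1 / len(y))) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - Z.cdf(abs(z)))

def power(test, reps=2000, n=40, delta=0.5):
    # Fraction of simulated trials (true mean shift = delta) rejecting at 0.05
    hits = 0
    for _ in range(reps):
        x = [random.gauss(delta, 1.0) for _ in range(n)]
        y = [random.gauss(0.0, 1.0) for _ in range(n)]
        hits += test(x, y) < 0.05
    return hits / reps
```

With these settings the continuous analysis has noticeably higher power than the dichotomized one on identical data, which is the cost of the binary endpoint when a continuous measure is available.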

ANALYTICAL APPROACHES

Parametric vs. Nonparametric Approaches:
- A nonparametric approach does not require any distributional assumptions, and is generally more robust
- A parametric approach can lead to higher power, if the distributional assumptions are satisfied

Thus, in a small trial, it is very important to know whether the distributional assumptions (e.g., normality) are satisfied.
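One well-known benchmark puts a number on this tradeoff (standard theory, not from the slides): under exact normality the Wilcoxon rank-sum test has Pitman efficiency 3/π ≈ 0.955 relative to the t-test, so the robustness of the rank test costs only about 5% more subjects; under heavy-tailed distributions the ranking can reverse in the rank test's favor.

```python
import math

# Pitman asymptotic relative efficiency of the Wilcoxon rank-sum test
# vs. the t-test when the data really are normal:
are = 3 / math.pi                      # ~0.955

# Approximate per-arm sample size to match the power of a t-test
# needing 50 subjects per arm (illustrative number):
n_t = 50
n_wilcoxon = math.ceil(n_t / are)      # 53
```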

ANALYTICAL APPROACHES

How to increase power?
- Usual RCT – as model-free as possible:
  - Have large sample sizes
  - Do an intent-to-treat analysis
  - Don't worry about noise

ANALYTICAL APPROACHES

How to increase power?
- Usual RCT – as model-free as possible
- Small populations:
  - Use models (but pre-specify)
  - Check EACH observation before you unblind
  - Carefully evaluate alternative designs

ANALYTICAL APPROACHES

Historical controls are useful when:
- Comparing a new treatment in a well-studied area
- Data from published studies remain relevant
- Randomized controls are not feasible

ANALYTICAL APPROACHES

Historical Controls:

Advantages:
- Inexpensive (...not always!)
- All subjects get the desired treatment
- You often find a BIG difference

Disadvantages:
- Current and historical populations may be different
- Current treatment may be different (even if there is no 'therapy')

SMALL TRIAL DESIGNS

Designs of interest in small clinical trials:
- Repeated measures design
- Crossover design
- N-of-1 design
- Futility design
- Ranking/selection design

REPEATED MEASURES DESIGNS

Multiple observations or response variables are obtained for each subject:
- Repeated measurements over time (longitudinal)
- Multiple measurements on the same subject

Allows both between-subject and within-subject comparisons.

Can reduce the sample size needed to obtain a specific target power.

REPEATED MEASURES DESIGNS

Suppose you are measuring over time:
- STANDARD: Final value – baseline value
- BETTER: Final value, with baseline value as a covariate
- STILL BETTER: Longitudinal analysis
  - Differentiate "through" vs. "at"
  - Think about the variance/covariance structure
  - Think about how you want to model time
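The STANDARD/BETTER ordering can be made concrete with the usual variance formulas (standard results, assuming baseline and final value share variance σ² and have correlation ρ): analyzing the final value alone has variance σ², the change score 2σ²(1 − ρ), and the baseline-adjusted analysis (ANCOVA) roughly σ²(1 − ρ²). A small sketch:

```python
def relative_variance(rho):
    """Variance of the treatment-effect estimate, relative to analyzing the
    final value alone, under equal baseline/final variance and correlation rho."""
    final_only = 1.0
    change_score = 2 * (1 - rho)   # final minus baseline
    ancova = 1 - rho ** 2          # final, adjusting for baseline
    return final_only, change_score, ancova

# With a strong baseline correlation, adjustment pays off:
f, c, a = relative_variance(0.7)   # 1.0, 0.6, 0.51
```

Under these assumptions ANCOVA is never worse than either alternative, and the change score is actually worse than ignoring baseline altogether whenever ρ < 0.5.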

CROSSOVER DESIGN

Each subject is exposed to all treatments:
- Order of treatments is randomized
- First treatment may show a better (or worse) effect

Prognostic factors are balanced – self vs. self.

Required sample size is reduced considerably due to self vs. self comparisons.

Each participant receives the active treatment at some point during the study.

CROSSOVER DESIGN

Disadvantages:
- Disease needs to be long-term
- Treatment must be taken regularly over time
- Relevant outcomes must occur and be measured over time
- Not relevant for acute treatments
- Concerns due to a 'carryover effect'
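The size of the "considerable" sample size reduction can be sketched with a standard rule of thumb (not from the slides): in an AB/BA crossover the within-subject comparison removes the between-subject variability, so the total sample size is roughly the parallel-group total scaled by (1 − ρ)/2, where ρ is the within-subject correlation.

```python
import math

def crossover_total_n(n_parallel_total, rho):
    """Rough total N for an AB/BA crossover matching the power of a
    parallel-group trial of n_parallel_total subjects (rule of thumb)."""
    return math.ceil(n_parallel_total * (1 - rho) / 2)

# Hypothetical numbers: a 200-subject parallel trial with rho = 0.6
# shrinks to roughly 40 subjects in crossover form.
n_cross = crossover_total_n(200, 0.6)
```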

N-OF-1 DESIGN

A special case of a crossover/repeated measures design, where a single subject undergoes treatment for several pairs of periods. For each pair:
- The subject receives the experimental treatment for one period of the pair
- The subject receives the alternative treatment for the other period
- The order of the two treatments within each pair is randomized

N-OF-1 DESIGN

The final outcome of the trial is a determination about the best treatment for the particular subject under study.

Most feasible for treatments with rapid onset that stop acting soon after discontinuation.

Results of a series of N-of-1 trials may be combined using meta-analysis.

SELECTION DESIGNS

Selection (ranking) designs compare parameters of multiple (k) study populations. They generally require smaller sample sizes than trials designed to estimate and test treatment effects.

Selection designs can be used to:
- Select the treatment with the best response out of k potential treatments
- Rank treatments in order of preference
- Rule out poor treatments for further study (helpful with the 'pipeline' problem)

FUTILITY DESIGN

A futility (non-superiority) design is a screening tool to identify whether agents should be candidates for phase III trials while minimizing costs/sample size.
- If "futility" is declared, results would imply it is not cost effective to conduct a future phase III trial
- If "futility" is not declared, this suggests that there could be a clinically meaningful effect which should be explored in a larger, phase III trial

FUTILITY DESIGN

For example, suppose a 10% increase in favorable response rates over placebo is clinically meaningful. A futility design would assess the following hypotheses:

H0: Treatment improves outcome by at least 10% compared to placebo (pT – pP ≥ 0.10)

versus

HA: Treatment does not improve outcome by at least 10% compared to placebo (pT – pP < 0.10; futile to consider in a phase III trial)
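A minimal sketch of this one-sided test (normal approximation; the counts and the function name are hypothetical, not from the talk):

```python
from statistics import NormalDist

def futility_test(x_t, n_t, x_p, n_p, margin=0.10, alpha=0.05):
    """One-sided test of H0: pT - pP >= margin vs. HA: pT - pP < margin.
    Rejecting H0 declares futility (normal-approximation sketch)."""
    pt, pp = x_t / n_t, x_p / n_p
    se = (pt * (1 - pt) / n_t + pp * (1 - pp) / n_p) ** 0.5
    z = (pt - pp - margin) / se
    p_value = NormalDist().cdf(z)   # small z (lift far below margin) -> futile
    return p_value < alpha

# Hypothetical counts: 80/200 responders on both arms -- no observed lift,
# so the 10% margin is rejected and futility is declared.
futile = futility_test(80, 200, 80, 200)
```

Note how the burden of proof is reversed relative to a superiority test: the trial starts from the assumption that the treatment works, and data must argue against it.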

FUTILITY DESIGN

Statistical Properties:

Usual design:
- H0: μT = μP; HA: μT ≠ μP
- Rejecting H0: new treatment is effective (or harmful)
- Type I error (α): an ineffective therapy is declared effective
- Type II error (β): an effective therapy is declared ineffective

Futility design:
- H0: μT – μP ≥ 0; HA: μT – μP < 0
- Rejecting H0: new treatment is futile
- Type I error (α): an effective therapy is declared ineffective
- Type II error (β): an ineffective therapy is declared effective

FUTILITY DESIGN

Thus, a futility design is appropriate when the error of failing to go to phase III with a superior treatment is considered more serious than the error of going to phase III with an ineffective treatment.
- High negative predictive value: if "futility" is declared, the treatment is likely not effective
- Low positive predictive value: lack of "futility" does not imply the treatment is effective

This is an improvement over running underpowered efficacy trials in phase II, or conducting phase III trials as the first rigorous test of efficacy for a new treatment.

ADAPTIVE DESIGNS

The traditional approach to clinical trials tends to be large, costly, and time-consuming. There is a need for more efficient clinical trial design, which should lead to an increased chance of a "successful" trial that answers the question of interest.

Hence, there is increasing interest in innovative trial designs. For example, adaptive designs allow reviewing accumulating information during an ongoing clinical trial to possibly modify trial characteristics.

ADAPTIVE DESIGNS

Adaptive Design Working Group (Gallo et al, 2006):
"By adaptive design we refer to a clinical study design that uses accumulating data to modify aspects of the study as it continues, without undermining the validity and integrity of the trial."
"...changes are made by design, and not on an ad hoc basis"
"...not a remedy for inadequate planning."

FDA "Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics" (2010):
"...a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study."

ADAPTIVE DESIGNS

Thus, both groups support the notion that changes are based on pre-specified decision rules: "Adaptive By Design."

Properly designed simulations are often needed to confirm that adaptations preserve the integrity and validity of the study. In order to properly define the simulations, the adaptation rules must be clearly specified in advance. Thus, only planned adaptations can be guaranteed to avoid any unknown bias due to the adaptation.
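A toy version of such a simulation (my own sketch; the design, sample sizes, and stopping rule are hypothetical): a two-stage trial with a pre-specified interim futility stop, simulated under the null to check that the chance of a false positive stays at or below the nominal one-sided level.

```python
import random
from statistics import NormalDist

random.seed(2)
Z_CRIT = NormalDist().inv_cdf(0.975)    # one-sided 0.025 final boundary

def one_trial(delta=0.0, n_stage=50):
    """Two-stage trial with a pre-specified rule: stop for futility at the
    interim if the z-statistic favors control; otherwise test at the end."""
    t = [random.gauss(delta, 1.0) for _ in range(n_stage)]
    c = [random.gauss(0.0, 1.0) for _ in range(n_stage)]
    z1 = (sum(t) - sum(c)) / (2 * n_stage) ** 0.5
    if z1 < 0:
        return False                     # stopped early; no efficacy claim
    t += [random.gauss(delta, 1.0) for _ in range(n_stage)]
    c += [random.gauss(0.0, 1.0) for _ in range(n_stage)]
    z = (sum(t) - sum(c)) / (4 * n_stage) ** 0.5
    return z > Z_CRIT

# Estimated type I error under the null (delta = 0); a futility-only
# stopping rule can only push it below the nominal 0.025:
alpha_hat = sum(one_trial() for _ in range(20000)) / 20000
```

Power under various effect sizes (delta > 0) and alternative stopping boundaries would be explored the same way before locking the design.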

SUMMARY

An appropriate study design has sufficient sample size, adequate power, and proper control of bias to allow a meaningful interpretation of the results. Although small clinical trials pose important limitations, the above issues cannot be ignored.

The majority of methods research for clinical trials is based on large sample theory. Additional research into innovative designs for small clinical trials is needed.

SUMMARY

One of the objectives of this course is to give researchers the tools and connections they need to successfully design these types of trials.

Please consider going to the following website to evaluate the webinar: bit.ly/r25eval