SMALL SAMPLE SIZE CLINICAL TRIALS


SMALL SAMPLE SIZE CLINICAL TRIALS Christopher S. Coffey Professor, Department of Biostatistics Director, Clinical Trials Statistical and Data Management Center University of Iowa May 26, 2017

OUTLINE In this webinar, we will:
- Discuss the importance of adequate study planning for small clinical trials
- Describe some analytical approaches that have merit with small clinical trials
- Describe several proposed designs for small clinical trials

DISCLOSURES/OFF-LABEL STATEMENT The presenter has no commercial or financial interests, relationships, activities, or other conflicts of interest to disclose. This presentation will not include information on unlabeled use of any commercial products or investigational use that is not yet approved for any purpose.

NOTICE OF RECORDING This webinar is being recorded. In accordance with our open access mission, we will be posting a video and the slides to our website, and they will be made publicly available. You can talk to us using the “chat” function, and we will speak our responses. Also, please use audience response when prompted.

OVERVIEW The wonderful land of Asymptopia: most standard statistical methods are justified by large-sample (asymptotic) theory. Small trials do not live in Asymptopia, so those justifications cannot be taken for granted.

OVERVIEW QUESTION: “What is a small clinical trial?” ANSWER: Depends on the context. A stroke researcher may think of a ‘small clinical trial’ as an early phase trial to develop a new compound. An ALS researcher may think of a ‘small clinical trial’ as a confirmatory phase III clinical trial that is limited in size. We will address both types of studies.

OVERVIEW There is no magic: we want the “right” answer. A small study ≠ a little version of a large study. We must know what we are sacrificing:
- Less precision?
- A less definitive outcome?

OVERVIEW Before addressing some possible designs of interest, it is useful to review some key recommendations from the Executive Summary in the National Academy of Sciences document.

OVERVIEW Recommendation #1: Define the research question. Before undertaking a small clinical trial, it is particularly important that the research question be well defined and that the outcomes and conditions to be evaluated be selected in a manner that will most likely help clinicians make therapeutic decisions. From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences, 2001.

OVERVIEW Recommendation #2: Tailor the design. Careful consideration of alternative statistical design and analysis methods should occur at all stages in the multistep process of planning a clinical trial. When designing a small clinical trial, it is particularly important that the statistical design and analysis methods be customized to address the clinical research question and study population. From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences, 2001.

OVERVIEW Recommendation #3: Clarify methods of reporting of results of clinical trials. In reporting the results of a small clinical trial, with its inherent limitations, it is particularly important to carefully describe all sample characteristics and methods of data collection and analysis for synthesis of the data from the research. From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences, 2001.

OVERVIEW Recommendation #4: Perform corroborative statistical analyses. Given the greater uncertainties inherent in small clinical trials, several alternative statistical analyses should be performed to evaluate the consistency and robustness of the results of a small clinical trial. From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences, 2001.

OVERVIEW Recommendation #5: Exercise caution in interpretation. One should exercise caution in the interpretation of the results of small clinical trials before attempting to extrapolate or generalize those results. From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences, 2001.

OVERVIEW Recommendation #6: More research on alternative designs is needed. Appropriate federal agencies should increase support for expanded theoretical and empirical research on the performance of alternative study designs and analysis methods that can be applied to small studies. Areas worthy of more study may include theory development; simulated and actual testing, including comparison of existing and newly developed or modified alternative designs and methods of analysis; simulation models; study of the limitations of trials with different sample sizes; and modification of a trial during its conduct. From: Small Clinical Trials: Issues and Challenges; National Academy of Sciences, 2001.

OVERVIEW Summary:
- Define the research question
- Tailor the design
- Clarify methods for reporting trial results
- Perform corroborative statistical analyses
- Exercise caution in interpretation
- More research on alternative designs is needed
So, why is this any different from other trials?

OVERVIEW Three basic requirements for any clinical trial:
- The trial should examine an important research question
- The trial should use rigorous methodology to answer the question of interest
- The trial must be based on ethical considerations and assure that risks to subjects are minimized

OVERVIEW It may become necessary to relax one or more of these quality criteria when conducting a small trial. However, there is a big difference between the following two approaches:
- Retrospectively estimating the extent to which the requirements were relaxed
- Prospectively determining which requirements to relax and controlling the relaxed limits in the design
Researchers should aim for the latter approach!

OVERVIEW Two general approaches:
- Use a methodological approach that enhances the efficiency of standard statistical methods
- Use an alternate/innovative design

ANALYTICAL APPROACHES Use efficient outcome measures and measure them precisely. In general, the “detectable effect” for a study is related to the ratio of the variance to the sample size. If the population we are studying is big (e.g., cardiology, breast cancer, etc.):
- Just increase N to reduce the ratio
- And a little sloppiness is not harmful
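
The slide leaves the formula implicit; as a standard reference point (a textbook result, not taken from the webinar), for a two-arm comparison of means with n subjects per arm, common standard deviation sigma, two-sided significance level alpha, and power 1 − beta, the smallest detectable difference is approximately

    \delta \approx \left( z_{1-\alpha/2} + z_{1-\beta} \right) \sqrt{\frac{2\sigma^{2}}{n}}

Since delta scales as sigma/sqrt(n), halving the variance sigma^2 buys exactly as much as doubling the per-arm sample size, which is the leverage the next slide exploits.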

ANALYTICAL APPROACHES BUT: If our population is small (e.g., genetic disease, rare cancers):
- We cannot increase N
- The only solution is to decrease the variance

ANALYTICAL APPROACHES Type of Outcome Measure Different types of outcome measures exhibit different levels of accuracy, and using outcomes that provide higher accuracy generally increases statistical power.
- Continuous outcomes are the most efficient (but beware results that are statistically, not clinically, significant)
- Binary outcomes are the least efficient, yet are sometimes the only outcome of real interest (elimination of disease, restoration of function, …)
- Time-to-event outcomes may be more efficient than binary outcomes

ANALYTICAL APPROACHES Some examples:
ALS
- Continuous: change in ALSFRS-R score
- Binary: 10% decrease in ALSFRS-R score
- Time-to-event: time to a 10% decrease in ALSFRS-R score
Pain
- Continuous: pain score
- Binary: pain score > 4
- Time-to-event: time to pain relief
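
A minimal sketch (Python with statsmodels assumed; the effect sizes and response rates below are illustrative, not taken from the webinar) of how the endpoint type drives the required sample size:

    # Per-arm sample size for a continuous vs. a binary endpoint,
    # at the same alpha and power (illustrative effect sizes).
    from statsmodels.stats.power import TTestIndPower, NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    alpha, power = 0.05, 0.80

    # Continuous endpoint: detect a shift of 0.5 SD between arms.
    n_cont = TTestIndPower().solve_power(effect_size=0.5, alpha=alpha, power=power)

    # Binary endpoint: detect 50% vs. 30% response rates (Cohen's h).
    h = proportion_effectsize(0.50, 0.30)
    n_bin = NormalIndPower().solve_power(effect_size=h, alpha=alpha, power=power)

    print(f"per-arm n, continuous endpoint: {n_cont:.0f}")   # about 64
    print(f"per-arm n, binary endpoint:     {n_bin:.0f}")    # about 93

Dichotomizing a continuous measure discards information, and the loss shows up directly as a larger required n.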

ANALYTICAL APPROACHES Parametric vs. Nonparametric Approaches:
- A nonparametric approach does not require any distributional assumptions, and is generally more robust
- A parametric approach can lead to higher power, if the distributional assumptions are satisfied
Thus, in a small trial, it is very important to know whether the distributional assumptions (e.g., normality) are satisfied.
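
A minimal sketch (Python with scipy assumed; the data are simulated purely for illustration) contrasting the two approaches on one small data set:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    treat = rng.normal(1.0, 1.0, size=12)   # hypothetical small arms (n = 12 each)
    ctrl = rng.normal(0.0, 1.0, size=12)

    # Check normality first -- with n this small the test has limited power,
    # so a non-significant result is only weak reassurance.
    print("Shapiro-Wilk p (treat):", stats.shapiro(treat).pvalue)

    # Parametric: two-sample t-test (more power if normality roughly holds).
    print("t-test p:       ", stats.ttest_ind(treat, ctrl).pvalue)

    # Nonparametric: Mann-Whitney U / Wilcoxon rank-sum (more robust).
    print("Mann-Whitney p: ", stats.mannwhitneyu(treat, ctrl).pvalue)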

ANALYTICAL APPROACHES How to increase power? Usual RCT: stay as model-free as possible.
- Have large sample sizes
- Do an intent-to-treat analysis
- Don't worry about noise

ANALYTICAL APPROACHES How to increase power? The usual RCT can stay as model-free as possible; small populations cannot.
- Use models (but pre-specify them)
- Check EACH observation before you unblind
- Carefully evaluate alternative designs

ANALYTICAL APPROACHES Historical controls are useful when:
- Comparing a new treatment in a well-studied area
- Data from published studies remain relevant
- Randomized controls are not feasible

ANALYTICAL APPROACHES Historical Controls:
Advantages:
- Inexpensive (…not always!)
- All subjects get the desired treatment
- You often find a BIG difference
Disadvantages:
- Current and historical populations may be different
- Current treatment may be different (even if there is no ‘therapy’)

SMALL TRIAL DESIGNS Designs of interest in small clinical trials:
- Repeated measures design
- Crossover design
- N-of-1 design
- Futility design
- Ranking/selection design

REPEATED MEASURES DESIGNS Multiple observations or response variables are obtained for each subject:
- Repeated measurements over time (longitudinal)
- Multiple measurements on the same subject
This allows both between-subject and within-subject comparisons, and can reduce the sample size needed to attain a specific target power.

REPEATED MEASURES DESIGNS Suppose you are measuring over time:
- STANDARD: final value − baseline value
- BETTER: final value, with the baseline value as a covariate
- STILL BETTER: a longitudinal analysis
For the longitudinal analysis: differentiate an effect “through” time (e.g., a difference in slopes) from an effect “at” a single time point, think about the variance/covariance structure, and think about how you want to model time.
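
A minimal sketch (Python with statsmodels assumed; the trial data are simulated and the effect sizes are arbitrary) of the three analyses listed above:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate a toy longitudinal trial: 20 subjects, 4 visits each.
    rng = np.random.default_rng(0)
    n, weeks = 20, [0, 4, 8, 12]
    treat = rng.integers(0, 2, n)          # 0 = control, 1 = active
    subj = rng.normal(0, 1.0, n)           # subject-level random effects
    rows = [(i, treat[i], w,
             10 + subj[i] - 0.20 * w - 0.15 * treat[i] * w + rng.normal(0, 0.5))
            for i in range(n) for w in weeks]
    df = pd.DataFrame(rows, columns=["id", "treat", "week", "y"])
    df["y0"] = df["id"].map(df[df.week == 0].set_index("id")["y"])
    last = df[df.week == weeks[-1]]

    # STANDARD: change score (final minus baseline), compared between arms.
    m1 = smf.ols("I(y - y0) ~ treat", data=last).fit()

    # BETTER: final value with baseline as a covariate (ANCOVA).
    m2 = smf.ols("y ~ treat + y0", data=last).fit()

    # STILL BETTER: mixed model over every visit; the treat:week term is
    # an effect "through" time (a difference in slopes between the arms).
    m3 = smf.mixedlm("y ~ treat * week", data=df, groups=df["id"]).fit()
    print(m1.params["treat"], m2.params["treat"], m3.params["treat:week"])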

CROSSOVER DESIGN Each subject is exposed to all treatments:
- The order of treatments is randomized
- The treatment given first may show a better (or worse) effect (a period effect), which randomizing the order guards against
Prognostic factors are balanced (self vs. self comparisons), so the required sample size is reduced considerably. Each participant receives the active treatment at some point during the study.

CROSSOVER DESIGN Disadvantages:
- The disease needs to be long-term
- Treatment must be taken regularly over time
- Relevant outcomes must occur and be measured over time
- Not relevant for acute treatments
- Concerns due to a ‘carryover effect’
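
A minimal sketch (Python with scipy assumed; all effects are simulated for illustration) of the classic 2x2 crossover analysis, which works on within-subject period differences so that subject-to-subject variability cancels:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 10                                  # subjects per sequence
    tau, per, sd = 1.0, 0.5, 0.8            # treatment effect, period effect, noise SD
    s_ab = rng.normal(0, 2.0, n)            # subject effects, sequence AB (active first)
    s_ba = rng.normal(0, 2.0, n)            # subject effects, sequence BA (active second)
    noise = lambda: rng.normal(0, sd, n)

    # Period-1 minus period-2 response per subject; the large subject
    # effect cancels, which is the source of the design's efficiency.
    d_ab = (s_ab + tau + noise()) - (s_ab + per + noise())
    d_ba = (s_ba + noise()) - (s_ba + tau + per + noise())

    # Comparing the differences between sequences removes the period effect.
    test = stats.ttest_ind(d_ab, d_ba)
    est = (d_ab.mean() - d_ba.mean()) / 2   # estimates tau
    print(f"estimated treatment effect: {est:.2f} (true {tau}), p = {test.pvalue:.4f}")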

N-OF-1 DESIGN A special case of a crossover/repeated measures design, in which a single subject undergoes treatment for several pairs of periods. For each pair:
- The subject receives the experimental treatment for one period of the pair
- The subject receives the alternative treatment for the other period
- The order of the two treatments within each pair is randomized

N-OF-1 DESIGN Final outcome of the trial is a determination about the best treatment for the particular subject under study. Most feasible for treatments with rapid onset that stop acting soon after discontinuation. Results of a series of N-of-1 trials may be combined using meta-analysis.
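
A minimal sketch (Python with scipy assumed; the scores are invented) of one simple N-of-1 analysis, a sign test across the randomized period pairs for a single patient:

    from scipy import stats

    # Symptom scores (lower = better) in 6 randomized period pairs.
    experimental = [3, 4, 2, 3, 5, 2]
    control = [5, 4, 4, 6, 5, 4]

    wins = sum(e < c for e, c in zip(experimental, control))
    ties = sum(e == c for e, c in zip(experimental, control))

    # Sign (binomial) test, excluding ties: does the experimental
    # treatment beat control more often than chance for THIS patient?
    res = stats.binomtest(wins, n=len(experimental) - ties, p=0.5,
                          alternative="greater")
    print(f"wins: {wins} of {len(experimental)} pairs, sign-test p = {res.pvalue:.3f}")

Estimates from a series of such trials in different patients can then be pooled by meta-analysis, as noted above.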

SELECTION DESIGNS Selection (ranking) designs compare parameters of multiple (k) study populations, and generally require smaller sample sizes than trials designed to estimate and test treatment effects. Selection designs can be used to:
- Select the treatment with the best response out of k potential treatments
- Rank treatments in order of preference
- Rule out poor treatments from further study (helpful with the ‘pipeline’ problem)
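
A minimal sketch (Python with numpy assumed; the arm means are hypothetical) of why selection needs less data: the design only has to pick the best arm, not prove superiority. The simulation estimates the probability of correct selection under a pick-the-winner rule:

    import numpy as np

    rng = np.random.default_rng(3)
    means = [0.0, 0.0, 0.3]     # k = 3 arms; arm 3 is truly best by 0.3 SD
    for n in (10, 20, 40):
        # 20,000 simulated trials: observed mean per arm, then pick the winner.
        arm_means = rng.normal(means, 1.0, size=(20_000, n, len(means))).mean(axis=1)
        p_correct = (arm_means.argmax(axis=1) == 2).mean()
        print(f"n per arm = {n:3d}: P(select best arm) = {p_correct:.2f}")

With these illustrative numbers, correct selection becomes likely at per-arm sizes far smaller than those a confirmatory superiority test of the same arms would require.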

FUTILITY DESIGN A futility (non-superiority) design is a screening tool to identify whether agents should be candidates for phase III trials, while minimizing costs and sample size.
- If “futility” is declared, the results imply it is not cost-effective to conduct a future phase III trial
- If “futility” is not declared, the results suggest there could be a clinically meaningful effect, which should be explored in a larger, phase III trial

FUTILITY DESIGN For example, suppose a 10% increase in favorable response rates over placebo is clinically meaningful. A futility design would assess the following hypotheses:
H0: Treatment improves outcome by at least 10% compared to placebo (pT − pP ≥ 0.10)
versus
HA: Treatment does not improve outcome by at least 10% compared to placebo (pT − pP < 0.10; futile to consider in a phase III trial)
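
A minimal sketch (Python with scipy assumed; the counts are invented) of the corresponding one-sided test using a normal approximation, where rejecting H0 declares futility:

    import numpy as np
    from scipy import stats

    margin = 0.10
    xt, nt = 22, 50        # responders / total, treatment arm
    xp, np_ = 18, 50       # responders / total, placebo arm

    pt, pp = xt / nt, xp / np_
    se = np.sqrt(pt * (1 - pt) / nt + pp * (1 - pp) / np_)
    z = (pt - pp - margin) / se
    pval = stats.norm.cdf(z)      # one-sided: small p => declare futility

    print(f"pT - pP = {pt - pp:.2f}, z = {z:.2f}, p = {pval:.3f}")
    print("declare futility" if pval < 0.05 else "cannot declare futility")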

FUTILITY DESIGN Statistical Properties:

                          Usual Design                     Futility Design
  Null Hypothesis (H0)    μT = μP                          μT − μP ≥ 0
  Alternative (HA)        μT ≠ μP                          μT − μP < 0
  Rejecting H0            New treatment is effective       New treatment is futile
                          (or harmful)
  Type I Error (α)        Ineffective therapy declared     Effective therapy declared
                          effective                        futile
  Type II Error (β)       Effective therapy declared       Futile therapy not declared
                          ineffective                      futile

FUTILITY DESIGN
- High negative predictive value: if “futility” is declared, the treatment is likely not effective
- Low positive predictive value: lack of “futility” does not imply the treatment is effective
Thus, the futility design is appropriate when the error of failing to go to phase III with a superior treatment is considered more serious than the error of going to phase III with an ineffective treatment. It is an improvement over running underpowered efficacy trials in phase II, or conducting a phase III trial as the first rigorous test of efficacy for a new treatment.

ADAPTIVE DESIGNS The traditional approach to clinical trials tends to be large, costly, and time-consuming. There is a need for more efficient clinical trial designs, which should lead to an increased chance of a “successful” trial that answers the question of interest. Hence, there is increasing interest in innovative trial designs. For example, adaptive designs allow accumulating information to be reviewed during an ongoing clinical trial in order to possibly modify trial characteristics.

ADAPTIVE DESIGNS Adaptive Design Working Group (Gallo et al., 2006):
- “By adaptive design we refer to a clinical study design that uses accumulating data to modify aspects of the study as it continues, without undermining the validity and integrity of the trial.”
- “…changes are made by design, and not on an ad hoc basis”
- “…not a remedy for inadequate planning.”
FDA “Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics” (2010):
- “…a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study.”

ADAPTIVE DESIGNS Thus, both groups support the notion that changes are based on pre-specified decision rules: “Adaptive by Design.” Properly designed simulations are often needed to confirm that adaptations preserve the integrity and validity of the study. In order to properly define the simulations, the adaptation rules must be clearly specified in advance. Thus, only planned adaptations can be guaranteed to avoid unknown bias due to the adaptation.
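
A minimal sketch (Python with numpy/scipy assumed; the design parameters are arbitrary) of the kind of simulation meant here, checking that a pre-specified adaptation (a single interim futility stop) preserves the type I error rate under the null:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_half, sims, rejections = 25, 20_000, 0
    for _ in range(sims):
        # Simulate under the null hypothesis: no treatment effect.
        t1, c1 = rng.normal(0, 1, n_half), rng.normal(0, 1, n_half)
        if t1.mean() - c1.mean() < -0.2:   # pre-specified interim futility rule
            continue                       # stop for futility: never rejects
        t2, c2 = rng.normal(0, 1, n_half), rng.normal(0, 1, n_half)
        p = stats.ttest_ind(np.r_[t1, t2], np.r_[c1, c2]).pvalue
        rejections += p < 0.05
    print(f"simulated type I error: {rejections / sims:.3f}")

A pure futility stop can only make the test conservative; adaptations such as early efficacy stopping or sample size re-estimation call for the same style of simulation, but can inflate the error rate if they are not pre-specified and properly accounted for.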

SUMMARY An appropriate study design has a sufficient sample size, adequate power, and proper control of bias to allow a meaningful interpretation of the results. Although small clinical trials pose important limitations, these issues cannot be ignored. The majority of methods research for clinical trials is based on large-sample theory, so additional research into innovative designs for small clinical trials is needed.

SUMMARY One of the objectives of this course is to give researchers the tools and connections they need to successfully design these types of trials. Please consider going to the following website to evaluate the webinar: bit.ly/r25eval