Updates on Regulatory Requirements for Missing Data


Updates on Regulatory Requirements for Missing Data Ferran Torres, MD, PhD Hospital Clinic Barcelona Universitat Autònoma de Barcelona

Documentation (http://ferran.torres.name/edu/dia): the PowerPoint presentation, direct links to the guidelines, and a list of selected relevant references.

Disclaimer: The views expressed here are those of the author and may not necessarily reflect those of any of the institutions he is affiliated with: Spanish Medical Agency (AEMPS), EMEA (SAWP; EWP), Hospital Clinic Barcelona, Autonomous University of Barcelona. The views expressed in this presentation are my personal views and may not be understood or quoted as being made on behalf of, or as reflecting the position of, any of the institutions …

Regulatory guidance concerning MD: 1998: ICH E9, Statistical Principles for Clinical Trials. 2001: PtC on Missing Data. Dec 2007-2008: Recommendation for the revision of the PtC on MD. 2009: Release for consultation.

ICH E9 (3, 6). Key points: potential source of bias; common in clinical trials; avoiding MD; importance of the methods; pre-specification; lack of a universally accepted handling method; sensitivity analysis; identification and description of missingness. These are the key points included in ICH E9, and some of them were also marginally described in E3 and E6.

Status in early 2000s: In general, MD was not seen as a source of bias; it was considered mostly a loss-of-power issue, and little effort went into avoiding MD. Methods for dealing with it: available data only; handling of missingness mostly by LOCF or worst case.

Status in early 2000s: very little information on the handling of MD in protocols and SAPs (little pre-specification); no sensitivity analyses, or only one and with no justification; little or no identification and description of missingness in reports.

PtC on MD. Structure: Introduction; The effect of MD on data analysis; Handling of MD; General recommendations.

Main points: Avoidance of MD. Bias: especially when MD is related to the outcome. Methods: a warning about LOCF, and an opening of the door to other methods (multiple imputation, mixed models…). Sensitivity analysis.

Current status in 2008-9: Missing data remains a problem in protocols and final reports: little or no critical discussion of the pattern of MD and withdrawals; no sensitivity analysis, or only one. Methods: inappropriate methods for the handling of MD; LOCF still used as a general approach in too many situations; methods with very little use in the early 2000s are now common (mixed models); MMRM is sometimes the only approach in some submissions.

New Draft PtC 1. Executive Summary 2. Introduction 3. The Effect of MD on the Analysis & the Interpretation 4. General Recommendations 4.1 Avoidance of Missing Data 4.2 Design of the Study. Relevance Of Predefinition 4.3 Final Report 5. Handling of Missing Data 5.1 Theoretical Framework 5.2 Complete Case Analysis 5.3 Methods for Handling Missing Data 6. Sensitivity Analyses Enlarged, extended

Statistical framework: the applicability of methods rests on a classification according to the missingness generation mechanism: missing completely at random (MCAR), missing at random (MAR), missing not at random (MNAR). We have included the statistical classification of MD according to the generation mechanism, as well as its implications for the applicability of the different methods.
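The three mechanisms can be illustrated with a small simulation. This sketch is not from the guideline; all numbers and variable names are invented for illustration. It shows why the classification matters: a complete-case mean is unbiased under MCAR but biased under MNAR, where dropout depends on the outcome itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
baseline = rng.normal(50, 10, n)          # observed covariate
outcome = baseline + rng.normal(0, 5, n)  # outcome of interest

# MCAR: probability of being missing is constant, unrelated to any data.
mcar = rng.random(n) < 0.2

# MAR: probability of being missing depends only on the observed baseline.
# The complete-case marginal mean is still distorted, but the bias can be
# corrected by modelling the observed data (as MAR methods do).
mar = rng.random(n) < 1 / (1 + np.exp(-(baseline - 50) / 5))

# MNAR: probability of being missing depends on the outcome value itself.
mnar = rng.random(n) < 1 / (1 + np.exp(-(outcome - 50) / 5))

print(f"full mean:           {outcome.mean():.1f}")
print(f"observed under MCAR: {outcome[~mcar].mean():.1f}")  # close to full mean
print(f"observed under MNAR: {outcome[~mnar].mean():.1f}")  # biased downward
```

Under MNAR the subjects with high outcomes are preferentially lost, so no analysis of the observed data alone can recover the true mean without extra assumptions.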

Options after withdrawal. [Figure: individual outcome trajectories after withdrawal, plotted from better to worse over 0-18 months of follow-up.] We tackled any kind of missing data, but missings due to withdrawals are probably the most relevant and worrying.

Options after withdrawal: Ignore that information completely (available-data-only approach). "Force" data retrieval? "Pure" estimates are valid only when no treatment alternatives are available; otherwise the effect will be contaminated by the effect of other treatments. Single imputation methods. MAR methods: mixed-effect models for repeated measures (MMRM). MNAR methods. The guideline sets the general principles and describes the main considerations for the methods of handling MD, but unfortunately it is not a cookbook. To force retrieval: but up to what extent? Outcomes will be somewhat contaminated when there are alternatives. MNAR methods: there is still little experience with them in regulatory submissions, and their role should probably be focused on sensitivity analysis.

Single imputation methods: LOCF, BOCF and others. Many problems were described in the previous PtC. Their potential for bias depends on many factors, including the true evolution after dropout, the time and reason for withdrawal, and the proportion of missingness in each treatment arm. They do not necessarily yield a conservative estimate of the treatment effect, and the imputation may distort the variance and the correlations between variables. There is very little innovation in this part: avoid the overuse of LOCF.
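As a concrete illustration of what LOCF does, here is a minimal sketch on hypothetical data (subject and visit names invented) using pandas: each subject's last observed value is simply frozen and carried forward, regardless of when or why the subject dropped out.

```python
import numpy as np
import pandas as pd

# Hypothetical longitudinal outcomes: one row per subject, one column per visit.
wide = pd.DataFrame(
    {"v1": [10.0, 12.0, 9.0],
     "v2": [14.0, np.nan, 11.0],   # subj_1 dropped out after visit 1
     "v3": [np.nan, np.nan, 13.0]},  # subj_0 and subj_1 missing at visit 3
    index=["subj_0", "subj_1", "subj_2"],
)

# LOCF: carry the last observation forward along each row.
locf = wide.ffill(axis=1)
print(locf)
```

If outcomes typically deteriorate after dropout, the frozen values flatter whichever arm has more (or earlier) withdrawals, which is why LOCF is not inherently conservative.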

MMRM (and other MAR methods). The MAR assumption: MD depends on the observed data, so the behaviour of the post-dropout observations can be predicted from the observed data. It seems reasonable and is not a strong assumption, at least a priori. In RCTs, the reasons for withdrawal are known. Other assumptions seem stronger and more arbitrary. MAR methods are extensively treated in this update. In RCTs the reasons are always recorded, so the assumptions under which these methods work can be assessed to some extent.
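A full MMRM models the repeated measures with an unstructured covariance matrix over visits; as a simplified stand-in, the sketch below (simulated data, all names and effect sizes invented) fits a linear mixed model with a subject-level random intercept via statsmodels. Like MMRM, it uses all available observations without imputation and gives valid estimates under MAR.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_visits = 60, 3
subj = np.repeat(np.arange(n_subj), n_visits)
visit = np.tile(np.arange(n_visits), n_subj)
trt = np.repeat(rng.integers(0, 2, n_subj), n_visits)  # 0 = control, 1 = active

# True model: subject-level random intercept, a time trend, and a
# treatment-by-time interaction of 1.0 per visit.
rand_int = rng.normal(0, 1, n_subj)[subj]
y = rand_int + 0.5 * visit + 1.0 * trt * visit + rng.normal(0, 1, n_subj * n_visits)
df = pd.DataFrame({"y": y, "subject": subj, "visit": visit, "trt": trt})

# Remove ~20% of post-baseline records to mimic withdrawal.
keep = ~((visit > 0) & (rng.random(len(df)) < 0.2))
df_obs = df[keep]

# Mixed model fitted to all available observations; no imputation needed.
fit = smf.mixedlm("y ~ visit * trt", df_obs, groups=df_obs["subject"]).fit()
print(fit.params["visit:trt"])  # treatment-by-time estimate, near 1.0
```

The point of the sketch is only the mechanics: dropouts contribute their observed visits, and the model borrows strength across subjects instead of filling in values.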

However… it is reasonable to consider that the treatment effect will somehow cease or attenuate after withdrawal. If there is a good response, MAR will not "predict" a bad response, so the MAR assumption is not suitable for early drop-outs due to safety issues. In this context MAR seems likely to be anti-conservative. Imagine the case of a very effective but also relatively toxic treatment: early drop-outs because of safety will not penalise the final treatment effect, since the observed data are favourable.

The main analysis: what should it reflect? A) The "pure" treatment effect: estimation using the "on treatment" effect after withdrawal; ignore effects (changes) after treatment discontinuation; does not mix up efficacy and safety. B) The expected treatment effect under "usual clinical practice" conditions. People who are very much in favour say: … Imputation Using Dropout Reason (IUDR). Good sensitivity analyses: differential imputation according to withdrawals, penalising treatment-related withdrawals (i.e., lack of efficacy, safety).

MAR. MMRM aims to estimate the treatment effect that would be seen if all patients had continued in the study as planned. In that sense MMRM results could be seen as not fully compliant with the ITT principle. Regulatory assessment is focused on what could be expected "on average" in a population where not all patients have complied with the assigned treatment for the full duration of the trial.

Description of MD. Detailed description (numerical and graphical) of: the pattern of MD; the rate and time of withdrawal, by reason, time/visit and treatment (some withdrawals will occur between visits: use survival methods); the outcome, by reason of withdrawal and also for completers. These data can be highly informative for assessing the potential bias and the adequacy of the assumptions made in the handling of MD.
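A minimal sketch of the kind of tabulation meant here, on an invented withdrawal log (column names and reasons are hypothetical): withdrawals cross-tabulated by reason and treatment arm.

```python
import pandas as pd

# Hypothetical withdrawal log: one row per withdrawn subject.
wd = pd.DataFrame({
    "arm": ["active", "active", "placebo", "active", "placebo"],
    "last_visit": [2, 4, 2, 6, 4],
    "reason": ["adverse event", "lack of efficacy", "lost to follow-up",
               "adverse event", "lack of efficacy"],
})

# Count withdrawals by reason and treatment arm.
table = (wd.pivot_table(index="reason", columns="arm",
                        values="last_visit", aggfunc="count")
           .fillna(0).astype(int))
print(table)
```

An imbalance such as more adverse-event withdrawals in the active arm is exactly the signal that makes a plain MAR analysis questionable and motivates the sensitivity analyses below.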

General recommendations: Sensitivity analysis (there is a new separate section). Avoidance of MD. Design: relevance of predefinition (avoid data-driven methods); detailed description and justification of the absence of bias in favour of the experimental treatment. Final report: detailed description of the planned methods and of any amendments to them. Emphasize, stress, highlight.

Sensitivity Analyses. One specific section: a set of analyses showing the influence of different methods of handling missing data on the study results, pre-defined and designed to assess the repercussion on the results of the particular assumptions made in the handling of missingness. Sensitivity analyses may lend robustness to the conclusions. Responder analysis: commonly, the primary analysis of a continuous variable is supported by a responder analysis. How missing data are to be categorised in this analysis should be pre-specified and justified. Though suboptimal from a statistical perspective, in some cases… treating missing data as failures and conducting a responder analysis could be the only viable option.
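One common way to pre-specify such a sensitivity analysis is a delta-adjustment (tipping-point) approach, which is not named in the slides but fits the stated goal of stressing the imputation assumptions. A minimal sketch with invented numbers: dropouts in the active arm are imputed at the observed active-arm mean shifted by an increasingly unfavourable penalty delta, and one watches how far the treatment effect can be pushed before the conclusion changes.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical completer outcomes (change from baseline, higher = better).
active = rng.normal(3.0, 4.0, 80)    # 80 completers in the active arm
placebo = rng.normal(1.0, 4.0, 100)  # placebo arm assumed complete here
n_missing = 20                       # dropouts in the active arm

# Delta adjustment: penalise the imputed values for the active-arm dropouts
# by delta, then recompute the treatment effect under each scenario.
for delta in [0.0, -1.0, -2.0, -3.0]:
    imputed = np.full(n_missing, active.mean() + delta)
    effect = np.concatenate([active, imputed]).mean() - placebo.mean()
    print(f"delta={delta:+.1f}  treatment effect={effect:.2f}")
```

Each extra unit of penalty moves the effect by delta times the missing fraction (here 20/100), so the whole grid can be pre-specified and its "tipping point" discussed in the protocol.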

Concluding remarks: Avoid and foresee MD. Sensitivity analyses. Methods for handling: there is no gold standard for every situation. In principle, "almost any method may be valid", but its appropriateness has to be justified. MNAR methods: mainly as sensitivity analyses.

The regulatory view is sometimes difficult to understand and probably too conservative

But the aim is to avoid a free lunch and to set some reasonable rules for that

Basically, to predefine, justify, discuss and to make clear the analysis and assumptions

… in the end, to avoid any bias, basically that favouring the experimental arm

http://ferran.torres.name/edu/dia