EFPSI-BBS Meeting on M&S, Basel 13 September 2012


EFPSI-BBS Meeting on M&S, Basel, 13 September 2012
Using M&S to develop a novel longitudinal model-based approach for efficacy assessments, with an application to biosimilars in rheumatoid arthritis
D Renard¹, B Bieth¹, F Mentré², G Heimann¹, I Demin¹, B Hamrén¹, S Balser³
¹ Modeling and Simulation, Novartis, Basel (Switzerland); ² UMR 738, INSERM and Université Paris Diderot, Paris (France); ³ Sandoz Biopharmaceuticals, Holzkirchen (Germany)

Outline
- Background
- M&S approach
- Simulation models
- Statistical methodology
- Some results
- Next steps
| EFPSI-BBS meeting | D Renard | 13 Sep 2012 | Longitudinal model-based approach for efficacy assessments

Problem we attempt to address
Question: Can the size of a Phase III study be reduced by optimizing the choice of analysis methodology, compared with the standard approach (end-point analysis)?
An innovative model-based approach was developed, aiming to maintain strict regulatory standards for Phase III:
- Sound statistical properties
- Fully pre-specified analysis
Note: this is NOT how pharmacometric (M&S) analyses are routinely applied in drug development!

Application: biosimilars in rheumatoid arthritis (RA)
- In RA studies, standard efficacy assessments rely on ACR20 after 24 weeks of treatment (ACR20 = American College of Rheumatology 20% response criterion)
- The objective is to demonstrate similar efficacy between the reference and biosimilar products, formally achieved through statistical equivalence testing
- Standard method:
  - Based on response rates (# ACR20 responders / # patients)
  - Uses only data collected at Week 24
  - Equivalence is inferred when the 95% confidence interval (CI) for the treatment effect lies entirely within the equivalence margins
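The standard end-point analysis can be sketched as follows. This is a minimal illustration: the responder counts, sample sizes, and the ±0.15 margin are made-up assumptions, not values from the study, and a Wald CI stands in for whatever interval method a protocol would actually specify.

```python
# Sketch of the Week-24 end-point equivalence analysis: compare the 95% CI
# for the difference in ACR20 response rates against equivalence margins.
from math import sqrt

def equivalence_test(resp_ref, n_ref, resp_test, n_test, margin=0.15):
    """Wald 95% CI for the difference in response rates; equivalence is
    concluded when the CI lies entirely within (-margin, +margin)."""
    p_ref, p_test = resp_ref / n_ref, resp_test / n_test
    diff = p_test - p_ref
    se = sqrt(p_ref * (1 - p_ref) / n_ref + p_test * (1 - p_test) / n_test)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    return diff, (lo, hi), (-margin < lo and hi < margin)

# Illustrative counts: 110/180 vs 115/180 responders
diff, ci, equivalent = equivalence_test(110, 180, 115, 180)
```

Note that this analysis discards all visits before Week 24; the longitudinal approach presented next is motivated by exactly that inefficiency.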

M&S approach
Step 1: Establish simulation models
- Requirement: simulate realistic study outcomes for ACR20 (including compound-specific characteristics)
- Simulation models were built using a mix of literature (summary-level) and internal (patient-level) data
- Model components: time course of response, different sources of variability, and dropout characteristics
Step 2: Develop the statistical methodology
- Longitudinal model-based approach for equivalence testing
- Rooted in nonlinear mixed-effects (NLME) modeling; entails pre-specifying different candidate models and relies on model-averaging techniques
Step 3: Simulations and assessment of performance
- An extensive simulation program was conducted to assess the performance of the longitudinal model-based approach
- More is needed!

Longitudinal model-based meta-analysis as the basis for realistic simulation models
- 9 anti-TNF agents, 37 studies
- ACR20 responses to methotrexate (MTX)
- [Figure: symbols = study results (size proportional to # patients); solid lines connect points from each study; broken lines = smooth curves for each patient population]
Demin et al, Clin Pharmacol Ther (2012)

Simulation models
- Different simulation models were considered to ensure robustness of the conclusions
- Models were set up to mimic different compounds based on the meta-analytic characterization
- Internal data were used to inform variability parameters in the models
- An example model is given on slide 10

The literature database was used to build assumptions about dropout
- Dropout increases proportionally over time, reaching 15% at Week 24
- Patients not responding at the previous visit were twice as likely to drop out as patients who were responders
- Dropout for non-responders was assumed to be due to lack of efficacy
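The dropout assumptions above can be sketched as a simple per-visit dropout rule. Everything here beyond the two stated assumptions (15% cumulative dropout by Week 24, non-responders twice as likely to drop out) is illustrative: the visit schedule, the linear split of the rate over visits, and the 50/50 responder mix used to calibrate the split are not from the presentation.

```python
# Illustrative dropout generator: cumulative dropout grows roughly linearly
# to 15% by Week 24, with non-responders at the previous visit twice as
# likely to drop out as responders.
import random

VISITS = [2, 4, 8, 12, 16, 20, 24]        # assumed visit weeks
TOTAL_DROPOUT = 0.15                       # 15% cumulative at Week 24
per_visit = TOTAL_DROPOUT / len(VISITS)    # average per-visit probability

def dropout_prob(prev_responder):
    # Choose p_resp so that (p_resp + 2*p_resp) / 2 == per_visit, i.e. the
    # average over an assumed 50/50 responder mix matches the overall rate.
    p_resp = 2 * per_visit / 3
    return p_resp if prev_responder else 2 * p_resp

def simulate_dropout_week(rng, responder_path):
    """Return the week a simulated patient drops out, or None if completing."""
    for week, prev_resp in zip(VISITS, responder_path):
        if rng.random() < dropout_prob(prev_resp):
            return week
    return None

# Fraction of persistent non-responders who drop out before Week 24
rng = random.Random(2012)
frac = sum(simulate_dropout_week(rng, [False] * len(VISITS)) is not None
           for _ in range(1000)) / 1000
```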

Longitudinal model-based testing (1)
NLME models are pre-specified to describe transition probabilities, based on a Markov assumption linking the entire longitudinal data.
Workflow: pre-specified candidate models → model averaging → equivalence testing
[Diagram: ACR20 responder status across visits (Weeks 2, 4, …, 20, 24), with transition probabilities Pr01 (non-responder → responder) and Pr10 (responder → non-responder)]
The response rate at Week 24 is a function of the transition probabilities.
Lacroix BD et al, Clin Pharmacol Ther 2009; 86: 387-395.

Longitudinal model-based test (2)
- Markov models are specified for two independent transition probabilities (i = subject, k = study visit)
- 10 candidate models are pre-specified, differing in parameter constraints and in their functions of time, ranging from simple to more complex
- Estimating ACR20 response rates for a given model:
  - Individual response probabilities are derived from the Markov property (previous slide) using the modeled transition probabilities
  - Population response rates are derived by integrating out the random effects (η10, η11)
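The Markov propagation step can be illustrated with a two-state chain. This is a deliberately simplified sketch: it uses constant transition probabilities p01 (non-responder → responder) and p11 (responder stays responder), whereas the actual candidate models let transitions vary with time and include subject-level random effects that are integrated out.

```python
# Minimal two-state Markov sketch: the Week-24 response probability follows
# from repeatedly applying the transition probabilities across visits.

def response_prob_week24(p01, p11, n_visits=7, p_start=0.0):
    """Propagate P(responder) across visits under a two-state Markov chain."""
    p = p_start                      # probability of being a responder at baseline
    for _ in range(n_visits):
        p = p * p11 + (1 - p) * p01  # law of total probability over the two states
    return p

# Illustrative values: p01 = 0.25, p11 = 0.85 (not estimates from the study)
rate = response_prob_week24(p01=0.25, p11=0.85)
```

As the number of visits grows, the response probability approaches the chain's stationary value p01 / (1 - p11 + p01), which is why early-visit data carry information about the Week-24 rate.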

Longitudinal model-based test (3)
Key concept: model averaging
- Used to estimate the response rates at Week 24 by combining results from the different candidate models (k = treatments, m = models)
- Point estimate = weighted average of the individual model estimates
- Weights are functions of a statistical criterion (BIC): larger weights are assigned to models that fit the data better
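BIC-based model-averaging weights of the kind described above can be sketched as follows: each candidate model's weight is proportional to exp(−BIC/2), so better-fitting (lower-BIC) models dominate the averaged estimate. The BIC values and per-model estimates below are made-up numbers for illustration only.

```python
# Sketch of model averaging with BIC weights: weight_m ∝ exp(-BIC_m / 2),
# normalized to sum to 1, then applied to the per-model Week-24 estimates.
import math

def bic_weights(bics):
    best = min(bics)
    raw = [math.exp(-(b - best) / 2) for b in bics]  # shift by min for numerical stability
    total = sum(raw)
    return [r / total for r in raw]

def model_average(estimates, bics):
    return sum(w * e for w, e in zip(bic_weights(bics), estimates))

# Three illustrative candidate models
est = model_average(estimates=[0.60, 0.62, 0.55], bics=[1012.3, 1010.1, 1025.8])
```

With these numbers the middle model (lowest BIC) receives most of the weight and the worst-fitting model essentially none, mirroring the weight pattern shown on the example slides.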

Longitudinal model-based test (4)
Example: candidate models
- 10 candidate models; 3 shown with their corresponding weights (w = .18, w = .75, w = .00)

Longitudinal model-based test (5)
Example: model averaging
- 10 candidate models; 3 shown with their corresponding weights (w = .18, w = .75, w = .00)
- Model-average estimate (thick black curve)

Longitudinal model-based test (6)
- The bootstrap is used to derive a confidence interval for the treatment difference at Week 24
- Bootstrap datasets are built by resampling over subjects
- The 95% CI is compared with the equivalence margins for equivalence testing
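The subject-level bootstrap can be sketched as below. To keep the example self-contained, a simple response-rate difference stands in for the full model-averaged Week-24 estimate, and the two arms are illustrative 0/1 responder vectors rather than study data.

```python
# Sketch of a percentile bootstrap for the treatment difference, resampling
# over subjects within each arm, as described on the slide.
import random

def bootstrap_ci(ref, test, n_boot=2000, seed=12345):
    """Percentile 95% CI for the difference in response rates."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ref_b = [rng.choice(ref) for _ in ref]     # resample reference-arm subjects
        test_b = [rng.choice(test) for _ in test]  # resample test-arm subjects
        diffs.append(sum(test_b) / len(test_b) - sum(ref_b) / len(ref_b))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot) - 1]

ref = [1] * 110 + [0] * 70    # 110/180 responders (illustrative)
test = [1] * 115 + [0] * 65   # 115/180 responders (illustrative)
lo, hi = bootstrap_ci(ref, test)
```

In the actual method each bootstrap replicate would re-fit all candidate models and recompute the model-averaged estimate, which is what makes the CI account for model-selection uncertainty.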

Model-based analysis versus classical test
The model-based approach does not change the nature of the comparability testing, and makes it more efficient!
[Figure: difference in response rates (%) by simulation run; symbols are point estimates, bars are 95% CIs; 10 simulation runs assuming strict equivalence (n = 180/arm)]

Simulation results: power
- 40% reduction in sample size compared with the classical test at power levels of 80% and 90%
- Power assessed based on 1000 simulations

Simulation results: type I error
- The type I error rate is close to the 2.5% nominal level
- Assessment based on 1000 simulations (bars represent Monte Carlo error)
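The simulation-based assessment of power and type I error follows a standard recipe: simulate many trials, apply the test, and report the rejection fraction with its Monte Carlo standard error. The sketch below uses a simple binomial trial generator and the Week-24 end-point test as a stand-in for the full longitudinal simulation and model-based analysis; the response rates, n = 180/arm, and ±0.15 margin are illustrative assumptions.

```python
# Sketch of estimating operating characteristics by simulation: the
# rejection rate under strict equivalence approximates power, and the
# rejection rate with a true difference at the margin approximates type I error.
import math
import random

def simulate_rejection_rate(p_ref, p_test, n=180, margin=0.15, n_sim=1000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        x_ref = sum(rng.random() < p_ref for _ in range(n))
        x_test = sum(rng.random() < p_test for _ in range(n))
        pr, pt = x_ref / n, x_test / n
        d = pt - pr
        se = math.sqrt(pr * (1 - pr) / n + pt * (1 - pt) / n)
        if -margin < d - 1.96 * se and d + 1.96 * se < margin:
            hits += 1
    rate = hits / n_sim
    mc_se = math.sqrt(rate * (1 - rate) / n_sim)  # Monte Carlo standard error
    return rate, mc_se

power, err = simulate_rejection_rate(0.60, 0.60)   # strict equivalence
type1, _ = simulate_rejection_rate(0.60, 0.75)     # true difference at the margin
```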

Summary of methodology
- The model-based analysis uses all collected data to derive an estimate of the treatment effect at the end of the study (Week 24), together with its confidence interval
- Model averaging is used to protect against model misspecification
- The number of patients was reduced by up to 40% with the longitudinal model-based analysis, confirmed by additional simulation scenarios and sensitivity analyses
- Extensions: the principles could be applied to other types of endpoints in RA (e.g. DAS28), to other therapeutic areas for biosimilarity assessments, or to efficacy assessments in late-stage clinical development of new drugs

Summary of health authority interactions
- Initial project feedback from the EMA was negative
- Overall encouraging feedback was obtained at the EMA/EFPIA workshop (Dec 2011)
- The absence of theoretical results justifying type I error control appears to be a critical concern deserving careful consideration
How can regulatory acceptance be gained?
- Planned interaction with the EMA through the Innovation Task Force process
- Perform a large simulation study to evaluate type I error control (ongoing)
- Use the model-based approach as a supportive analysis in future studies
- Present and discuss the method with the scientific community