Considerations on Model-Based Approaches for Proof of Concept in Multi-Armed Studies
Tobias Mielke, QS Consulting, Janssen Pharmaceuticals
JSM, 1st of August 2018

Problem statement
Phase 2 of drug development:
- Show that the drug works.
- Model the dose-response relationship and pick a dose for confirmatory testing.
Typical approach:
- Study 1: top dose vs. control to establish proof of concept (PoC).
- Study 2: multiple doses vs. control to model the dose-response.
Problem: time, resources and use of information.
How can the data be used better by combining PoC and dose-finding?

Dose finding vs. PoC
Good PoC designs: study only the most promising (top) dose vs. control.
Good dose-finding designs: evaluate the dose-response where the "action" takes place. Problem: this region is not known in advance.
PoC with only the (top) dose vs. control gives no information on the dose-response.

Adaptive PoC-DF example in OA (Miller et al. 2014)
Endpoint: WOMAC pain score. Assumptions: effect at the top dose 8 mm, standard deviation 22 mm.
Design options evaluated:
- One study: 3 doses vs. placebo, 440 patients for 90% power at α = 10% (Tukey). Too risky, as not enough is known about the efficacy of the compound.
- Two studies: PoC (N = 140) followed by a separate DF study (N = 440). Too expensive if DF is initiated, and no data available to guide the selection of the DF doses.
- One combined study: PoC (N = 140) + 2 new arms, with all arms filled up to N = 440. No data available to guide the selection of the additional doses.
- One combined study: PoC (N = 175 on 3 arms) with dose selection based on an interim analysis.

Adaptive PoC-DF example in OA (Miller et al. 2014)
One combined study: PoC (N = 175 on 3 arms) with dose selection based on an interim analysis.
Design idea: the medium study dose guides the dose selection.
ADDPLAN DF 4.0: simulating and analyzing adaptive dose-finding studies.

Testing PoC: Test for any difference using MCPMod
MCPMod for dose-finding (Bretz et al. 2005):
- Test $H_0: \mu_0 = \mu_1 = \dots = \mu_G$ using an optimized contrast test (e.g. the Tukey test in Miller et al. 2014).
- Use optimized contrast coefficients to test against a flat dose-response: coefficients $c_0, \dots, c_G$ with $\sum_{i=0}^{G} c_i = 0$; test $H_{0;c}: c^T \mu = 0$, whose rejection rejects $H_0$.
- Given an assumption $\mu^*$ on $\mu$, choose the contrast to maximize the power (see the sketch below):
$$P_{\mu = \mu^*}\left( \frac{c^T X}{\sqrt{c^T \Sigma c}} > c_{1-\alpha} \right) = 1 - \Phi\left( c_{1-\alpha} - \frac{c^T \mu^*}{\sqrt{c^T \Sigma c}} \right) \to \max_c$$
Benefits:
- Data is shared between doses.
- Multiplicity does not depend on the number of doses, only on the number of contrast vectors.
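The contrast optimization and the power formula above can be illustrated in a few lines of code. The following is a minimal numerical sketch (not the ADDPLAN or MCPMod software implementation), assuming known variance, equal allocation and a guessed dose-response shape loosely inspired by the OA example; the doses, effects and resulting power are illustrative and are not meant to reproduce the slide's figures.

```python
import numpy as np
from scipy.stats import norm

def optimal_contrast(mu_star, Sigma):
    """Contrast c with sum(c) = 0 maximizing c'mu* / sqrt(c' Sigma c)."""
    Sinv = np.linalg.inv(Sigma)
    one = np.ones(len(mu_star))
    # generalized centering of the guessed mean vector, then weighting by Sigma^-1
    centered = mu_star - (one @ Sinv @ mu_star) / (one @ Sinv @ one) * one
    c = Sinv @ centered
    return c / np.linalg.norm(c)          # the scale of c is irrelevant for the test

def contrast_power(c, mu_star, Sigma, alpha=0.10):
    """1 - Phi(c_{1-alpha} - c'mu* / sqrt(c' Sigma c)), as on the slide."""
    ncp = c @ mu_star / np.sqrt(c @ Sigma @ c)
    return 1 - norm.cdf(norm.ppf(1 - alpha) - ncp)

# Illustrative setting loosely inspired by the OA example: placebo + 3 doses,
# guessed effects up to 8 mm, SD 22 mm, 110 patients per arm (440 in total).
mu_star = np.array([0.0, 4.0, 6.5, 8.0])          # guessed dose-response shape
n, sd = 110, 22.0
Sigma = np.diag([sd**2 / n] * len(mu_star))       # covariance of the arm means

c_opt = optimal_contrast(mu_star, Sigma)
print("optimal contrast:", np.round(c_opt, 3))
print("power:", round(contrast_power(c_opt, mu_star, Sigma, alpha=0.10), 3))
```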

Testing PoC: Test for any difference using MCPMod
Example: $X \sim N(d, 1)$, $d \in [0, 1]$. Sample size required for 90% power at one-sided α = 2.5% (see the sketch below):

Study arms (doses)           Pairwise comparisons   Linear trend
2 (0, 1)                     46                     -
3 (0, 0.5, 1)                75                     66
4 (0, 0.33, 0.67, 1)         104                    80
5 (0, 0.25, 0.5, 0.75, 1)    130                    90

- Optimal PoC design: look only at the "best dose" vs. control.
- Intermediate doses are possible, but cost power/patients.
- Ways to retrieve some of the lost power:
  - Unequal allocation, e.g. 40%, 20%, 40%: 55 patients required for the linear trend.
  - Dose selection, e.g. doses 0.0, 0.1, 1.0: 54 patients required for the linear trend.
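A sketch of how the trend-test column of this table can be computed, assuming σ = 1, equal allocation, a one-sided 2.5% level and a t-based power calculation (the pairwise-comparison column would additionally need a multiplicity adjustment, which is not shown here). For two arms the optimal trend contrast coincides with the pairwise comparison.

```python
import numpy as np
from scipy.stats import t as t_dist, nct

def total_n_for_trend(doses, alpha=0.025, power=0.90, sigma=1.0):
    """Smallest total sample size (equal allocation) so that the optimal
    linear-trend contrast test reaches the requested power when mu(d) = d."""
    mu = np.asarray(doses, dtype=float)      # linear dose-response: effect equals dose
    c = mu - mu.mean()                       # optimal contrast under equal allocation
    for n in range(2, 1000):                 # n = patients per arm
        df = len(doses) * (n - 1)
        ncp = (c @ mu) / (sigma * np.sqrt(c @ c / n))   # noncentrality of the t statistic
        if nct.sf(t_dist.ppf(1 - alpha, df), df, ncp) >= power:
            return n * len(doses)
    raise ValueError("no sample size found")

for doses in [(0, 1), (0, 0.5, 1), (0, 1/3, 2/3, 1), (0, 0.25, 0.5, 0.75, 1)]:
    print(f"{len(doses)} arms, doses {doses}: total N = {total_n_for_trend(doses)}")
```

Under these assumptions the printed totals should reproduce the linear-trend column above (46, 66, 80, 90) up to rounding.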

Testing PoC: Test for any difference
Some problems with the conventional PoC:
- Each extra dose costs power.
- The proof-of-concept stage needs to be appropriately powered for a "Go" decision.
- Uncertainty about the true effect leads to a potentially under- or overpowered study.
- The result is based only on significance vs. non-significance; it carries no information on the magnitude of the effect.
A potential alternative, Lalonde et al. (2007): set a "minimum acceptable effect" and a "target effect", with two error levels α1 and α2 (sketched below):
- Stop, if the effect is significantly (at α2) below the target effect.
- Go, if the effect is significantly (at α1) above the minimum acceptable effect and there is no "Stop".
- Pause, otherwise.
This controls the risk of dropping a promising development at level α2.
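A minimal sketch of this two-level decision rule for a single comparison, assuming a normally distributed effect estimate with known standard error; the MAV/TV values, α levels and variable names are illustrative, not taken from the slide.

```python
from scipy.stats import norm

def lalonde_decision(est, se, mav, tv, alpha1=0.10, alpha2=0.10):
    """Return 'Go', 'Stop' or 'Pause' for one estimated treatment difference."""
    stop = (est - tv) / se < norm.ppf(alpha2)       # significantly below the target value
    go = (est - mav) / se > norm.ppf(1 - alpha1)    # significantly above the minimum acceptable value
    if stop:                                        # 'Stop' takes precedence over 'Go'
        return "Stop"
    return "Go" if go else "Pause"

# Hypothetical example: estimated difference 6 mm (SE 2 mm), MAV = 3 mm, TV = 8 mm.
print(lalonde_decision(est=6.0, se=2.0, mav=3.0, tv=8.0))   # -> 'Go' with these numbers
```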

Testing PoC: Test for a target difference
Generalization of the Lalonde framework to multi-armed studies:
- Stop, if the effect is significantly below the target (at α2), i.e. the target may not be reachable: test $H_{0;TV}: \bigcup_{i=1}^{G} \{\mu_i \geq \mu_0 + TV\}$ at α2. All elementary hypotheses must be rejected at α2, so no multiplicity correction is required.
- Go, if there is some sign of efficacy (at α1): test $H_{0;MAV}: \bigcap_{i=1}^{G} \{\mu_i \leq \mu_0 + MAV\}$ at α1. At least one elementary hypothesis must be rejected, so a multiplicity correction is required. MCPMod could be used for this part to test for any difference (a simple sketch of the rule follows below).
... but where is the problem now?
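A sketch of the multi-armed generalization, assuming independent per-arm estimates of the difference to control with known standard errors; a simple Bonferroni adjustment is used here as a conservative stand-in for the MCPMod-based "Go" test mentioned above, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def multiarm_decision(est, se, mav, tv, alpha1=0.10, alpha2=0.10):
    """Go / Stop / Pause for G active doses vs. control (difference estimates)."""
    est, se = np.asarray(est, dtype=float), np.asarray(se, dtype=float)
    G = len(est)
    # Stop: every dose significantly below TV; intersection-union logic,
    # each elementary hypothesis tested at alpha2 without correction.
    stop = np.all((est - tv) / se < norm.ppf(alpha2))
    # Go: at least one dose significantly above MAV; Bonferroni-adjusted level
    # as a conservative stand-in for the MCPMod-based test.
    go = np.any((est - mav) / se > norm.ppf(1 - alpha1 / G))
    if stop:
        return "Stop"
    return "Go" if go else "Pause"

# Three active doses vs. control, illustrative numbers:
print(multiarm_decision(est=[2.0, 5.0, 7.0], se=[2.0, 2.0, 2.0], mav=3.0, tv=8.0))
```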

Testing PoC: Test for a target difference
Testing $H_{0;TV}: \bigcup_{i=1}^{G} \{\mu_i \geq \mu_0 + TV\}$ at α2: if just one arm is on the "null", this works well; with multiple arms there is a loss in power (simulated in the sketch below).
[Figure: results for 1, 2, 3, 4 active arms vs. placebo; two panels: linear dose-response, and all active doses on a plateau (constant maximum effect).]
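The loss in stopping power with more arms can be seen in a quick Monte Carlo sketch, assuming all active doses sit on a plateau at half the target value and the per-arm standard error is known; the values are illustrative and are not those behind the figure above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
tv, se, alpha2, n_sim = 8.0, 2.0, 0.10, 100_000
true_effect = tv / 2                        # all active doses on a plateau at half the target

for n_arms in (1, 2, 3, 4):
    est = rng.normal(loc=true_effect, scale=se, size=(n_sim, n_arms))
    stop = np.all((est - tv) / se < norm.ppf(alpha2), axis=1)   # every arm must look poor
    print(f"{n_arms} active arm(s): P(Stop) = {stop.mean():.3f}")
```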

Testing PoC: Model-based test for a target difference
Testing $H_{0;TV}: \bigcup_{i=1}^{G} \{\mu_i \geq \mu_0 + TV\}$ at α2. Alternative approach: use the MCPMod models.
- Dose-response function: $\eta(d, \theta) = \theta_0 + \theta_1 f(d, \theta^*)$ with $\theta^*$ defined in MCPMod.
- BLUE: $\hat{\theta} = (F^T \Sigma^{-1} F)^{-1} F^T \Sigma^{-1} Y$ with $F^T := \begin{pmatrix} 1 & \dots & 1 \\ f(d_0, \theta^*) & \dots & f(d_G, \theta^*) \end{pmatrix}$
- Estimator of the difference at the maximum dose: $\eta(d_G, \hat{\theta}) - \eta(d_0, \hat{\theta}) = \big(0,\, f(d_G, \theta^*) - f(d_0, \theta^*)\big) (F^T \Sigma^{-1} F)^{-1} F^T \Sigma^{-1} Y = \gamma^T \hat{\theta}$
- Distribution of the estimator of the difference: $\gamma^T \hat{\theta} \sim N\big(\gamma^T \theta,\, \gamma^T (F^T \Sigma^{-1} F)^{-1} \gamma\big)$
- Test for the target effect: $Z = \frac{\Delta - \gamma^T \hat{\theta}}{\sqrt{\gamma^T (F^T \Sigma^{-1} F)^{-1} \gamma}} > z_{1-\alpha_2}$, i.e. the maximum effect is significantly below Δ.
Model-based effect estimates and confidence intervals instead of pairwise comparisons (see the sketch below).
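A sketch of this model-based test for a single fixed candidate shape (an Emax shape with a guessed ED50), assuming known variance and equal allocation; the doses, sample size, true curve and target Δ are illustrative.

```python
import numpy as np
from scipy.stats import norm

doses = np.array([0.0, 0.25, 0.5, 1.0])
n, sd = 55, 1.0
Sigma = np.diag([sd**2 / n] * len(doses))              # covariance of the observed arm means Y

def f(d, ed50=0.2):                                    # standardized Emax shape, theta* = ED50 guessed
    return d / (ed50 + d)

F = np.column_stack([np.ones_like(doses), f(doses)])   # design matrix of eta(d) = theta0 + theta1 * f(d)
A = np.linalg.inv(F.T @ np.linalg.inv(Sigma) @ F)      # (F' Sigma^-1 F)^-1
gamma = np.array([0.0, f(doses[-1]) - f(doses[0])])    # picks out eta(d_G) - eta(d_0)

def model_based_stop(Y, delta, alpha2=0.10):
    """True if the model-based estimate of the maximum effect is significantly below delta."""
    theta_hat = A @ F.T @ np.linalg.inv(Sigma) @ Y     # BLUE of (theta0, theta1)
    z = (delta - gamma @ theta_hat) / np.sqrt(gamma @ A @ gamma)
    return z > norm.ppf(1 - alpha2), gamma @ theta_hat

# Simulated arm means under a true Emax-type curve with maximum effect 0.3; target delta = 0.5.
rng = np.random.default_rng(2)
Y = 0.3 * f(doses) / f(doses[-1]) + rng.normal(scale=sd / np.sqrt(n), size=len(doses))
print(model_based_stop(Y, delta=0.5))
```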

Testing PoC: Model-based test for a target difference
Test for the target effect: $Z = \frac{\Delta - \gamma^T \hat{\theta}}{\sqrt{\gamma^T (F^T \Sigma^{-1} F)^{-1} \gamma}} > z_{1-\alpha_2}$
The model-based test improves power but does not control the error (biased estimator!).
[Figure: power for 1, 2, 3, 4 active arms vs. placebo; dashed line: power for the linear model, dotted line: power for the Emax model; two panels: linear dose-response, and all active doses on a plateau (constant maximum effect).]

Testing PoC: Model-based test under model uncertainty
... including model uncertainty:
- Dose-response functions: $\eta(d, \theta) = \theta_0 + \theta_1 f(d, \theta^*)$ with $\theta^*$ from MCPMod.
- BLUE: $\hat{\theta} = (F^T \Sigma^{-1} F)^{-1} F^T \Sigma^{-1} Y$ with $F^T := \begin{pmatrix} 1 & \dots & 1 \\ f(d_0, \theta^*) & \dots & f(d_G, \theta^*) \end{pmatrix}$
- Estimator of the difference at the maximum dose: $\eta(d_G, \hat{\theta}) - \eta(d_0, \hat{\theta}) = \gamma^T \hat{\theta}$, with $\gamma^T \hat{\theta} \sim N\big(\gamma^T \theta,\, \gamma^T (F^T \Sigma^{-1} F)^{-1} \gamma\big)$.
- Test for the target effect: $Z = \frac{\Delta - \gamma^T \hat{\theta}}{\sqrt{\gamma^T (F^T \Sigma^{-1} F)^{-1} \gamma}} > z_{1-\alpha_2}$, i.e. the maximum effect is significantly below Δ.
Reject only if all model-based estimates from the MCPMod candidate set exclude Δ (see the sketch below).
Model-based effect estimates and confidence intervals instead of pairwise comparisons.
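A sketch of the "reject only if every candidate model excludes Δ" rule, assuming two fixed candidate shapes (linear, and Emax with a guessed ED50), known variance and equal allocation; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

doses = np.array([0.0, 0.25, 0.5, 1.0])
n, sd = 55, 1.0
Sigma = np.diag([sd**2 / n] * len(doses))
shapes = {"linear": lambda d: d,
          "emax":   lambda d: d / (0.2 + d)}           # ED50 = 0.2 is a design-stage guess

def stop_under_model_uncertainty(Y, delta, alpha2=0.10):
    """Stop only if the model-based test excludes delta for every candidate shape."""
    reject_all = True
    for f in shapes.values():
        F = np.column_stack([np.ones_like(doses), f(doses)])
        A = np.linalg.inv(F.T @ np.linalg.inv(Sigma) @ F)
        gamma = np.array([0.0, f(doses[-1]) - f(doses[0])])
        theta_hat = A @ F.T @ np.linalg.inv(Sigma) @ Y
        z = (delta - gamma @ theta_hat) / np.sqrt(gamma @ A @ gamma)
        reject_all = reject_all and (z > norm.ppf(1 - alpha2))
    return reject_all

# Weak linear effect (maximum 0.2) against a target of 0.5, illustrative numbers:
rng = np.random.default_rng(3)
Y = 0.2 * doses + rng.normal(scale=sd / np.sqrt(n), size=len(doses))
print(stop_under_model_uncertainty(Y, delta=0.5))
```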

Testing PoC: Model-based test under model uncertainty
Test for the target effect: $Z = \frac{\Delta - \gamma^T \hat{\theta}}{\sqrt{\gamma^T (F^T \Sigma^{-1} F)^{-1} \gamma}} > z_{1-\alpha_2}$
The test slightly overshoots the target, but the stopping probability is controlled ...
[Figure: power for 1, 2, 3, 4 active arms vs. placebo; dashed line: linear model, dotted line: Emax model, dot-dash line: "both" models; two panels: linear dose-response, and all active doses on a plateau (constant maximum effect).]

Testing PoC: Model-based test under model uncertainty
... as long as the true model is in the candidate set. So what to do?
[Figure: power for 1, 2, 3, 4 active arms vs. placebo; dashed line: linear model, dotted line: Emax model, dot-dash line: "both" models; two panels: exponential dose-response, and sigmoidal dose-response.]

Testing PoC: Model-based test under model uncertainty
... so what to do?
- Include more models in the candidate set: higher chance that at least one model has bias in the correct direction, which helps control the stopping probability; but stopping becomes less likely, as too many models need to signal "Stop" simultaneously.
- Do proper dose-response modelling: also fit the nonlinear parameters to the data. The bias could be reduced, but the effect estimates are then only asymptotically normally distributed.
- Use Bayesian modelling instead: assign prior model probabilities and prior distributions on all model parameters; given the data, calculate credible intervals for the maximum effect. This will also generally not control the false stopping probability.

Thank you for your attention!
References:
Bretz, F., Pinheiro, J.C. and Branson, M. (2005), "Combining Multiple Comparisons and Modeling Techniques in Dose-Response Studies", Biometrics, 61: 738-748.
Lalonde, R.L., Kowalski, K.G., Hutmacher, M.M., Ewy, W., Nichols, D.J., Milligan, P.A., Corrigan, B.W., Lockwood, P.A., Marshall, S.A., Benincosa, L.J., Tensfeldt, T.G., Parivar, K., Amantea, M., Glue, P., Koide, H. and Miller, R. (2007), "Model-based Drug Development", Clinical Pharmacology & Therapeutics, 82: 21-32.
Miller, F., Björnsson, M., Svensson, O. and Karlsten, R. (2014), "Experiences with an adaptive design for a dose-finding study in patients with osteoarthritis", Contemporary Clinical Trials, 37: 189-199.