1
Using Predictive Biomarkers in the Design of Adaptive Phase III Clinical Trials Richard Simon, D.Sc. Chief, Biometric Research Branch National Cancer Institute http://linus.nci.nih.gov/brb
2
Biometric Research Branch Website http://linus.nci.nih.gov/brb Powerpoint presentations and audio files Reprints & Technical Reports BRB-ArrayTools software BRB-ArrayTools Data Archive Sample Size Planning for Targeted Clinical Trials
3
Good Pivotal Clinical Trials Ask important questions Get reliable answers –Convincing to consumers of the trial results Regulators Practicing physicians Reimbursers
4
Important Question Does a new treatment, administered in a specified manner to a specified population of patients, result in improved patient benefit compared with standard-of-care treatment?
5
Reliable Answer Randomized control group Unbiased assessment of outcome Test pre-specified hypothesis Control for multiple analyses –Interim analyses –Patient subsets –Endpoints
6
Phase I and II Trials Develop information needed to plan effective pivotal trials Inference in phase I/II is local –It need only satisfy the study planners
7
Adaptive Trials Adaptiveness in phase I and II trials can help optimize the dose/schedule, regimen, and patient population in order to develop the right pivotal trial Most drugs fail because –They are toxic –They are ineffective –They are not tested in the right dose/schedule/regimen for the right population of patients Rushing to do the wrong pivotal trial is a mistake Adaptiveness in phase III must be carefully structured so as not to interfere with the reliability and convincingness of the pivotal trial –Data used to frame a hypothesis should be distinct from the data used to test the hypothesis
8
Many cancer treatments benefit only a small proportion of the patients to whom they are administered Targeting treatment to the right patients can greatly improve the therapeutic ratio of benefit to adverse effects –Treated patients benefit –Treatment is more cost-effective for society
9
“Biomarkers” Surrogate endpoints –A measurement made before and after treatment to determine whether the treatment is working –Surrogate for clinical benefit Predictive classifiers –A measurement made before treatment to select good patient candidates for the treatment
10
Surrogate Endpoints It is very difficult to properly validate a biomarker as a surrogate for clinical benefit. It requires a series of randomized trials with both the candidate biomarker and clinical outcome measured –Must demonstrate that treatment vs control differences for the candidate surrogate are concordant with the treatment vs control differences for clinical outcome –It is not sufficient to demonstrate that the biomarker responders survive longer than the biomarker non-responders
11
It is often more difficult and time consuming to properly “validate” an endpoint as a surrogate than to use the clinical endpoint in phase III trials
12
Biomarkers for use as endpoints in phase I or II studies need not be validated as surrogates for clinical outcome Unvalidated biomarkers can also be used for early “futility analyses” in phase III trials
13
Validation = Fit for Purpose The FDA terminology of "valid biomarker" and "probable valid biomarker" is not applicable to predictive classifiers "Validation" has meaning only as fitness for purpose, and the purpose of predictive classifiers is completely different from that of surrogate endpoints Criteria for validation of surrogate endpoints should not be applied to biomarkers used for treatment selection
14
Medicine Needs Predictive Markers not Prognostic Factors Most prognostic factors are not used because they are not therapeutically relevant Most prognostic factor studies are not focused on a clear objective – they use a convenience sample of patients for whom tissue is available –often the patients are too heterogeneous to support therapeutically relevant conclusions
15
In new drug development –The focus should be on evaluating the new drug in a population defined by a predictive classifier, not on “validating” the classifier In developing a predictive classifier for restricting a widely used treatment –The focus should be on evaluating the clinical utility of the classifier; Is clinical outcome better if the classifier is used than if it is not used?
16
New Drug Developmental Strategy (I) Develop a diagnostic classifier that identifies the patients likely to benefit from the new drug Develop a reproducible assay for the classifier Use the diagnostic to restrict eligibility to a prospectively planned evaluation of the new drug Demonstrate that the new drug is effective in the prospectively defined set of patients determined by the diagnostic
17
Using phase II data, develop a predictor of response to the new drug Patients predicted responsive: randomize between new drug and control Patients predicted non-responsive: off study
18
Applicability of Design I Primarily for settings where the classifier is based on a single gene whose protein product is the target of the drug With substantial biological basis for the classifier, it will often be unacceptable ethically to expose classifier negative patients to the new drug
19
Evaluating the Efficiency of Strategy (I) Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004; Correction 12:3229, 2006. Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005.
20
Two Clinical Trial Designs Compared Un-targeted design –Randomized comparison of T to C without screening for expression of molecular target Targeted design –Assay patients for expression of target –Randomize only patients expressing target
21
Separating Assay Performance from Treatment Specificity
δ1 = treatment effect for target-positive patients
δ0 = treatment effect for target-negative patients
Proportion of patients who are target positive
Sensitivity = Prob{Assay + | Target +}
Specificity = Prob{Assay − | Target −}
22
Randomized Ratio: the number of patients who must be randomized in the untargeted design relative to the number randomized in the targeted design
23
Randomized Ratio (sensitivity = specificity = 0.9)
Proportion expressing target | δ0 = 0 | δ0 = δ1/2
0.75 | 1.29 | 1.26
0.5 | 1.8 | 1.6
0.25 | 3.0 | 1.96
24
Screened Ratio: the number of patients randomized in the untargeted design relative to the number who must be screened for the targeted design
25
Screened Ratio (sensitivity = specificity = 0.9)
Proportion expressing target | δ0 = 0 | δ0 = δ1/2
0.75 | 0.9 | 0.88
0.5 | 0.9 | 0.80
0.25 | 0.9 | 0.59
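The dilution logic behind these tables can be sketched in a few lines of Python. This is a large-sample approximation that takes the required sample size as inversely proportional to the square of the average treatment effect; the exact ratios tabulated above were computed for specific endpoints, so some entries differ from this approximation. Function and parameter names are illustrative.

```python
# Approximate relative efficiency of targeted vs untargeted designs.
# Assumption (illustrative, not the exact calculation behind the tables):
# required sample size is proportional to 1 / (average treatment effect)^2.

def design_summary(delta1, delta0, prevalence, sensitivity, specificity):
    # Untargeted design enrolls everyone; its average effect mixes marker + and - patients.
    effect_untargeted = prevalence * delta1 + (1 - prevalence) * delta0

    # Targeted design enrolls assay-positive patients only.
    prob_assay_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    ppv = prevalence * sensitivity / prob_assay_pos        # fraction of enrollees truly marker +
    effect_targeted = ppv * delta1 + (1 - ppv) * delta0    # assay error dilutes the targeted effect

    randomized_ratio = (effect_targeted / effect_untargeted) ** 2  # n randomized: untargeted / targeted
    screened_ratio = randomized_ratio * prob_assay_pos             # n randomized untargeted / n screened for targeted
    return randomized_ratio, screened_ratio

# 25% of patients target positive, marker-negative effect half the marker-positive effect:
print(design_summary(delta1=1.0, delta0=0.5, prevalence=0.25,
                     sensitivity=0.9, specificity=0.9))    # ~ (1.96, 0.59), matching that table entry
```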
26
Trastuzumab Metastatic breast cancer 234 randomized patients per arm 90% power for a 13.5% improvement in 1-year survival over a 67% baseline at the 2-sided 0.05 level If benefit were limited to the 25% of patients who are assay positive, the overall improvement in survival would have been 3.375% –4,025 patients/arm would have been required If assay-negative patients benefited half as much, 627 patients per arm would have been required
27
Gefitinib Two negative untargeted randomized trials in first-line advanced NSCLC –2,130 patients 10% of patients have EGFR mutations If only mutation-positive patients benefit, with a 20% increase in 1-year survival, then 12,806 patients/arm are needed in an untargeted trial For a trial targeted to patients with mutations, 138 patients/arm are needed
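The same 1/(effect)² scaling gives a rough sense of these numbers. This is only an approximation; the per-arm figures quoted on the slides come from exact calculations, so the values below are close but not identical.

```python
# Rough 1/(effect)^2 sample-size scaling applied to the two examples above
# (approximation only; the slides' per-arm numbers come from exact calculations).

def inflate(n_per_arm_full_effect, full_effect, diluted_effect):
    # Patients per arm grow roughly with the square of the effect dilution.
    return n_per_arm_full_effect * (full_effect / diluted_effect) ** 2

# Trastuzumab: a 13.5% absolute survival gain needs 234/arm; if only 25% of
# patients benefit, the overall gain is 13.5% * 0.25 = 3.375%.
print(round(inflate(234, 0.135, 0.135 * 0.25)))   # ~3,744 (slide: 4,025/arm)

# Gefitinib: a trial restricted to the ~10% of patients with EGFR mutations
# needs 138/arm; an untargeted trial sees only one tenth of the effect.
print(round(inflate(138, 0.20, 0.20 * 0.10)))     # ~13,800 (slide: 12,806/arm)
```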
28
Comparison of Targeted to Untargeted Design, Disease-Free Survival Endpoint
Treatment hazard ratio for marker-positive patients: 0.5
Number of events for targeted design: 74
Number of events for traditional design: 2040 (20% of patients marker positive), 720 (33%), 316 (50%)
Simon R, Development and Validation of Biomarker Classifiers for Treatment Selection, JSPI
29
Web Based Software for Comparing Sample Size Requirements http://linus.nci.nih.gov/brb/
34
Developmental Strategy (II) Develop a predictor of response to the new Rx Patients predicted responsive to the new Rx: randomize between new Rx and control Patients predicted non-responsive to the new Rx: also randomize between new Rx and control
35
Developmental Strategy (II) Do not use the diagnostic to restrict eligibility, but to structure a prospective analysis plan. Compare the new drug to the control overall for all patients, ignoring the classifier. –If the overall p-value is ≤ 0.04, claim effectiveness for the eligible population as a whole Otherwise perform a single subset analysis evaluating the new drug in the classifier-positive patients –If the subset p-value is ≤ 0.01, claim effectiveness for the classifier-positive patients.
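A minimal sketch of this pre-specified decision rule, assuming only the two fixed significance thresholds stated above (function and argument names are illustrative):

```python
# Pre-specified analysis plan for Design II: overall test at 0.04, then a single
# subset test at 0.01 only if the overall test fails.

def design_ii_decision(p_overall, p_subset, alpha_overall=0.04, alpha_subset=0.01):
    if p_overall <= alpha_overall:
        return "claim effectiveness for the eligible population as a whole"
    if p_subset <= alpha_subset:
        return "claim effectiveness for classifier-positive patients"
    return "no claim of effectiveness"

print(design_ii_decision(p_overall=0.06, p_subset=0.004))   # subset claim only
```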
36
This analysis strategy is designed to not penalize sponsors for having developed a classifier It provides sponsors with an incentive to develop genomic classifiers
37
Key Features of Design (II) The purpose of the RCT is to evaluate treatment T vs C overall and for the pre-defined subset; not to modify or refine the classifier or to re-evaluate the components of the classifier. This design assumes that the classifier is a binary classifier, not a "risk index"
38
The Roadmap 1.Develop a completely specified genomic classifier of the patients likely to benefit from a new drug 2.Establish reproducibility of measurement of the classifier 3.Use the completely specified classifier to design and analyze a new clinical trial to evaluate effectiveness of the new treatment with a pre-defined analysis plan.
39
Guiding Principle The data used to develop the classifier must be distinct from the data used to test hypotheses about treatment effect in subsets determined by the classifier –Developmental studies are exploratory and not closely regulated by FDA –Studies on which treatment effectiveness claims are to be based should be definitive studies that test a treatment hypothesis in a patient population completely pre-specified by the classifier
40
Development of Genomic Classifiers Single gene or protein based on knowledge of the therapeutic target Empirically determined based on evaluation of a set of candidate genes –e.g. EGFR assays Empirically determined based on genome-wide correlation of gene expression or genotype with patient outcome after treatment
41
Development of Genomic Classifiers During phase II development, or after a failed phase III trial using archived specimens, or adaptively during the early portion of a phase III trial.
42
Adaptive Signature Design An adaptive design for generating and prospectively testing a gene expression signature for sensitive patients Boris Freidlin and Richard Simon Clinical Cancer Research 11:7872-8, 2005
43
Adaptive Signature Design End of Trial Analysis Compare E to C for all patients at significance level 0.04 –If the overall H0 is rejected, then claim effectiveness of E for eligible patients –Otherwise
44
Otherwise: –Using only the first half of patients accrued during the trial, develop a binary classifier that predicts the subset of patients most likely to benefit from the new treatment E compared to control C –Compare E to C for patients accrued in the second stage who are predicted responsive to E based on the classifier Perform this test at significance level 0.01 If H0 is rejected, claim effectiveness of E for the subset defined by the classifier
45
Classifier Development Using data from stage 1 patients, fit a single-gene logistic model for each gene j = 1, …, M that includes the treatment indicator, the gene's log expression, and their interaction Select the genes whose treatment-by-expression interaction is significant at a stringent threshold t_i = treatment indicator for patient i (0,1) x_ij = log expression for gene j of patient i
46
Classification of Stage 2 Patients For the i'th stage 2 patient, selected gene j votes to classify the patient as preferentially sensitive to T if the fitted model for gene j predicts a sufficiently large treatment benefit at that patient's expression level x_ij
47
Classification of Stage 2 Patients Classify i’th stage 2 patient as differentially sensitive to T relative to C if at least G selected genes vote for differential sensitivity of that patient
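The classifier construction and voting rule just described can be sketched as follows. This is a simplified illustration, not the published algorithm: it assumes a binary outcome, keeps genes whose treatment-by-expression interaction p-value falls below a chosen threshold, and has a gene vote for a stage-2 patient when the fitted model predicts a positive treatment effect at that patient's expression level; the significance threshold, the voting cutoff G, and the exact voting criterion in Freidlin and Simon (2005) may differ.

```python
# Simplified sketch of the adaptive signature classifier (assumptions noted above).

import numpy as np
import statsmodels.api as sm

def select_genes(y, treat, expr, alpha=0.001):
    """y: binary outcome; treat: 0/1 treatment indicator; expr: stage-1 log expression (patients x genes)."""
    selected = []
    for j in range(expr.shape[1]):
        # Columns after add_constant: intercept, treatment, expression, treatment-by-expression interaction.
        X = sm.add_constant(np.column_stack([treat, expr[:, j], treat * expr[:, j]]))
        fit = sm.Logit(y, X).fit(disp=0)
        if fit.pvalues[3] < alpha:                              # interaction term significant
            selected.append((j, fit.params[1], fit.params[3]))  # gene index, treatment coef, interaction coef
    return selected

def classify_stage2(expr_new, selected, min_votes):
    """expr_new: one stage-2 patient's log expression vector; min_votes: G in the slides."""
    votes = sum(1 for j, b_treat, b_int in selected
                if b_treat + b_int * expr_new[j] > 0)           # vote if predicted treatment log-odds benefit > 0
    return votes >= min_votes
```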
48
Treatment effect restricted to subset: 10% of patients sensitive, 10 sensitivity genes, 10,000 genes, 400 patients.
Test | Power (%)
Overall 0.05 level test | 46.7
Overall 0.04 level test | 43.1
Sensitive subset 0.01 level test (performed only when overall 0.04 level test is negative) | 42.2
Overall adaptive signature design | 85.3
49
Overall treatment effect, no subset effect: 10,000 genes, 400 patients.
Test | Power (%)
Overall 0.05 level test | 74.2
Overall 0.04 level test | 70.9
Sensitive subset 0.01 level test | 1.0
Overall adaptive signature design | 70.9
50
Biomarker Adaptive Threshold Design Wenyu Jiang, Boris Freidlin & Richard Simon (Submitted for publication) http://linus.nci.nih.gov/brb
51
Biomarker Adaptive Threshold Design Randomized pivotal trial comparing new treatment E to control C Quantitative predictive biomarker B Survival or DFS endpoint
52
Biomarker Adaptive Threshold Design Have identified a univariate biomarker index B thought to be predictive of patients likely to benefit from E relative to C Eligibility not restricted by biomarker No threshold for biomarker determined Biomarker value scaled to range (0,1)
53
Procedure B
S(b) = log likelihood ratio statistic for the treatment effect in the subset of patients with B ≥ b
T = max{ S(0) + R, max_b S(b) }
Compute the null distribution of T by permuting the treatment labels
If the observed value of T is significant at the 0.05 level, reject the null hypothesis that E is ineffective
Compute point and interval estimates of the threshold b
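A sketch of the permutation test for Procedure B. For brevity it substitutes a squared two-sample z statistic on a continuous outcome for the Cox partial-likelihood ratio statistic that would be used with survival data, and the value of the tuning constant R is illustrative.

```python
# Permutation test for Procedure B (stand-in statistic; see note above).

import numpy as np

def subset_stat(outcome, treat, keep):
    y, t = outcome[keep], treat[keep]
    if t.sum() < 2 or (1 - t).sum() < 2:
        return 0.0
    diff = y[t == 1].mean() - y[t == 0].mean()
    se = np.sqrt(y[t == 1].var(ddof=1) / t.sum() + y[t == 0].var(ddof=1) / (1 - t).sum())
    return (diff / se) ** 2                                   # stand-in for the log likelihood ratio statistic

def procedure_b(outcome, treat, biomarker, cutpoints, R=2.2, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)

    def T(tr):
        s0 = subset_stat(outcome, tr, np.ones(len(tr), dtype=bool))       # S(0): all patients
        s_max = max(subset_stat(outcome, tr, biomarker >= b) for b in cutpoints)
        return max(s0 + R, s_max)                                         # T = max{S(0)+R, max_b S(b)}

    t_obs = T(treat)
    t_null = np.array([T(rng.permutation(treat)) for _ in range(n_perm)])
    return t_obs, float((t_null >= t_obs).mean())                         # statistic and permutation p-value
```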
55
Estimation of Threshold
58
Traditional Phase II Trials Estimate the proportion of tumors that shrink by 50% or more when the drug is administered singly or in combination to patients with advanced stage tumors of a specific primary site
59
Problem With Traditional Approach Phase II single agent responses do not predict well for phase III success of combination regimens –Drugs active as single agents may not contribute to activity of combinations Some drugs that have effectiveness in phase III did not produce many responses in single agent phase II trials –Some molecularly targeted drugs are thought to be cytostatic rather than cytotoxic
60
Preferred Phase II Evaluation Standard regimen ± new drug Time to progression endpoint
61
Design Approaches Single arm trial of standard + new drug using historical controls –Often uninterpretable Randomized phase II design –Standard ± new drug –Requires a larger sample size than a traditional oncology phase II trial Randomized discontinuation design –All patients started on new drug –Randomize patients with stable disease at a specified time –Requires a large total sample size –Not a phase III design
62
Seamless Phase II/III Trial Randomized comparison of standard treatment ± new drug Size the trial using the phase III (e.g. survival) endpoint Perform an interim futility analysis using the phase II endpoint (e.g. biomarker or PFS) –If treatment vs control results are not significant for the phase II endpoint, terminate accrual and do not claim any benefit of the new treatment –If results are significant for the intermediate endpoint, continue accrual and follow-up and do the analysis of the phase III endpoint at the end of the trial The interim analysis does not "consume" any of the significance level for the trial
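A minimal sketch of the seamless II/III decision rule described above; the interim futility threshold is illustrative, and the final survival test keeps the full 0.05 level because the futility look cannot produce a positive claim.

```python
# Seamless phase II/III decision rule (interim futility on PFS, final test on survival).

def seamless_ii_iii(p_interim_pfs, p_final_survival=None, alpha_interim=0.05, alpha_final=0.05):
    if p_interim_pfs > alpha_interim:
        return "terminate accrual for futility; no claim of benefit"
    if p_final_survival is None:
        return "continue accrual and follow-up to the final survival analysis"
    if p_final_survival <= alpha_final:
        return "claim survival benefit for the new treatment"
    return "complete trial; no claim of benefit"

print(seamless_ii_iii(p_interim_pfs=0.30))                          # futility stop
print(seamless_ii_iii(p_interim_pfs=0.01, p_final_survival=0.02))   # positive trial
```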
63
Developmental Strategies Compared Perform phase III survival trial without a phase II trial Perform phase III survival trial using a futility analysis on survival, without a phase II trial Perform a randomized phase II trial and, if significant for PFS, then perform the phase III survival trial Seamless II/III –Two-stage (i.e. suspend accrual during follow-up for PFS) –Interim analysis (do not suspend accrual)
64
No Treatment Effect on Survival or PFS
65
Treatment Effect on Survival and PFS
66
No Treatment Effect on Survival or PFS: Varying the Significance Threshold for the Interim Analysis (Seamless Interim Analysis Design)
67
Are Bayesian Methods More Adaptive than Frequentist Methods? With frequentist methods the algorithm for adaptiveness must be specified in advance –To limit the type I error to 5% With Bayesian methods all prior distributions must be specified in advance –Subjective prior distributions are acceptable for phase I/II trials but not for pivotal trials Who cares about your prior? Bayesian inference usually does not control type I error Bayesian methods can control the type I error only if the algorithm for adaptiveness is specified in advance
68
Are Bayesian Methods More Adaptive than Frequentist Methods? Frequentist designs can adaptively vary the sample size Frequentist designs can incorporate multi-arm trials Frequentist designs do not usually adaptively modify the randomization weight but could if analysis were based on permutational significance tests Bayesian designs that modify the randomization weight are generally not robust to time trends in prognostic factors
69
Are Bayesian Methods More Adaptive than Frequentist Methods? Adaptive Bayesian designs are most likely to be useful and acceptable in pre-pivotal trials
70
Collaborators Biometric Research Branch, NCI –Boris Freidlin –Sally Hunsberger –Wenyu Jiang –Aboubakar Maitournam –Yingdong Zhao FDA –Sue Jane Wang
71
Using Genomic Classifiers in Clinical Trials
Dupuy A and Simon R. Critical review of published microarray studies for clinical outcome and guidelines for statistical analysis and reporting. Journal of the National Cancer Institute 99:147-57, 2007.
Dobbin K and Simon R. Sample size planning for developing classifiers using high dimensional DNA microarray data. Biostatistics 8:101-117, 2007.
Simon R. Development and validation of therapeutically relevant predictive classifiers using gene expression profiling. Journal of the National Cancer Institute 98:1169-71, 2006.
Simon R. Validation of pharmacogenomic biomarker classifiers for treatment selection. Cancer Biomarkers 2:89-96, 2006.
Simon R. A checklist for evaluating reports of expression profiling for treatment selection. Clinical Advances in Hematology and Oncology 4:219-224, 2006.
Simon R, Lam A, Li MC, et al. Analysis of gene expression data using BRB-ArrayTools. Cancer Informatics 2:1-7, 2006.
72
Using Genomic Classifiers in Clinical Trials (continued)
Simon R and Maitournam A. Evaluating the efficiency of targeted designs for randomized clinical trials. Clinical Cancer Research 10:6759-63, 2004; Correction 12:3229, 2006.
Maitournam A and Simon R. On the efficiency of targeted clinical trials. Statistics in Medicine 24:329-339, 2005.
Simon R. When is a genomic classifier ready for prime time? Nature Clinical Practice – Oncology 1:4-5, 2004.
Simon R. An agenda for Clinical Trials: clinical trials in the genomic era. Clinical Trials 1:468-470, 2004.
Simon R. Development and validation of therapeutically relevant multi-gene biomarker classifiers. Journal of the National Cancer Institute 97:866-867, 2005.
Simon R. A roadmap for developing and validating therapeutically relevant genomic classifiers. Journal of Clinical Oncology 23:7332-41, 2005.
Freidlin B and Simon R. Adaptive signature design. Clinical Cancer Research 11:7872-78, 2005.
Simon R and Wang SJ. Use of genomic signatures in therapeutics development in oncology and other diseases. The Pharmacogenomics Journal 6:166-73, 2006.