An Update on Statistical Issues Associated with the International Harmonization of Technical Standards for Clinical Trials (ICH)
Robert O'Neill, Ph.D., Director, Office of Biostatistics, CDER, FDA
22nd Spring Symposium, New Jersey Chapter of ASA, Wednesday, June 6, 2001

Outline of talk
- International harmonization of technical standards: efficacy, safety, quality
- Statistics: where does it fit in?
- Resources: who are the people and what are the processes?
- A focus on a few ICH guidances of interest
- A few issues of particular statistical concern
- The future: where do we go from here?

Harmonization of technical standards
- ICH (Europe, Japan, United States)
- Began in 1989; ICH 1 in Brussels, 1991
- ICH continues today
- Outside of ICH:
  - APEC bridging study initiative, Taipei meeting
  - Canada, observers, WHO

Statistical resources in the ICH regions
- United States: CDER, CBER
- Europe: U.K., Germany, Sweden; CPMP
- Japan: MHW; advisors, universities
- China, Taiwan, Canada, Korea

Web addresses for information and guidances

ICH guidances with statistical content
- E1: Extent of population exposure to assess clinical safety
- E3: Structure and content of clinical study reports (CONSORT statement)
- E4: Dose-response information to support drug registration
- E5: Ethnic factors in the acceptability of foreign clinical data
- E9: Statistical principles for clinical trials
- E10: Choice of control group
- E11: Clinical investigation of medicinal products in the pediatric population

ICH guidances with statistical content (cont.)
- Safety: carcinogenicity
- Quality: stability (expiration dating): Q1A, Q1E

New initiatives from the European regulators (CPMP): Points to Consider documents
- On Validity and Interpretation of Meta-Analyses, and One Pivotal Study (January 2001)
- On Missing Data (April 2001)
- On Choice of Delta
- On Switching between Superiority and Non-inferiority
- On some multiplicity issues and related topics in clinical trials

Efficacy Working Party (EWP) Points to Consider
- CPMP/EWP/1776/99: Points to Consider on Missing Data (released for consultation January 2001)
- CPMP/EWP/2330/99: Points to Consider on Validity and Interpretation of Meta-Analyses, and One Pivotal Study (released for consultation October 2000)
- CPMP/EWP/482/99: Points to Consider on Switching between Superiority and Non-inferiority (adopted July 2000)

ICH E9 Statistical Principles for Clinical Trials: contents
- Introduction (purpose, scope, direction)
- Considerations for overall clinical development
- Study design considerations
- Study conduct
- Data analysis
- Evaluation of safety and tolerability
- Reporting
- Glossary of terms

Study design: a major focus of the guideline
- Prior planning
- Protocol considerations

Prospective planning
- Design of the trial
- Analysis of outcomes

Confirmatory study vs. exploratory study
- Confirmatory: a hypothesis stated in advance and evaluated
- Exploratory: data-driven findings

Design issues
- Endpoints
- Comparisons
- Choice of study type
- Choice of control group
  - Superiority
  - Non-inferiority
  - Equivalence
- Sample size
- Assumptions, sensitivity analysis
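The sample-size item above is where these design choices meet arithmetic. As a hedged illustration only (the response rate, margin, one-sided alpha, and power below are hypothetical planning assumptions, not recommendations from the talk), here is a minimal Python sketch of a per-arm sample size for a non-inferiority comparison of two proportions, assuming equal true response rates:

```python
# Minimal sketch, not from the talk: per-arm sample size for a non-inferiority
# comparison of two proportions, assuming the true response rates are equal.
# p, delta, alpha, and power are hypothetical planning assumptions.
from math import ceil
from statistics import NormalDist

def n_per_arm(p, delta, alpha=0.025, power=0.90):
    """Approximate n per arm so the lower CI bound excludes -delta with the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided alpha
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / delta ** 2)

print(n_per_arm(p=0.85, delta=0.10))  # rerun with other p and delta values
```

Re-running the calculation over a range of assumed response rates and margins is one concrete form of the "assumptions, sensitivity analysis" item on the same slide.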

Choice of study type
- Parallel group design
- Cross-over design
- Factorial design
- Multicenter design

Analysis: outcome assessment
- Multiple endpoints
- Adjustments

Assessing bias and robustness of study results
- Analysis sets

Analysis sets
- ITT principle
- All-randomized population
- Full analysis population
- Per-protocol population

Data analysis considerations
- Prespecification of the analysis
- Analysis sets
  - Full analysis set
  - Per-protocol set
- Roles of the different analysis sets
- Missing values and outliers

Statistical Analysis Plan (SAP)
- A more technical and detailed elaboration of the principal features stated in the protocol
- Detailed procedures for executing the statistical analysis of the primary and secondary variables and other data
- Should be reviewed and possibly updated during blind review, and finalized before breaking the blind
- Results from analyses envisaged in the protocol (including amendments) are regarded as confirmatory
- May be written as a separate document

Analysis sets
- The ideal: the set of subjects whose data are to be included in the analysis would be
  - all subjects randomized into the trial
  - satisfied entry criteria
  - followed all trial procedures perfectly
  - no loss to follow-up
  - complete data records

Full analysis set
- Used to describe the analysis set that is as complete as possible and as close as possible to the intention-to-treat principle
- It may be reasonable to eliminate, from the set of all randomized subjects, those who fail to take at least one dose or those without any data post-randomization
- Reasons for eliminating any randomized subject should be justified; the analysis is not complete unless the potential biases arising from exclusions are addressed and reasonably dismissed

Per-protocol set
- Sometimes described as: valid cases, efficacy sample, evaluable subjects
- Defines a subset of the subjects in the full analysis set
- May maximize the opportunity for a new treatment to show additional efficacy
- May or may not be conservative
- Bias arises when adherence to the protocol is related to treatment and/or outcome

Roles of the different analysis sets
- It is advantageous to demonstrate a lack of sensitivity of the principal trial results to alternative choices of the set of subjects analyzed
- The full analysis set and the per-protocol set play different roles in superiority trials and in equivalence or non-inferiority trials
- The full analysis set is the primary analysis in superiority trials: it avoids the optimistic efficacy estimate of a per-protocol analysis, which excludes non-compliers
- The full analysis set is not always conservative in an equivalence or non-inferiority trial
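A hedged simulation sketch of the last two bullets: when non-compliers in the control arm lose the benefit of their treatment, the full analysis set shrinks the apparent difference between arms, which is conservative for a superiority claim but can flatter a truly ineffective test drug in a non-inferiority or equivalence setting. All numbers below (response rates, compliance rate, sample size) are invented for illustration.

```python
# Hedged illustration only: simulated data showing that the full analysis set
# is not always conservative in an equivalence/non-inferiority setting.
# The test drug here is truly ineffective; 20% of each arm does not comply.
import random
random.seed(1)

N = 2000
p_treated, p_untreated = 0.70, 0.60   # response rate with vs. without effective treatment
results = {"test": [], "control": []}
compliant = {"test": [], "control": []}

for arm in ("test", "control"):
    for _ in range(N):
        comply = random.random() < 0.80
        # only compliant control patients actually receive an effective treatment;
        # the test drug confers no benefit whether taken or not
        p = p_treated if (arm == "control" and comply) else p_untreated
        results[arm].append(random.random() < p)
        compliant[arm].append(comply)

def rate(xs):
    return sum(xs) / len(xs)

itt_diff = rate(results["test"]) - rate(results["control"])
pp_diff = (rate([r for r, c in zip(results["test"], compliant["test"]) if c]) -
           rate([r for r, c in zip(results["control"], compliant["control"]) if c]))
print(f"full analysis set (ITT) difference: {itt_diff:+.3f}")   # closer to zero
print(f"per-protocol difference:            {pp_diff:+.3f}")    # shows the true deficit
```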

Impact on drug development
- On sponsor design and analysis of clinical trials used as evidence to support claims
- On regulatory advice and evaluation of sponsor protocols and completed clinical trials
- On maximizing the quality and utility of clinical studies in later phases of drug development
- On multidisciplinary understanding of key concepts and issues
- Enhanced attention to planning and protocol considerations

Will the guideline help to avoid problem areas in the future? Maybe!
- It is not a substitute for professional advice; it will require professional understanding and implementation of the principles stated
- It will not assure correct analysis and interpretation
- Most of the guideline topics reflect areas where problems have been observed frequently in clinical trials in drug development

ICH: Chemistry
- Q1E: Bracketing and Matrixing Designs for Stability Testing of Drug Substances and Drug Products
- Considerable new work, including extensive simulations to evaluate the size of studies and the ability to detect changes important to expiration-date setting (incomplete blocks, aliasing, etc.)
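For orientation only, a hedged sketch of the regression idea that underlies Q1E shelf-life (expiration-date) setting: the proposed shelf life is the earliest time at which the one-sided 95% lower confidence bound for the mean assay crosses the acceptance limit. This does not illustrate the bracketing/matrixing designs themselves, and the stability data and limit below are invented.

```python
# Hedged sketch of the ICH Q1E regression approach to shelf-life estimation.
# Stability data (months, % label claim) and the acceptance limit are invented.
import math

times = [0, 3, 6, 9, 12, 18]                      # months
assay = [100.1, 99.6, 99.0, 98.7, 98.1, 97.2]      # % of label claim
limit = 95.0                                       # lower acceptance criterion
t_crit = 2.132                                     # one-sided t(0.95, df = n - 2) for n = 6

n = len(times)
xbar, ybar = sum(times) / n, sum(assay) / n
sxx = sum((x - xbar) ** 2 for x in times)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, assay)) / sxx
intercept = ybar - slope * xbar
s2 = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(times, assay)) / (n - 2)

def lower_bound(t):
    """One-sided 95% lower confidence bound for the mean assay at time t."""
    fit = intercept + slope * t
    se = math.sqrt(s2 * (1 / n + (t - xbar) ** 2 / sxx))
    return fit - t_crit * se

# shelf life = last month before the lower confidence bound falls below the limit
shelf_life = next((m for m in range(1, 121) if lower_bound(m) < limit), 121) - 1
print(f"estimated shelf life: about {shelf_life} months")
```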

ICH E10: Choice of Control Group and Related Design Issues in Clinical Trials
- Section 1.5 is very statistically oriented, involving issues such as:
  - Assay sensitivity
  - Historical evidence of sensitivity to drug effects
  - Choice of a margin for a non-inferiority ("don't show a difference") trial

Assay sensitivity in non-inferiority designs
- Assay sensitivity is a property of a clinical trial, defined as the ability to distinguish an effective treatment from a less effective or ineffective treatment
- This property is more than just the statistical power of a study to demonstrate an effect; it also depends on the conduct and circumstances of the trial

The presence of assay sensitivity in a non-inferiority trial may be deduced from two determinations:
1) Historical evidence of sensitivity to drug effects, i.e., that similarly designed trials in the past regularly distinguished effective treatments from less effective or ineffective treatments, and
2) Appropriate trial conduct, i.e., that the conduct of the current trial did not undermine its ability to distinguish effective treatments from less effective or ineffective treatments. [This can be fully evaluated only after the active-control non-inferiority trial is completed.]

Successful use of a non-inferiority trial thus involves four critical steps:
1) Determining that historical evidence of sensitivity to drug effects exists. Without this determination, demonstration of efficacy from a showing of non-inferiority is not possible and should not be attempted.
2) Designing a trial. Important details of the trial design, e.g. study population, concomitant therapy, endpoints, and run-in periods, should adhere closely to the design of the placebo-controlled trials for which historical sensitivity to drug effects has been determined.

Successful use of a non-inferiority trial thus involves four critical steps (cont.):
3) Setting a margin. An acceptable non-inferiority margin should be defined, taking into account the historical data and relevant clinical and statistical considerations.
4) Conducting the trial. The trial conduct should also adhere closely to that of the historical trials and should be of high quality.

Choosing the non-inferiority margin
- Prior to the trial, a non-inferiority margin, sometimes called a delta, is selected.
- This margin is the degree of inferiority of the test treatment to the control that the trial will attempt to exclude statistically.
- The margin chosen cannot be greater than the smallest effect size that the active drug would be reliably expected to have compared with placebo in the setting of the planned trial. [It is based on both statistical reasoning and clinical judgement, and should reflect uncertainties in the evidence and be suitably conservative.]
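One common way to operationalize the "smallest effect size that the active drug would be reliably expected to have" bullet is a fixed-margin construction: take a conservative (lower confidence bound) estimate of the historical control-versus-placebo effect and retain only a clinically chosen fraction of it. This is offered as a hedged sketch of that general idea, not as the method the talk endorses; the historical estimate, its standard error, and the preserved fraction are hypothetical.

```python
# Hedged sketch of a fixed-margin construction for the non-inferiority margin.
# hist_effect, hist_se, and fraction_preserved are hypothetical inputs.

hist_effect = 0.15     # pooled historical effect of control vs. placebo (risk difference)
hist_se = 0.03         # standard error of that pooled estimate

M1 = hist_effect - 1.96 * hist_se    # conservative (lower 95% bound) control effect
fraction_preserved = 0.5             # clinical judgement: keep at least half of M1
delta = fraction_preserved * M1      # candidate non-inferiority margin

print(f"conservative control effect M1 = {M1:.3f}")
print(f"non-inferiority margin delta   = {delta:.3f}")
```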

Outline of the issues
- What is the non-inferiority design?
- What are the various objectives of the design?
- Complexities in choosing the margin of treatment effect: it depends upon the strength of evidence for the treatment effect of the active control
- Literature on historical controls, and on the heterogeneity of treatment effects among studies
- The statistical approaches to each objective, and their critical assumptions
- Cautions and concluding remarks

Non-inferiority design
- A study design used to show that a new treatment produces a therapeutic response that falls below that of a proven treatment (active control) by no more than a pre-specified amount, from which it is then inferred that the new treatment is effective. The new treatment could be similarly effective or more effective than the existing proven treatment.
- A non-inferiority margin Δ is pre-selected as the allowable reduction in therapeutic response. The margin Δ is chosen based on the historical evidence of the efficacy of the active control and other clinical and statistical considerations relevant to the new treatment and the current study.
- ICH E10: "This delta can not be greater than the smallest effect size that the active drug would be reliably expected to have compared with placebo in the setting of a planned trial." The key concept: being able to demonstrate, reliably and repeatedly, a treatment effect of a specified size!

Non-inferiority design (cont'd)
- A test treatment is declared clinically non-inferior to the active control if:
  - the trial has the necessary assay sensitivity to be valid for non-inferiority testing, and
  - the one-sided 97.5% confidence interval for the treatment difference (test minus control) lies entirely to the right of -Δ
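A minimal sketch of the decision rule in the last bullet for a response-rate endpoint: declare non-inferiority when the lower bound of the two-sided 95% (equivalently, one-sided 97.5%) confidence interval for the difference (test minus control) lies above -Δ. The event counts and the margin are hypothetical, and a simple Wald interval is used for brevity.

```python
# Hedged sketch of the non-inferiority decision rule for two proportions.
# Event counts and the margin delta are hypothetical; Wald CI used for brevity.
from math import sqrt

def non_inferiority(x_t, n_t, x_c, n_c, delta, z=1.96):
    """Lower bound of the 95% CI for the (test - control) response-rate difference."""
    p_t, p_c = x_t / n_t, x_c / n_c
    diff = p_t - p_c
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    lower = diff - z * se
    return diff, lower, lower > -delta

diff, lower, shown = non_inferiority(x_t=420, n_t=500, x_c=430, n_c=500, delta=0.10)
print(f"difference = {diff:+.3f}, 95% CI lower bound = {lower:+.3f}, "
      f"non-inferiority {'shown' if shown else 'not shown'}")
```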

Inference for non-inferiority
[Figure: 95% confidence intervals for the treatment difference plotted against the -Δ limit and 0, on an axis running from "control better" to "test agent better". An interval lying entirely above -Δ shows non-inferiority; an interval crossing -Δ does not show non-inferiority; an interval lying entirely above 0 shows non-inferiority and could also support a superiority claim.]

What are the various objectives of the non-inferiority design?
- To prove efficacy of the test treatment by indirect inference from the active control treatment
- To establish a similarity of effect to a known, very effective therapy (e.g. anti-infectives)
- To infer that the test treatment would have been superior to an "imputed placebo", i.e. had a placebo group been included for comparison in the current trial; a new and controversial area where the choice of margin is the key

What is the evidence supporting the treatment effect of the active control, and how convincing is it?
- Large treatment effects vs. small or modest effects
  - Large treatment effects: anti-infectives
  - Modest treatment effects: difficulties in reliably demonstrating the effect (sensitivity to drug effects)
- Amount of prior study data available to estimate an effect
  - One single study
  - Several studies, of different sizes and quality
  - No estimate or study directly on the comparator (standard of care)

How is the margin “  “ chosen based upon prior study data u For a large treatment effect, it is easier - a clinical decision of how similar a response rate is needed to justify efficacy of a test treatment - e.g. anti-infectives is an example. u For modest and variable effects, it is more difficult ; and some approaches suggest margin selection based upon several objectives.

Complexities in choosing the margin (how much of the control treatment effect to give up)
- Margins can be chosen depending upon which of these questions is addressed:
  - How much of the treatment effect of the comparator should be preserved in order to indirectly conclude that the test treatment is effective? A clinical decision for very large effects; a statistical problem for small and modest effects.
  - How much of a treatment effect would one require for the test treatment to be superior to placebo, had a placebo been used in the current active-control study? A lesser standard than the above.

How convincing is the prior evidence of a treatment effect?
- Do clinical trials of the comparator treatment consistently and reliably demonstrate a treatment effect? When they do not, what is the reason?
  - The study is too small to detect the effect: underpowered for a modest effect size
  - The treatment effect is variable, and the estimate of its magnitude will vary from study to study, sometimes with NO effect in a given study: a BIG problem for active-controlled studies (sensitivity to drug effects)
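To make the "underpowered for a modest effect size" point concrete, here is a hedged sketch of the approximate power of a two-arm trial to detect a modest difference in response rates; the rates and sample size are hypothetical.

```python
# Hedged sketch: approximate power of a two-arm comparison of proportions.
# Response rates and per-arm sample size are hypothetical.
from math import sqrt
from statistics import NormalDist

def power(p1, p2, n_per_arm, alpha=0.05):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return 1 - NormalDist().cdf(z_a - abs(p1 - p2) / se)

# a real but modest effect (0.60 vs. 0.66) with 150 per arm: power is only about 0.19,
# so such a trial will often fail to distinguish the active control from no effect
print(f"{power(0.60, 0.66, n_per_arm=150):.2f}")
```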

How do you know which treatment effect size is appropriate for the current active control? How much protection should be built into the choice of the margin to account for unknown bias and uncertainty in study differences?

Inherently, the answer relies upon historical controls and their applicability to the current study
- The choice of the margin should take into account all sources of variability as well as the potential biases associated with non-comparability of the current study with the historical comparisons.
- There is a need to balance the building-in of "bias" in the comparison against quantifying the "amount of treatment effect preserved", as a function of the relative amount of data from the historical studies and the current study.

Use of historical controls in current RCTs
- Pocock, S. The combination of randomized and historical controls in clinical trials. J. Chronic Diseases 1976; 29.
- Lists six conditions to be met for valid use of historical controls alongside the controls in the current trial
- "Only if all these conditions are met can one safely use the historical controls as part of a randomized trial. Otherwise, the risk of a substantial bias occurring in treatment comparisons cannot be ignored."

Importance of the assumption of constancy of the active control treatment effect derived from historical studies
- It is relevant to the design and sample size of the current study, to the choice of the margin, to the amount of bias built into the comparisons, to the amount of effect size one can preserve (both of these are likely confounded), and to the statistical uncertainty of the conclusion.
- Before deciding how much of the effect to preserve, one should estimate an effect size for which there is evidence of a consistent, demonstrated effect of that size.

Explaining heterogeneity among independent studies: lessons from meta-analyses
- Variation in baseline risk as an explanation of heterogeneity in meta-analysis. S.D. Walter, Stat. in Medicine, 16 (1997)
- An empirical study of the effect of the control rate as a predictor of treatment efficacy in meta-analysis of clinical trials. Schmid, Lau, McIntosh and Cappelleri, Stat. in Medicine, 17 (1998)

Explaining heterogeneity among independent studies: lessons from meta-analyses (cont.)
- Explaining heterogeneity in meta-analysis: a comparison of methods. Thompson and Sharp, Stat. in Medicine, 18 (1999)
- Assessing the potential for bias in meta-analysis due to selective reporting of subgroup analyses within studies. Hahn, Williamson, Hutton, Garner and Flynn, Stat. in Medicine, 19 (2000)

Explaining heterogeneity among independent studies: lessons from meta-analyses (cont.)
- Large trials vs. meta-analysis of smaller trials: how do their results compare? Cappelleri, Ioannidis, Schmid, de Ferranti, Aubert, Chalmers, Lau. JAMA, 1996
- Discordance between meta-analysis and large-scale randomized controlled trials: examples from the management of acute myocardial infarction. Borzak and Ridker, Ann. Internal Med., 123 (1995)
- Discrepancies between meta-analysis and subsequent large randomized controlled trials. LeLorier, Gregoire, Benhaddad, Lapierre, Derderian. NEJM, 337 (1997)

Use of meta-analysis: necessary but not sufficient
- Distinguish under-powered studies from well-powered studies for a common effect size, if possible
- How many trials are consistent with no effect, rather than with an effect of some size?
- Determine between-trial variability as an additional factor to consider in choosing a conservative margin
- How do you know whether the current study comes from the same trial population, and where it sits in the trial distribution? This is critical to the assumptions about the control-group rate and the constancy of the treatment effect
- Resorting to a meta-analysis of all studies, when few individual studies reject the null, tells you something!
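As a hedged sketch of the between-trial variability point, the DerSimonian-Laird random-effects calculation below pools invented historical trial estimates and reports the between-trial variance tau²; a large tau² argues for a more conservative margin and for caution about the constancy assumption.

```python
# Hedged sketch: DerSimonian-Laird random-effects summary of historical
# control-vs-placebo trials.  The per-trial estimates and SEs are invented.
import math

effects = [0.18, 0.10, 0.22, 0.05, 0.15]     # per-trial effect estimates
ses     = [0.05, 0.06, 0.07, 0.05, 0.04]     # per-trial standard errors

w  = [1 / s ** 2 for s in ses]               # fixed-effect (inverse-variance) weights
fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
Q  = sum(wi * (ei - fe) ** 2 for wi, ei in zip(w, effects))
k  = len(effects)
c  = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)           # between-trial variance

w_re = [1 / (s ** 2 + tau2) for s in ses]    # random-effects weights
pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
pooled_se = math.sqrt(1 / sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f})")
```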

Three approaches to the problem
1) Indirect confidence interval comparison (ICIC) (CBER/FDA-type method, etc.): thrombolytic agents in the treatment of acute MI
2) Virtual method (Hasselblad & Kong, Fisher, etc.): clopidogrel, aspirin, placebo
3) Bayesian approach (Gould, Simon, etc.): treatment of unstable angina and non-Q-wave MI
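A hedged sketch of the first approach listed, the indirect confidence-interval comparison: the worst plausible loss of the test drug relative to the control (upper confidence bound) is compared with the smallest plausible historical effect of the control over placebo (lower confidence bound). All estimates below are invented for illustration.

```python
# Hedged sketch of the indirect confidence-interval comparison (ICIC) idea.
# All estimates and standard errors below are invented.

# 95% CI for (control - test) from the current active-control trial
ct_diff, ct_se = 0.02, 0.022
ct_upper = ct_diff + 1.96 * ct_se       # worst plausible loss relative to control

# 95% CI for (control - placebo) from the historical trials (e.g. a meta-analysis)
cp_diff, cp_se = 0.15, 0.03
cp_lower = cp_diff - 1.96 * cp_se       # smallest plausible historical control effect

# If the plausible loss vs. control is smaller than the assured control-vs-placebo
# effect, the test treatment is indirectly inferred to beat an imputed placebo.
print(f"upper bound of loss vs. control:    {ct_upper:.3f}")
print(f"lower bound of control vs. placebo: {cp_lower:.3f}")
print("efficacy vs. imputed placebo supported:", ct_upper < cp_lower)
```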

When may it not be possible to estimate a margin or to use the non-inferiority design to infer efficacy?
- When there is a known creep over time in the standard of care and/or the active control treatment, which renders any past estimates of the active control treatment effect not comparable or valid for the current comparison under the conditions of medical practice in the new study
- e.g. the use of surfactants in neonatal treatment

ICH E5 Ethnic Factors in the Acceptability of Foreign Clinical Data

Key features of E5
- Operational definition of ethnic factors
- Clinical data package fulfilling regulatory requirements in the new region
- Extrapolation of foreign clinical data to the new region (role of ethnic factors)
- Bridging studies
- Global development strategies

Ethnic factor definition
- Intrinsic factors: characteristics associated with the drug recipient (ADME studies)
  - race, age, gender, organ dysfunction, genetic polymorphism
- Extrinsic factors: characteristics associated with the environment and culture in which one lives (clinical outcomes)
  - clinical trial conduct, diet, tobacco and alcohol use, compliance with prescribed medications

Assessing a medicine's sensitivity to ethnic factors (part of the screening process)
- Properties of a compound making it more likely to be sensitive:
  - Metabolism by enzymes known to show genetic polymorphism
  - High likelihood of use in a setting of multiple co-medications

Assessment of the Clinical Data Package (CDP) for acceptability
- Question 1: Meets regulatory requirements? yes/no
- Question 2: Extrapolation of foreign data appropriate? yes/no
- Question 3: Further clinical study(ies) needed for acceptability by the new region? yes/no
- Question 4: Acceptability in the new region? yes/no

Meets regulatory requirements
- Issues of evidence
  - Confirmatory evidence: two or more studies showing treatment effects
  - Interpreting results of the foreign clinical trials which provide that evidence (may be one study, all studies, or part of a study)
- Which study designs provide evidence?
  - Active control / non-inferiority designs
  - Placebo or active control / show-a-difference designs

The sources of data for an application (implementation)
- All clinical studies for efficacy performed in the foreign region
- One study in the United States, one or more foreign clinical studies
- Multi-center / multi-region clinical trials form the basis for efficacy

Considerations for evaluating clinical efficacy between regions
- Study design differences
- Magnitude of treatment effect sizes
- Effect size variability; subgroup differences
- Impact of intrinsic factors (determined when?)
- Impact of extrinsic factors
  - trial conduct and monitoring
  - usage of concomitant medications
  - protocol adherence
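One simple quantitative companion to the "effect size variability; subgroup differences" item is a treatment-by-region interaction test on the regional effect estimates. This is a hedged sketch only, with hypothetical regional estimates and standard errors.

```python
# Hedged sketch: a z-test for treatment-by-region interaction.
# Regional effect estimates and standard errors are hypothetical.
from math import sqrt
from statistics import NormalDist

effect_a, se_a = 0.14, 0.04   # treatment effect estimated in region A
effect_b, se_b = 0.08, 0.05   # treatment effect estimated in region B

z = (effect_a - effect_b) / sqrt(se_a ** 2 + se_b ** 2)
p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
print(f"interaction z = {z:.2f}, p = {p:.2f}")  # a non-significant test does not prove consistency
```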

Bridging studies
- When?
- Why?
- What type?
- E5 is purposely vague on how to do this and on what the design of a bridging study should be

Study design and study objectives (need examples and experience)
- What type of bridging study would be helpful for extrapolation?
  - PK/PD
  - Another clinical trial of the primary clinical endpoint
  - Equivalence/non-inferiority: treatment effect acceptably close (margin or delta)
  - Dose-response study
  - Superiority design: estimate the treatment effect size for comparison

E5 allows for a new study in the new region: why is that needed?
- When all the clinical data are derived from a foreign region and extrapolation is an issue
- When the experience with clinical trials in that region is minimal
- When there is concern about the ability to confirm a finding from a study(ies)
- A confirmatory clinical trial is the bridging study

Developmental strategies for global development
- Early vs. later strategies
- Designing population PK/PD into clinical studies
- Planning to explain effect-size differences among regions
- Design of bridging studies early in development

Study design
- Better planning in Phases I, II, and III, and more efficient study designs to address several subgroup questions simultaneously
- Design Phase III with some knowledge of the PK/PD differences seen in Phases I/II
- Address multiple questions simultaneously for efficiency (age, gender, ethnicity)

Study design (cont.)
- Assess the influence of ethnic factors in each study phase (I, II, III), to identify it earlier and account for it by design
- Ethnic factors as another subgroup
  - age, gender, renal status, etc.
- Ethnic factors integrated with
  - dose response
  - geriatrics
  - population exposure for safety

Remarks
- Little experience at this time with bridging studies
- Little experience with Japanese trials in NDA applications, or with trials from Asia
- More experience with foreign trials from Europe; possible heterogeneity of treatment effects being evaluated; concern about experience in new regions such as Eastern Europe

The future
- Appears to be increasingly dependent on statistical input, methods, study design, interpretation, etc.
- Statistical resources (people) are needed in the regulatory agencies of all countries/regions serious about inference; they are not always present or maintained, and one cannot develop guidance documents and consensus positions without them, nor rely on guidances alone
- Global drug development is beginning to recognize the need for early planning for multi-regional inference; the questions and study designs are just unfolding