
1 Analytical Similarity Assessment: Practical Challenges and Statistical Perspectives
Richard Montes, Ph.D.
Hospira, a Pfizer company, Biosimilars Pharmaceutical Sciences (Statistics)

2 Presentation Outline
- Background on demonstration of biosimilarity
- Tier 1 (Equivalence Testing): issues in setting equivalence margins, unequal variances, imbalanced sample sizes
- Tier 2 (Quality Range Assessment): proposed algorithm to derive the k multiplier

3 FDA recommends a stepwise approach to demonstrate biosimilarity
Scientific Considerations in Demonstrating Biosimilarity to a Reference Product (2015): "Totality of Evidence"

4 Structural and Functional Characterization
Characterization spans six categories, each covered by multiple methods: I. Functional Activity; II. Primary Structure; III. Higher Order Structure; IV. Post-Translational Modifications; V. Product-Related Substances and Impurities; VI. Drug Product Characteristics.

5 Determination of Critical Quality Attributes (CQA) and Tiered Ranking
Tiered ranking is based on K criteria (e.g., attribute impact on biologic activity, relative abundance, etc.), relevance to mechanism of action (MOA), and amenability to quantitative statistical analyses.

6 FDA Framework for Tiered Statistical Analysis of Attributes Used in Biosimilarity Assessment
Tier 1: MOST RELEVANT to mechanism of action (MOA), function of product, or clinical effects. Statistical treatment: formal equivalence testing.
Tier 2: POTENTIALLY RELEVANT to MOA, function of product, or clinical effects. Statistical treatment: evaluation versus reference product quality ranges.
Tier 3: LEAST RELEVANT to MOA, function of product, or clinical effects, or not amenable to quantitative comparisons. Statistical treatment: raw data and graphical comparison.
Chow, S.C. (2014). On Assessment of Analytical Similarity in Biosimilar Studies. Drug Des 3:3.

7 Tier 1: Equivalence Testing (T = Biosimilar, R = Reference)
A. Two One-Sided Tests (TOST)
Null (inequivalence): μ_T − μ_R ≤ θ_L or μ_T − μ_R ≥ θ_U
Alternate (equivalence): θ_L < μ_T − μ_R < θ_U
Conclude EQUIVALENCE if both null hypotheses are rejected.
B. Confidence Interval approach
Conclude EQUIVALENCE if the 90% CI of the mean difference lies within [θ_L, θ_U].
CHALLENGE: How to select the equivalence margins [θ_L, θ_U]?
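As a minimal sketch of the confidence-interval form of the test (assuming normally distributed lot values with equal variances; the function name, margins, and data are illustrative assumptions, not from the slides):

    import numpy as np
    from scipy import stats

    def tost_ci(biosimilar, reference, theta_L, theta_U, alpha=0.05):
        """Equivalence if the 90% (= 1 - 2*alpha) CI of the mean difference
        (T - R) lies entirely within [theta_L, theta_U]."""
        t, r = np.asarray(biosimilar, float), np.asarray(reference, float)
        diff = t.mean() - r.mean()
        # pooled-variance standard error (equal-variance assumption)
        sp2 = ((len(t) - 1) * t.var(ddof=1) + (len(r) - 1) * r.var(ddof=1)) / (len(t) + len(r) - 2)
        se = np.sqrt(sp2 * (1 / len(t) + 1 / len(r)))
        tcrit = stats.t.ppf(1 - alpha, df=len(t) + len(r) - 2)
        lo, hi = diff - tcrit * se, diff + tcrit * se
        return (lo > theta_L) and (hi < theta_U), (lo, hi)

    # Illustrative use with made-up potency values (percent of label claim)
    rng = np.random.default_rng(1)
    ref_lots = rng.normal(100, 3, size=10)
    bio_lots = rng.normal(101, 3, size=7)
    print(tost_ci(bio_lots, ref_lots, theta_L=-4.5, theta_U=4.5))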

8 Tier 2: Quality Range Assessment
Quality assessment based on a control-charting concept.
1. Establish the Quality Range from Reference lots (mean_R ± k·SD_R).
2. If a large percentage (e.g., 90%) of Biosimilar lots fall within the Quality Range, conclude HIGHLY SIMILAR.
CHALLENGE: How to select k?
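A sketch of this check under the usual reading of the Quality Range as the reference mean ± k·SD (k = 3 and the 90% pass criterion mirror the slides; the function name and data are made up):

    import numpy as np

    def quality_range_pass(biosimilar, reference, k=3.0, required_fraction=0.90):
        """Tier 2: fraction of Biosimilar lots inside mean_R +/- k*SD_R."""
        r = np.asarray(reference, float)
        b = np.asarray(biosimilar, float)
        lower = r.mean() - k * r.std(ddof=1)
        upper = r.mean() + k * r.std(ddof=1)
        inside = np.mean((b >= lower) & (b <= upper))
        return inside >= required_fraction, inside, (lower, upper)

    ref_lots = [99.1, 100.4, 98.7, 101.2, 100.0, 99.5, 100.8]
    bio_lots = [100.9, 101.5, 99.8, 100.2, 102.0, 101.1]
    print(quality_range_pass(bio_lots, ref_lots))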

9 Tier 3: Qualitative Assessment
- Graphical and tabular descriptive statistics
- Raw analytical measurement outputs (e.g., chromatograms)
- No formal quantitative statistical analyses

10 Limit Tier 1 attributes to those most relevant to mechanism of action (MOA)
A pairwise correlation matrix of the various functional assays shows that In Vivo Biopotency is most relevant to MOA and that some functional assays overlap (i.e., are not orthogonal).
Rank as Tier 1: In Vivo Biopotency; In Vitro Specific Activity = In Vitro Biopotency / Protein.
Rank as Tier 2 or 3: all others.

11 Practical challenges / statistical considerations – Basis for setting Equivalence Margins

12 Effect Size* as an alternative to Mean Difference
MEAN DIFFERENCE
Null (inequivalence): μ_T − μ_R ≤ θ_L or μ_T − μ_R ≥ θ_U
Alternate (equivalence): θ_L < μ_T − μ_R < θ_U
EFFECT SIZE* (standardized mean difference)
Null (inequivalence): (μ_T − μ_R)/σ_R ≤ θ_L or (μ_T − μ_R)/σ_R ≥ θ_U
Alternate (equivalence): θ_L < (μ_T − μ_R)/σ_R < θ_U
* Burdick et al. (DIA Statistics Forum 2015)
In practice, …
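As a worked illustration (the ±1.5 margin and the SD value are assumptions, not taken from the slides): if the margins are set on the effect-size scale at θ_U = −θ_L = 1.5, they correspond to mean-difference margins of ±1.5·σ_R, so a reference SD of 3 potency units would give mean-difference margins of ±4.5 units.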

13 Power and Type I error curves for the case of equal sample sizes and equal variances.

14 Practical challenges / statistical considerations – Unequal Product Variances

15 T = Biosimilar, R = Originator. A post-approval commitment to implement control strategies to reduce biosimilar variability may address this.

16 Sample splitting: use a subset of the sample to establish the margins, and use the remainder for TOST.
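One way to sketch the splitting idea (the split fraction and the margin multiplier f are illustrative assumptions; tost_ci refers to the helper sketched under slide 7):

    import numpy as np

    rng = np.random.default_rng(7)
    reference = rng.normal(100, 3, size=12)    # made-up reference lot values
    biosimilar = rng.normal(100.5, 3, size=8)  # made-up biosimilar lot values

    # Split the reference sample: one subset sets the margins, the rest is tested
    rng.shuffle(reference)
    margin_set, test_set = reference[:4], reference[4:]

    f = 1.5                                    # assumed effect-size multiplier
    theta = f * margin_set.std(ddof=1)         # margin = f * estimated sigma_R
    print(tost_ci(biosimilar, test_set, theta_L=-theta, theta_U=theta))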

17 Power and Type I error curves.

18 Power and Type I error curves. Using imbalanced sample sizes can unduly influence the similarity conclusion.

19 Use n_R* = min(1.5·n_T, n_R). Power and Type I error curves.
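A small sketch of that cap, randomly subsampling the reference lots before testing (whether to round up is an assumption):

    import numpy as np

    def cap_reference_lots(reference, n_T, rng):
        """Subsample reference lots down to n_R* = min(1.5*n_T, n_R)."""
        n_R_star = min(int(np.ceil(1.5 * n_T)), len(reference))
        return rng.choice(reference, size=n_R_star, replace=False)

    rng = np.random.default_rng(3)
    reference = rng.normal(100, 3, size=30)    # many more reference lots than biosimilar lots
    print(len(cap_reference_lots(reference, n_T=8, rng=rng)))  # 12 = min(12, 30)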

20 Summary on Tier 1 Assessment

21 FDA requires justification of the selected k
FDA cited assay variability and the nature and criticality of the attribute as determinants of k, but gives no prescriptive approach.
Tsong et al. (1) recommended that k can be chosen from 2 to 3 based on the targeted coverage (k = 2 for 95%; k = 2.5 for 99%; k = 3 for 99.7%).
Currently, 90% of biosimilar lots are required to be covered by the quality range.
(1) Tsong et al., "Development of Statistical Approaches for Analytical Biosimilarity Evaluation". DIA Statistics Forum 2015.
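A quick check of the normal-population coverage implied by each recommended k (coverage = 2Φ(k) − 1, ignoring estimation error in the reference mean and SD), which roughly matches the stated percentages:

    from scipy.stats import norm

    for k in (2.0, 2.5, 3.0):
        print(k, round(2 * norm.cdf(k) - 1, 4))  # 0.9545, 0.9876, 0.9973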

22 Issues with the Tier 2 framework

23 Proposed customized algorithm to derive k (Part I and Part II)

24 Part I: Simulation of lots and Tier 2 assessment, repeated 10,000 times. Continue to Part II.

25 Part II: continued from Part I.
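The slides do not spell out Parts I and II, so the following is only a rough guess at the shape of such an algorithm: simulate reference and biosimilar lots under an assumed mean shift and variance ratio, repeat the Tier 2 assessment many times, and take the smallest k whose pass rate meets a target confidence; every parameter and function name here is an illustrative assumption:

    import numpy as np

    def pass_rate(k, n_R, n_T, mean_shift, var_ratio, n_sim=10_000, seed=0):
        """Fraction of simulated trials in which >= 90% of Biosimilar lots
        fall inside mean_R +/- k*SD_R (reference scaled to mean 0, SD 1)."""
        rng = np.random.default_rng(seed)
        passes = 0
        for _ in range(n_sim):
            ref = rng.normal(0.0, 1.0, n_R)                        # reference lots
            bio = rng.normal(mean_shift, np.sqrt(var_ratio), n_T)  # biosimilar lots
            lo = ref.mean() - k * ref.std(ddof=1)
            hi = ref.mean() + k * ref.std(ddof=1)
            passes += np.mean((bio >= lo) & (bio <= hi)) >= 0.90
        return passes / n_sim

    def smallest_k(target, **scenario):
        """Part II sketch: smallest k on a grid whose pass rate reaches the target."""
        for k in np.arange(1.5, 6.01, 0.1):
            if pass_rate(k, **scenario) >= target:
                return round(float(k), 1)
        return None

    # n_sim reduced from the 10,000 in the slides to keep the example quick
    print(smallest_k(target=0.99, n_R=10, n_T=7, mean_shift=0.5, var_ratio=1.0,
                     n_sim=2000))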

27 Selected k for different mean shifts at 99% confidence and 90% proportion, indexed against the multiplier for a 99% confidence / 90% proportion tolerance interval.

28 Selected k for different mean shifts at 95% confidence and 90% proportion, indexed against the multiplier for a 95% confidence / 90% proportion tolerance interval.
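For the tolerance-interval baseline referred to on these two slides, the two-sided normal tolerance factor can be approximated with Howe's method; the sample size of 10 reference lots below is an assumption, not stated in the slides:

    import numpy as np
    from scipy.stats import chi2, norm

    def tolerance_k(n, proportion=0.90, confidence=0.99):
        """Howe's approximation to the two-sided normal tolerance factor."""
        z = norm.ppf((1 + proportion) / 2)
        df = n - 1
        return z * np.sqrt(df * (1 + 1 / n) / chi2.ppf(1 - confidence, df))

    for conf in (0.99, 0.95):
        print(conf, round(tolerance_k(n=10, proportion=0.90, confidence=conf), 2))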

29 Proposed algorithm is more flexible than a traditional tolerance interval

30 Algorithm adapted to attribute criticality and variance inequality (if justifiable)
Selected k by variance scenario and confidence level:
- Variance scenario 1: 99% confidence, k = 3.8; 95% confidence, k = 2.9
- Variance scenario 2: 99% confidence, k = 4.6; 95% confidence, k = 3.5
A more critical attribute uses 95% confidence (lower k).

31 Link Tier 1 and Tier 2 assessments (k = 3 vs. custom k at 99% confidence): the custom k varies from 4.5 to 2.4.

32 Link Tier 1 and Tier 2 assessments (k = 3 vs. custom k at 95% confidence): the custom k varies from 3.4 to 2.1.

33 If the Biosimilar product is assessed to have <90% of lots within the Quality Range, it can be deduced that the underlying assumptions (allowable mean shift, variance ratio) are not met.

34 Practical utility of the algorithm
- Serves as a grid to vet whether the underlying assumptions are met
- Points sponsors to areas for further investigation:
  - If the biosimilar product variance is larger than the reference product variance, what is the root cause and how can it be addressed?
  - If the mean shift is larger than allowable, what is the root cause? Is the mean shift practically relevant? What process changes can be implemented?

35 Summary of Tier 2 Assessment
- There is no prescriptive approach for setting the k multiplier.
- The current control-charting paradigm only tests whether Biosimilar lots come from the same population as Reference lots; it treats the problem as a single population rather than a comparison of two distinct populations.
- The proposed algorithm to set k accounts for mean shifts, product variance inequality (if justified), sampling variability, and the criticality of the attribute.

36 References
Burdick, R.K. and Ramirez, J.G. (2015). "Statistical Issues in Biosimilar Analytical Assessment: Perspectives on FDA ODAC Analysis". Presentation at DIA Statistics Forum, April 2015, North Bethesda, MD.
Chow, S.C. (2014). "On Assessment of Analytical Similarity in Biosimilar Studies". Drug Des 3: 119. doi:10.4172/2169-0138.1000e124.
Tsong, Y., Shen, M., Dong, C. (2015). "Development of Statistical Approaches for Analytical Biosimilarity Evaluation". Presentation at DIA Statistics Forum, April 2015, North Bethesda, MD.

37 Acknowledgments
Richard Burdick (Elion Labs): Effect Size concept and SAS code
Aili Cheng (Pfizer PharmSci & PGS Statistics): imbalanced sample size discussion

