1 Changing Trial Designs on the Fly
Janet Wittes, Statistics Collaborative
ASA/FDA/Industry Workshop, September 2003
2 Context
- Trial that is hard to redo
- Serious aspect of a serious disease
- Orphan
3 Statistical rules limiting changes
- To preserve the Type I error rate
- To protect the study from technical problems arising from operational meddling
4 Challenge: sense vs. rigor
6 Challenge: senseless vs. rigor mortis
7 Scale of rigor
- Over-rigid
- Rigorous: prespecified methods for change that preserve the Type I error rate
- Unprespecified but reasonable change
- Invalid analysis: responders analysis, outcome-by-outcome analysis, completers-only analysis
8 Consequences
- No change during the study, OR
- Potential for the perception that the change was driven by the observed effect
9 Prespecified changes
- Sequential analysis
- Stochastic curtailment
- Futility analysis
- Internal pilot studies (see the sketch below)
- Adaptive designs
- Two-stage designs
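For the internal pilot idea in particular, a minimal sketch may help: the design-stage variance guess is replaced by the variance estimated from the first-stage data, and the sample size is recomputed. Everything below (endpoint, effect size, variances, alpha, power) is an illustrative assumption, not part of the talk.

```python
# Minimal sketch of an internal pilot sample-size re-estimation for a
# two-arm trial with a continuous endpoint. The design-stage SD is
# replaced by the SD estimated from first-stage data, and the per-group
# sample size is recomputed. All numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def n_per_group(sd, delta, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sided two-sample z-test."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(2 * (sd * (z_a + z_b) / delta) ** 2))

rng = np.random.default_rng(1)
delta = 5.0                      # clinically relevant difference (assumed)
sd_planned = 15.0                # design-stage guess at the common SD
n_planned = n_per_group(sd_planned, delta)

# Internal pilot: the first half of the planned patients in each group;
# here the true SD (20) is larger than the design-stage guess (15).
n_pilot = n_planned // 2
pilot_a = rng.normal(0.0, 20.0, n_pilot)
pilot_b = rng.normal(delta, 20.0, n_pilot)
sd_hat = np.sqrt((pilot_a.var(ddof=1) + pilot_b.var(ddof=1)) / 2)

# Recompute the sample size, never dropping below the original plan.
n_revised = max(n_planned, n_per_group(sd_hat, delta))
print(f"planned n/group = {n_planned}, pilot SD = {sd_hat:.1f}, "
      f"revised n/group = {n_revised}")
```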
10 Problems
- Technical: solved
- Operational: risks accepted
- Efficiency: understood
11 Add a DMC
- What if it acts inconsistently with the guidelines?
- What if something really unexpected happens?
- DMC initiates the change
- Steering Committee initiates the change
12 Reasons for unanticipated changes
- Unexpected high-risk group
- Changed standard of care
- Statistical method defective
- Too few endpoints
- Assumptions of the trial incorrect
- Other
13 Examples
1. Too much censoring; DMC extends the trial
2. Boundary not crossed, but DMC stops
3. Unexpected adverse event
4. Statistical method defective
5. Event rate too low; DMC changes the design
14 #1. Endpoint-driven trial
- Trial designed to stop after 200 deaths
- Observations differed from expectations: recruitment and mortality rate
- At 200 deaths, follow-up for many people was < 2 months (see the simulation sketch below)
- DMC: change follow-up to a minimum of 6 months
- P-value: 0.20 planned; at end
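A tiny simulation can show how this situation arises: when accrual is fast and mortality is lower than expected, the 200th death occurs while recently enrolled patients have almost no follow-up. The accrual period, hazard rate, and sample size below are invented for illustration.

```python
# Sketch: in an event-driven trial with fast accrual and a lower-than-
# expected death rate, the 200th death can arrive while many recently
# enrolled patients have under 2 months of follow-up. All rates invented.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
entry = rng.uniform(0, 12, n)              # enrollment spread over 12 months
survival = rng.exponential(60.0, n)        # low mortality: mean 60 months
death_calendar = entry + survival          # calendar time of each death

cutoff = np.sort(death_calendar)[199]      # calendar time of the 200th death
followup = np.clip(cutoff - entry, 0, None)
print(f"200th death at month {cutoff:.1f}")
print(f"patients with < 2 months of follow-up: {(followup < 2).mean():.0%}")
```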
15 #2. Boundary not crossed
- Primary endpoint: 7-day MI
- Secondary endpoint: one-year mortality
- Very stringent boundary
16 What the DMC sees
- Very strong result at 7 days
- No problem at 1 year
- Clear excess of serious adverse events
17–21 [Figures: Haybittle-Peto bound, with accumulating data shown at 10%, 30%, 50%, and 70%]
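The point of a Haybittle-Peto rule is that each interim look uses an extremely stringent criterion (commonly |Z| > 3, about p < 0.001) so that the final analysis can be run at essentially the nominal level. The simulation below, with an invented look schedule, is only a sketch of how little Type I error the interim looks consume.

```python
# Sketch: Haybittle-Peto monitoring. Interim looks reject only if |Z| > 3;
# the final look uses the (nearly) unadjusted 1.96. Simulating under the
# null shows how little Type I error the interim looks spend. The number
# and timing of looks are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_final = 20_000, 400
look_n = (np.array([0.25, 0.50, 0.75, 1.00]) * n_final).astype(int)

data = rng.normal(size=(n_trials, n_final))            # null: mean zero
cum = data.cumsum(axis=1)
z = cum[:, look_n - 1] / np.sqrt(look_n)               # z-statistic at each look

interim_stop = (np.abs(z[:, :-1]) > 3.0).any(axis=1)   # stringent interim bound
final_reject = np.abs(z[:, -1]) > 1.96                 # nominal final test
print(f"alpha spent at interim looks: {interim_stop.mean():.4f}")
print(f"overall Type I error:         {(interim_stop | final_reject).mean():.4f}")
```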
22 #3. Unexpected adverse event: PERT study of the WHI
Prespecified boundaries for:
    Benefit        Harm
    Heart attack   Stroke
    Fracture       PE
    Colon cancer   Breast cancer
23 Observations
    Benefit        Harm
    -----          Stroke
    Fracture       PE
    Colon cancer   Breast cancer
                   Heart attack
24 Actions
- Informed the women about the increased risk of stroke, heart attack, and PE
- Informed them again
- Stopped the study
25 #4. Statistical method defective
- Neurological disease; 20-question instrument, scale 0 to 80
- Anticipated that about 20% would not return
- Planned multiple imputation; results (see the sketch below):
  Value for ID 001: ?   MI values: -22, 176
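Imputed values of -22 and 176 on a 0-to-80 instrument are what an unbounded normal imputation model can produce when the fit is poor and the predictive variance is large. The sketch below reproduces that failure mode with invented numbers and shows one common repair, drawing from a predictive distribution truncated to the scale; nothing here is from the actual trial.

```python
# Sketch: an unbounded normal imputation model can return scores far
# outside a 0-80 instrument when the predictive SD is large. The mean
# and SD below are invented to reproduce the failure mode; one common
# repair is to draw from a truncated predictive distribution.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
scale_min, scale_max = 0.0, 80.0
pred_mean, pred_sd = 45.0, 70.0        # assumed: poor fit, huge uncertainty

# Five unconstrained imputations for a missing score ("ID 001")
draws = rng.normal(pred_mean, pred_sd, 5)
print("unconstrained imputations:", np.round(draws, 1))

# Truncated-normal draws respect the 0-80 range by construction
a = (scale_min - pred_mean) / pred_sd
b = (scale_max - pred_mean) / pred_sd
bounded = truncnorm.rvs(a, b, loc=pred_mean, scale=pred_sd,
                        size=5, random_state=rng)
print("truncated imputations:    ", np.round(bounded, 1))
```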
26 #5. Too few endpoints
- Example: approved drug; off-label use associated with an AE
- Literature: standard-of-care event rate of 20 percent
- Non-inferiority design with margin δ = 5
- Sample size: 800/group (see the sketch below)
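The 800 per group is roughly what a standard two-proportion non-inferiority formula gives for a 20 percent event rate and a 5-percentage-point margin, if one assumes (my assumption; the slide does not say) a one-sided alpha of 0.05 and 80 percent power.

```python
# Sketch: per-group sample size for non-inferiority of two proportions
# with p = 0.20 in both groups and a margin of 5 percentage points.
# One-sided alpha = 0.05 and 80% power are assumptions chosen to show
# that the formula lands near the slide's 800 per group.
from math import ceil
from scipy.stats import norm

p_control, p_treatment, margin = 0.20, 0.20, 0.05
alpha, power = 0.05, 0.80                      # assumed, not from the slide
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)

variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
n = ceil((z_a + z_b) ** 2 * variance / margin ** 2)
print(f"n per group = {n}")                    # about 800 per group
```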
27 Observation
- 400 people randomized
- 0 events
- What does the DMC do?
28 Choices
- Continue to recruit to 1600
- Stop and declare no excess
- Choose some sample size
- Tell the Steering Committee to choose a sample size
- What if n = 1? 2? 5? 10? (see the sketch below)
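One way to frame the zero-events question is through exact upper confidence limits: with 0 events in 400 patients, the one-sided 95 percent upper limit on the event rate is already below 1 percent, and recomputing it for 1, 2, 5, or 10 events (if that is what the "n = 1? 2? 5? 10?" question refers to) shows how quickly the picture changes. A quick Clopper-Pearson sketch:

```python
# Sketch: exact (Clopper-Pearson) one-sided 95% upper confidence limits
# on the event rate for 0, 1, 2, 5, and 10 events among 400 patients.
from scipy.stats import beta

n = 400
for events in (0, 1, 2, 5, 10):
    upper = beta.ppf(0.95, events + 1, n - events)
    print(f"{events:2d} events / {n}: rate <= {upper:.2%} (one-sided 95% UCL)")
```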
29 Conclusions
- Ensure that the DMC understands its role
- Separate the decision-making roles of the DMC and the Steering Committee
- Distinguish between reasonable changes on the fly and cheating
- Expect fuzzy borders
30 Technical
- Changing plans can increase the Type I error rate
- We need to adjust for multiple looks (see the sketch below)
- How do we adjust for changes?
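The first two bullets are easy to demonstrate: if each of K looks is tested at an unadjusted two-sided 0.05 level, the overall Type I error climbs well above 5 percent. The look schedule and maximum sample size below are illustrative.

```python
# Sketch: unadjusted repeated looks inflate the Type I error. Each
# simulated null trial is examined at K equally spaced looks, each
# tested two-sided at the 0.05 level. Look counts and the maximum
# sample size are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_max = 20_000, 500
cum = rng.normal(size=(n_trials, n_max)).cumsum(axis=1)   # null data

for K in (1, 2, 5, 10):
    looks = np.linspace(n_max // K, n_max, K).astype(int)
    z = cum[:, looks - 1] / np.sqrt(looks)                # z at each look
    inflated = (np.abs(z) > 1.96).any(axis=1).mean()
    print(f"{K:2d} looks: overall Type I error ≈ {inflated:.3f}")
```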
31 Operational
- Unblind assessments
- Subtle change in procedures
- In clinical trials, the FDA and SEC