Slide 1

Lecture 5: Adaptive designs
5.1 Introduction
5.2 Fisher’s combination method
5.3 The inverse normal method
5.4 Difficulties with adaptive designs

Slide 2

5.1 Introduction

Current interest in “adaptive designs” has grown largely from the work of Peter Bauer and his colleagues at the University of Vienna since the early 1990s.

Some of the ideas are inherent in Lloyd Fisher’s “self-designing clinical trials”, introduced independently at about the same time (Fisher, 1998).

An earlier class of adaptive designs, introduced by Tom Louis (1975), is now called “response adaptive designs”.

Slide 3

Motivation: to allow more than simply stopping at an interim analysis. For example:
- dropping/adding treatment arms
- changing the primary endpoint
- changing the trial objective (such as switching from non-inferiority to superiority)
- and more…

Recent books: Chow (2006), Chang (2006)

Slide 4

The null distribution of a p-value

Consider a test of H0: θ = 0 vs H1: θ > 0, based on a continuously distributed test statistic T(x) computed from data x.

Let p denote the resulting one-sided p-value, p = P(T ≥ t; θ = 0), where T denotes the random value and t the observed value of the test statistic.

p itself is an observed value of a random variable P, and the probability P(P ≤ p; θ = 0) is equal to p.

That is, under H0, P is uniformly distributed on (0, 1).
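As an informal check of this uniformity result, the short simulation below (not part of the lecture; the one-sample z-test and all numerical settings are illustrative assumptions) generates one-sided p-values under H0 and confirms that they behave like draws from U(0, 1).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, reps = 50, 20000
    # one-sample z-statistics under H0: theta = 0, known unit variance
    z = rng.standard_normal((reps, n)).mean(axis=1) * np.sqrt(n)
    p = 1 - stats.norm.cdf(z)                   # one-sided p-values

    print(np.mean(p <= 0.025))                  # should be close to 0.025
    print(stats.kstest(p, "uniform").pvalue)    # no strong evidence against uniformity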

Slide 5

5.2 Fisher’s combination method

A two-stage design, no early stopping. The two datasets must be independent.

First stage: m1 observations per group. Second stage: m2 observations per group. In terms of the cumulative sample sizes, m1 = n1 and m2 = n2 − n1 (the new observations collected at the second stage).

Slide 6

The data from the first stage and the new data collected at the second stage are treated separately:

                          First stage   Second stage
    Observations          m1            m2
    p-value (one-sided)   p1            p2

Slide 7

Combine the evidence from the two stages via the product of the p-values from the two stages. Under H0 (when there is no early stopping),

    −2 log(p1 p2) has a χ² distribution on 4 degrees of freedom.

This is an old idea from meta-analysis (Fisher, 1932), sometimes referred to as Fisher’s combination method.
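A minimal sketch of the combination test (not from the lecture; the two stage p-values are invented for illustration): compute −2 log(p1 p2) and refer it to the χ² distribution on 4 degrees of freedom.

    import numpy as np
    from scipy import stats

    def fisher_combination(p1, p2):
        """Combined statistic and overall p-value for two independent stages."""
        stat = -2.0 * (np.log(p1) + np.log(p2))
        return stat, stats.chi2.sf(stat, df=4)

    stat, p_overall = fisher_combination(0.04, 0.03)   # illustrative stage p-values
    print(stat, p_overall)   # reject at one-sided alpha = 0.025 if p_overall <= 0.025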

Slide 8

Proof: under H0, p1 ~ U(0, 1), p2 ~ U(0, 1), and they are independent because the data from the two stages are independent.

Put Li = −2 log(pi), i = 1, 2, so that L1 + L2 = −2 log(p1 p2).

Then P(Li ≤ x) = P(pi ≥ exp(−x/2)) = 1 − exp(−x/2), and so Li ~ EXP(½), which is the same as the χ² distribution on 2 degrees of freedom.

Hence L1 + L2 = −2 log(p1 p2) ~ χ² on 4 degrees of freedom.

Slide 9

A two-stage design, with early stopping: Bauer and Köhne (1994). Again, the two datasets must be independent.

First stage: m1 observations per group. At the interim analysis either stop and reject H0, stop and do not reject H0, or continue to the second stage: m2 further observations per group.

Slide 10

The upper 0.975 point of the χ² distribution on 4 degrees of freedom is 11.14. Setting α = 0.025 (one-sided), we will PROCEED to claim that E > C if

    −2 log(p1 p2) ≥ 11.14

that is, if

    p1 p2 ≤ exp(−11.14/2) = 0.0038

If, after the first stage, we already know that p1 ≤ 0.0038, then p1 p2 ≤ 0.0038 whatever the second stage gives, so there is no need to conduct the second stage: we will PROCEED to claim that E > C (curtailed sampling).
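The following sketch (not from the lecture; the stage p-values are invented for illustration) computes the critical value and applies the rule, including the curtailment check after the first stage.

    import numpy as np
    from scipy import stats

    alpha = 0.025
    crit = stats.chi2.ppf(1 - alpha, df=4)   # upper 0.975 point, about 11.14
    c = np.exp(-crit / 2)                    # about 0.0038: reject if p1 * p2 <= c

    p1 = 0.002                               # illustrative first-stage p-value
    if p1 <= c:
        print("Curtail after stage 1 and PROCEED: p1*p2 <= c whatever p2 is")
    else:
        p2 = 0.10                            # illustrative second-stage p-value
        print("PROCEED" if p1 * p2 <= c else "do not reject H0")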

Slide 11

The full procedure is as shown in the boundary diagram: at the first stage, stop to PROCEED if p1 ≤ α1 and stop to ABANDON if p1 ≥ α0; otherwise continue to the second stage, where we PROCEED if p1 p2 ≤ C and ABANDON otherwise. For α = 0.025, α1 = 0.0038.

Slide 12

Design parameters

If the early PROCEED boundary is set at α1 and the early ABANDON boundary is set at α0, then in order to achieve an overall one-sided type I error rate of α,

    α1 + C(log α0 − log α1) = α        (5.1)

Slide 13

The left-hand side of (5.1) is the probability of PROCEEDING under H0, found by integration:

    P(PROCEED; H0) = P(p1 ≤ α1) + P(α1 < p1 < α0 and p1 p2 ≤ C)
                   = α1 + integral from α1 to α0 of P(p2 ≤ C/p1) dp1
                   = α1 + integral from α1 to α0 of (C/p1) dp1     (provided C ≤ α1, so that C/p1 ≤ 1)
                   = α1 + C(log α0 − log α1)

Slide 14

Hence

    α1 + C(log α0 − log α1) = α

so that

    C = (α − α1) / (log α0 − log α1)
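As a rough numerical check of (5.1) (not from the lecture; α1 = 0.01 and α0 = 0.5 are arbitrary illustrative boundaries), the sketch below solves for C and then estimates the overall type I error by simulating uniform p-values.

    import numpy as np

    alpha, alpha1, alpha0 = 0.025, 0.01, 0.5          # illustrative design values
    C = (alpha - alpha1) / (np.log(alpha0) - np.log(alpha1))
    print(C)                                          # second-stage bound on p1*p2

    rng = np.random.default_rng(2)
    p1 = rng.uniform(size=1_000_000)                  # U(0,1) under H0
    p2 = rng.uniform(size=1_000_000)
    reject = (p1 <= alpha1) | ((p1 > alpha1) & (p1 < alpha0) & (p1 * p2 <= C))
    print(reject.mean())                              # should be close to alpha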

Slide 15

Multiple looks

At the ith interim analysis calculate the product of p-values p1 × … × pi.
Stop and PROCEED if p1 × … × pi ≤ Ci.
Stop and ABANDON if pi ≥ α0i.
Use a recursive method to find C1,…, Ck and α01,…, α0k satisfying some chosen constraints: Wassmer (1999).
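Wassmer’s recursive calculations are not reproduced here, but the sketch below (not from the lecture; the boundary values are arbitrary illustrations) checks by simulation what overall one-sided type I error a given set of multi-look boundaries delivers under H0.

    import numpy as np

    C = [0.0038, 0.0038, 0.0038]     # illustrative PROCEED bounds on p1*...*pi
    alpha0 = [0.8, 0.7, 1.0]         # illustrative ABANDON bounds on the current pi
    rng = np.random.default_rng(3)
    reps = 200_000
    rejections = 0
    for _ in range(reps):
        prod = 1.0
        for i in range(len(C)):
            p = rng.uniform()        # p-values are independent U(0,1) under H0
            prod *= p
            if prod <= C[i]:         # stop and PROCEED
                rejections += 1
                break
            if p >= alpha0[i]:       # stop and ABANDON
                break
    print(rejections / reps)         # estimated overall type I error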

Slide 16

5.3 The inverse normal method

Fisher (1998), Lehmacher and Wassmer (1999)

Recall that, for group sequential trials, we need test statistics Bi and Vi which, under the null hypothesis, satisfy:
- Bi ~ N(0, Vi)
- increments (Bi − Bi−1) between interims are independent

Regardless of where these statistics come from, if they are plotted and compared with sequential stopping boundaries, then the required type I error will be achieved.

Slide 17

Transforming the p-value

Let Z = Φ⁻¹(1 − P), where Φ denotes the N(0, 1) distribution function. Then

    P(Z ≤ z; θ = 0) = P(Φ⁻¹(1 − P) ≤ z; θ = 0)
                    = P(1 − P ≤ Φ(z); θ = 0)
                    = P(P ≥ 1 − Φ(z); θ = 0)
                    = Φ(z)

so that Z ~ N(0, 1).
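A two-line simulation (not from the lecture; the sample size and seed are arbitrary) illustrating the transform: p-values drawn under H0 are mapped by Φ⁻¹(1 − p) into values that behave like N(0, 1) draws.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    p = rng.uniform(size=100_000)      # one-sided p-values under H0
    z = stats.norm.ppf(1 - p)          # inverse normal transform
    print(z.mean(), z.std())           # close to 0 and 1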

Slide 18

Combining the p-values

Consider tests of Hi0: θi = 0 vs Hi1: θi > 0, based on independent, sequentially available data sets xi, with corresponding one-sided p-values pi, i = 1,…, k.

Then Zi = Φ⁻¹(1 − Pi), i = 1,…, k, are independent N(0, 1) random variables, and Yi = Wi Φ⁻¹(1 − Pi), i = 1,…, k, are independent N(0, Wi²) random variables.

Slide 19

Now put

    Bi = Y1 + … + Yi
    Vi = W1² + … + Wi²

Then, if all null hypotheses Hi0: θi = 0 are true:
- Bi ~ N(0, Vi)
- increments (Bi − Bi−1) between interims are independent

So, if these statistics are plotted and compared with sequential stopping boundaries, then the required type I error will be achieved.
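A sketch of the combination in practice (not from the lecture; the weights, the stage p-values and the flat boundary value 2.29 are all illustrative assumptions rather than a derived design):

    import numpy as np
    from scipy import stats

    W = np.array([3.0, 3.0, 3.0])        # prespecified weights (illustrative)
    p = np.array([0.20, 0.04, 0.01])     # stagewise one-sided p-values (illustrative)

    Y = W * stats.norm.ppf(1 - p)        # Yi = Wi * Phi^{-1}(1 - pi)
    B = np.cumsum(Y)                     # Bi = Y1 + ... + Yi
    V = np.cumsum(W ** 2)                # Vi = W1^2 + ... + Wi^2

    z = B / np.sqrt(V)                   # standardised statistics at each interim
    for i, zi in enumerate(z, start=1):
        print(i, zi, "PROCEED" if zi >= 2.29 else "continue")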

Slide 20

Applications

1. The hypotheses Hi0 could all be the same, H0: θ = 0, based on independent data sets xi, each comprising only the new data observed between the (i − 1)th and the ith interim analyses. Then the sample size, allocation ratio or other design features concerning the ith dataset can depend on previous datasets.

2. The hypotheses Hi0 could concern different endpoints (mortality, time to progression, tumour shrinkage), or different test statistics (logrank, Wilcoxon, binary) based on independent groups of patients. Then the endpoint or test statistic could be changed between interim analyses, provided that H0: “the treatments are identical” is to be tested.

Slide 21

Application to group sequential trials

Suppose that H0: θ = 0 is tested against H1: θ > 0 using the ith batch of data, based on the test statistic (Bi − Bi−1), as used in the group sequential approach.

The corresponding one-sided p-values, Pi, are

    Pi = 1 − Φ((Bi − Bi−1)/√(Vi − Vi−1))

Put Wi² = Vi − Vi−1, so that

    Yi = Wi Φ⁻¹(1 − Pi) = √(Vi − Vi−1) Φ⁻¹(Φ((Bi − Bi−1)/√(Vi − Vi−1))) = Bi − Bi−1
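A quick numerical check (not from the lecture; the increments are invented) that with Wi² = Vi − Vi−1 the inverse normal statistics reproduce the group sequential Bi exactly:

    import numpy as np
    from scipy import stats

    dB = np.array([1.2, 0.7, 2.1])              # increments Bi - Bi-1 (illustrative)
    dV = np.array([4.0, 3.5, 5.0])              # increments Vi - Vi-1 (illustrative)

    P = 1 - stats.norm.cdf(dB / np.sqrt(dV))    # stagewise one-sided p-values
    Y = np.sqrt(dV) * stats.norm.ppf(1 - P)     # Wi * Phi^{-1}(1 - Pi)

    print(np.allclose(np.cumsum(Y), np.cumsum(dB)))   # True: the approaches agree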

Slide 22

It follows that the combined statistics satisfy Y1 + … + Yi = Bi and W1² + … + Wi² = Vi, so that the inverse normal approach and the group sequential approach are identical in this case.

This is a good thing! It means that the approach is built on solid foundations.

Slide 23

Sequential t-tests

Assume the following situation:
Treatments: Experimental (E) and Control (C)
Distributions: N(μE, σ²) and N(μC, σ²)
Hypotheses: H0: μE = μC vs H1: μE > μC

Put θ = μE − μC. Then H0: θ = 0 is tested against H1: θ > 0 at the ith interim, using a t-test based only on the data xi collected since the previous interim.

Slide 24

The t-statistic used will be the usual two-sample t-statistic computed from the stage-i data alone: the difference between the E and C sample means divided by its estimated standard error, with σ² estimated from that stage’s data only. This is similar to the form given on Slide 2.4, except that:
- on Slide 2.4 all quantities relate to the cumulative data up to the ith interim analysis (sample sizes are denoted by m)
- here all quantities relate only to the data collected since the (i − 1)th interim analysis

Slide 25

The corresponding one-sided p-value, pi, will be pi = P(T ≥ ti), where T has the t distribution with the degrees of freedom appropriate to the stage-i data. Put Yi = Wi Φ⁻¹(1 − pi), so that the statistics Bi and Vi can be formed and compared with stopping boundaries as before.

This is rather different from the sequential t-test of Lecture 2:
Lecture 2: efficient estimation of σ², approximate type I error
Lecture 5: inefficient estimation of σ², exact type I error
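A sketch of this per-stage t-test feeding the inverse normal combination (not from the lecture; the group sizes, true effect, seed and equal weights are all illustrative assumptions):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def stage_p_value(x_e, x_c):
        """One-sided p-value (E > C) from a two-sample t-test on one stage's data."""
        t, p_two = stats.ttest_ind(x_e, x_c, equal_var=True)
        return p_two / 2 if t > 0 else 1 - p_two / 2

    # two stages of new data, 20 patients per group per stage, true effect 0.5 sd
    p_vals = np.array([stage_p_value(rng.normal(0.5, 1, 20), rng.normal(0.0, 1, 20))
                       for _ in range(2)])

    W = np.array([1.0, 1.0])                     # equal prespecified weights
    B = np.cumsum(W * stats.norm.ppf(1 - p_vals))
    V = np.cumsum(W ** 2)
    print(p_vals, B / np.sqrt(V))                # standardised statistics at each interim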

Slide 26

5.4 Discussion of adaptive designs

(a) Too much flexibility can be a bad thing

If the null hypothesis changes at each interim, what does rejection mean? What have you proved?
If the testing method changes at each interim, what parameter can be estimated?
The wilder reaches of flexibility should be avoided if possible.

Slide 27

(b) How should the weights be chosen?

In Fisher’s combination method each batch of data is weighted equally. In the inverse normal method the weights Wi can be varied, but they must be chosen in advance.

This can lead to inappropriate weighting: the second batch of data may be far larger than the first, yet be weighted equally in the analysis. For example, the Vi may turn out not to be as planned. Should we use the predicted values and get the theory right, or use the observed values and get the weighting right?

Slide 28

(c) The method is inefficient

The final analysis is not based on a sufficient statistic, leading to an inefficient use of the data. If the hypotheses are fixed correctly in advance, and no changes are made to the testing approach, then a group sequential design can always be found that beats any adaptive design (Jennison and Turnbull, 2003; Tsiatis and Mehta, 2003; Kelly et al., 2005).

(d) Post-trial estimation is difficult

But it can be done: Brannath, Posch and Bauer (2002).

Slide 29

(e) Changing θR can be counterproductive

A popular suggestion is to estimate θ at an interim analysis, and then to re-power the trial for θR equal to that estimate. Proponents consider θR to be a guess of the actual value of θ, rather than a clinically important difference.

The problem is that the smaller the estimate of θ, the larger the sample size: the trial becomes bigger when the treatment advantage is of least value.

It is better to plan a sequential trial based on a clinically important difference θR, which will stop early if θ is much larger than θR.
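A small illustration of the point (not from the lecture; α, power, σ and the candidate estimates are arbitrary): with the standard two-group sample-size formula, the re-planned per-group size grows like 1/θ̂², so smaller interim estimates trigger much larger trials.

    import numpy as np
    from scipy import stats

    alpha, power, sigma = 0.025, 0.90, 1.0
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)

    for theta_hat in [0.5, 0.3, 0.1]:
        n_per_group = 2 * (z * sigma / theta_hat) ** 2
        print(theta_hat, round(n_per_group))   # per-group size grows as 1/theta_hat^2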

