Slide 1
Lecture 4: Sample size reviews
4.1 A general approach to sample size reviews
4.2 Binary data
4.3 Normally distributed data
4.4 Exact results: normally distributed responses
Slide 2
4.1 A general approach to sample size reviews
Many sample size formulae depend on nuisance parameters, the values of which have to be guessed
Part way through the trial we will have plenty of data on which to base a better guess
So, do that, and recalculate the sample size
Now use the new sample size, perhaps within the limits of minimum and maximum possible values
Assess the effect of this procedure on the type I error rate: usually it is very small
Slide 3
This is an adaptive design with a single interim analysis which may lead only to a reassessment of sample size
Its use is becoming widespread, and the regulatory authorities are generally well disposed towards it
Slide 4
4.2 Sample size review for binary data
Treatments: Experimental (E) and Control (C)
Success probabilities: p_E and p_C
Hypotheses: H_0: p_E = p_C; H_1: p_E > p_C
Type I error: α (one-sided)
Power: 1 − β, when p_E = p_ER and p_C = p_CR
Sample sizes: n_E and n_C, where n_E + n_C = n
Allocation ratio: 1:1, that is n_E = n_C
p_CR is the anticipated value of p_C, and an improvement from that value to p_E = p_ER on E would be clinically worthwhile
Slide 5
Probability difference approach
Initial sample size calculation
Put δ = p_E − p_C, and set the power 1 − β at δ = δ_R = p_ER − p_CR
Write p̄_R = (p_ER + p_CR)/2
Two popular formulae for n are:

n = 2[z_{1−α} √(2 p̄_R(1 − p̄_R)) + z_{1−β} √(p_ER(1 − p_ER) + p_CR(1 − p_CR))]² / δ_R²   (4.1)
(Machin et al., 1997)

and

n = 4(z_{1−α} + z_{1−β})² p̄_R(1 − p̄_R) / δ_R²   (4.2)
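As a rough illustration (not part of the lecture), formulae (4.1) and (4.2) might be coded as follows; the function names and argument conventions are my own, and the values are returned before any rounding:

```python
from math import sqrt
from statistics import NormalDist

def n_prob_diff_41(p_c, p_e, alpha=0.025, power=0.90):
    """Total sample size n from (4.1) (Machin et al. style), before rounding."""
    z = NormalDist().inv_cdf
    p_bar, delta = (p_e + p_c) / 2, p_e - p_c
    a = z(1 - alpha) * sqrt(2 * p_bar * (1 - p_bar))
    b = z(power) * sqrt(p_e * (1 - p_e) + p_c * (1 - p_c))
    return 2 * (a + b) ** 2 / delta ** 2

def n_prob_diff_42(p_bar, delta, alpha=0.025, power=0.90):
    """Total sample size n from (4.2); needs only the overall success rate p_bar."""
    z = NormalDist().inv_cdf
    return 4 * (z(1 - alpha) + z(power)) ** 2 * p_bar * (1 - p_bar) / delta ** 2
```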
Slide 6
To use these formulae:
use previous data and experience to guess p_CR
consider what difference δ_R would be clinically important
deduce p_ER and p̄_R
Using these values, find the required sample size n
Then, when data from about n/2 patients are available, a sample size review can be conducted
Slide 7
At the sample size review we do not change δ_R: this remains the clinically important difference
To recompute n based on (4.1), identify the control patients and find an estimate p̂_C of p_C as the success rate on C so far
Replace p_CR by p̂_C, p_ER by p̂_C + δ_R and p̄_R by p̂_C + δ_R/2
To recompute n based on (4.2), we need not break the blinding: just estimate p̄ as the overall success rate in the trial as a whole (over E and C)
The preservation of blindness makes the second option more attractive
Slide 8
Log-odds ratio approach
Initial sample size calculation
Put θ = log[p_E(1 − p_C) / {p_C(1 − p_E)}] and set the power 1 − β at θ = θ_R, computed from the values p_ER and p_CR
The resulting sample size formula is:

n = 4(z_{1−α} + z_{1−β})² / [p̄_R(1 − p̄_R) θ_R²]   (4.3)

This formula can be updated at a sample size review in the same way as (4.2), without breaking the blind
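A similar sketch for (4.3); again the function name is an assumption, not taken from the lecture:

```python
from math import log
from statistics import NormalDist

def n_log_odds_43(p_c, p_e, alpha=0.025, power=0.90):
    """Total sample size n from (4.3), based on the log-odds ratio theta_R."""
    z = NormalDist().inv_cdf
    theta = log(p_e * (1 - p_c) / (p_c * (1 - p_e)))   # log-odds ratio theta_R
    p_bar = (p_e + p_c) / 2
    return 4 * (z(1 - alpha) + z(power)) ** 2 / (p_bar * (1 - p_bar) * theta ** 2)
```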
Slide 9
Example
α = 0.025, 1 − β = 0.90: z_{1−α} = 1.96, z_{1−β} = 1.282
p_CR = 0.3, p_ER = 0.5, p̄_R = 0.4
(4.1) probability difference: δ_R = 0.2, n = 248
(4.2) probability difference: δ_R = 0.2, n = 252
(4.3) log-odds ratio: θ_R = 0.847, n = 244
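Using the sketches above, the figures on this slide can be reproduced up to rounding:

```python
print(n_prob_diff_41(0.3, 0.5))  # about 248
print(n_prob_diff_42(0.4, 0.2))  # about 252
print(n_log_odds_43(0.3, 0.5))   # about 244
```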
Slide 10
After 120 observations, the sample size is reviewed
We find that the overall success rate is 0.2, rather than the anticipated 0.4
Retaining the probability difference: δ_R = 0.2, and (4.2) gives n = 168, so the sample size goes down
δ_R = 0.2 is consistent with p_CR = 0.1, p_ER = 0.3
Retaining the log-odds ratio: θ_R = 0.847, and (4.3) gives n = 366, so the sample size goes up
θ_R = 0.847 is consistent with p_CR = 0.134, p_ER = 0.266
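A self-contained sketch of this review calculation, using only the blinded overall success rate (variable names are mine):

```python
from math import log
from statistics import NormalDist

z = NormalDist().inv_cdf
z_a, z_b = z(1 - 0.025), z(0.90)
p_bar = 0.2                                  # blinded overall success rate at the review

# retain delta_R = 0.2 and use formula (4.2)
n_diff = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / 0.2 ** 2

# retain theta_R = 0.847 and use formula (4.3)
theta_R = log((0.5 / 0.5) / (0.3 / 0.7))
n_lor = 4 * (z_a + z_b) ** 2 / (p_bar * (1 - p_bar) * theta_R ** 2)

print(round(n_diff), round(n_lor))           # about 168 and 366, as on the slide
```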
Slide 11
4.3 Normally distributed data
Treatments: Experimental (E) and Control (C)
Distributions: N(μ_E, σ²) and N(μ_C, σ²)
Hypotheses: H_0: μ_E = μ_C; H_1: μ_E > μ_C
Type I error: α (one-sided)
Power: 1 − β, when μ_E = μ_ER and μ_C = μ_CR
Sample sizes: n_E and n_C, where n_E + n_C = n
Allocation ratio: 1:1, that is n_E = n_C
Put δ = μ_E − μ_C and δ_R = μ_ER − μ_CR
Let σ_R² denote the anticipated common variance
Slide 12
The sample size is given by

n = 4(z_{1−α} + z_{1−β})² σ_R² / δ_R²   (4.4)

(see Slide 2.8)
The actual values of μ_ER and μ_CR have no effect on n other than through δ_R
The anticipated variance σ_R² is very influential, and is replaced by an estimate at the sample size review
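A corresponding sketch of (4.4); the function name is mine:

```python
from statistics import NormalDist

def n_normal_44(sigma2, delta_R, alpha=0.025, power=0.90):
    """Total sample size n from (4.4) for normally distributed responses."""
    z = NormalDist().inv_cdf
    return 4 * (z(1 - alpha) + z(power)) ** 2 * sigma2 / delta_R ** 2

# Example from the slides below: n_normal_44(1.0, 0.5) is about 168
```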
Slide 13
Estimating σ²
(a) Use the conventional unbiased estimate based on the n observations available so far:

s² = [Σ_{i∈E}(x_i − x̄_E)² + Σ_{i∈C}(x_i − x̄_C)²] / (n_E + n_C − 2)   (4.5)

To use this requires breaking the blind, at least as far as separating the two treatment groups: their identities need not be revealed
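A minimal sketch of the unblinded estimate (4.5), assuming the interim responses can be split by treatment group (function name and NumPy usage are mine):

```python
import numpy as np

def pooled_variance_unblinded(x_e, x_c):
    """Unbiased pooled within-group variance estimate, as in (4.5)."""
    x_e, x_c = np.asarray(x_e, float), np.asarray(x_c, float)
    ss = np.sum((x_e - x_e.mean()) ** 2) + np.sum((x_c - x_c.mean()) ** 2)
    return ss / (len(x_e) + len(x_c) - 2)
```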
Slide 14
(b) Avoid unblinding, using a simple adjustment (Gould, 1995)
For each term in (4.5), use the identity Σ_{i∈E}(x_i − x̄_E)² = Σ_{i∈E}(x_i − x̄)² − n_E(x̄_E − x̄)², where x̄ is the overall mean, and similarly for C
Substitute in (4.5)
Slide 15
If the desired difference is present, then x̄_E − x̄ ≈ δ_R/2 and x̄_C − x̄ ≈ −δ_R/2, giving

σ̂² = [(n − 1)s_T² − n δ_R²/4] / (n − 2)   (4.6)

where s_T², the estimate of the total variance, is computed from all n responses ignoring treatment
This estimate can be used for the sample size review without unblinding
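A sketch of a blinded, adjusted estimate in the spirit of (4.6), assuming equal allocation and that the anticipated difference δ_R is present; the algebraic form used here is my reconstruction rather than a quotation from the slide:

```python
import numpy as np

def adjusted_variance_blinded(x_all, delta_R):
    """Blinded estimate of sigma^2 as in (4.6): start from the total (lumped)
    sum of squares and remove the contribution of an assumed difference delta_R."""
    x = np.asarray(x_all, float)
    n = len(x)
    total_ss = np.sum((x - x.mean()) ** 2)      # (n - 1) * total variance s_T^2
    return (total_ss - n * delta_R ** 2 / 4) / (n - 2)
```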
Slide 16
(c) Avoid unblinding, using an E-M algorithm (Gould and Shih, 1992; Gould, 1995)
DO NOT USE THIS METHOD!
See Friede and Kieser (2002) to see why you should not
Slide 17
Example
α = 0.025, 1 − β = 0.90: z_{1−α} = 1.96, z_{1−β} = 1.282
δ_R = 0.5, σ_R = 1.0
(4.4): n = 168
After 80 patients, a sample size review is conducted using the interim responses
Slide 18
Unblinded approach
From (4.5), compute the pooled within-group estimate of σ² from the 80 interim responses
From (4.4), with σ_R² replaced by this estimate, the new sample size follows
Slide 19
Blinded approach
From (4.6), compute the adjusted estimate of σ² from the total variance of the 80 interim responses
From (4.4), the new sample size follows
Slide 20
4.4 Exact results: normally distributed responses
Consider the unblinded approach to sample size review
An initial sample size is set
When the first n_1 patients have given responses, σ² is estimated by the usual pooled variance
The sample size is then recalculated (as n_2) and that number of subjects is taken, provided that n_2 ≥ n_1
Finally, a t-test is performed
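The exact results developed below can also be approximated by simulation. Here is a minimal Monte Carlo sketch of the unblinded two-stage procedure just described; the function name, the cap n_max and all default values are my own choices:

```python
import numpy as np
from scipy import stats

def simulate_review(delta, sigma, n1=100, alpha=0.05, power=0.90,
                    delta_R=0.414, n_max=1000, n_sims=20000, seed=1):
    """Monte Carlo estimate of the rejection probability of the two-stage design:
    estimate sigma^2 from the first n1 responses via (4.5), recompute n from (4.4),
    recruit the extra patients, then do a one-sided pooled two-sample t-test."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf
    rejections = 0
    for _ in range(n_sims):
        e1 = rng.normal(delta, sigma, n1 // 2)        # stage-1 experimental arm
        c1 = rng.normal(0.0, sigma, n1 // 2)          # stage-1 control arm
        s2 = (np.sum((e1 - e1.mean()) ** 2) +
              np.sum((c1 - c1.mean()) ** 2)) / (n1 - 2)   # pooled variance, (4.5)
        n2 = int(np.ceil(4 * (z(1 - alpha) + z(power)) ** 2 * s2 / delta_R ** 2))
        n2 = min(max(n2, n1), n_max)                  # take n2 >= n1, capped at n_max
        extra = (n2 - n1) // 2
        e = np.concatenate([e1, rng.normal(delta, sigma, extra)])
        c = np.concatenate([c1, rng.normal(0.0, sigma, extra)])
        t, p = stats.ttest_ind(e, c)                  # pooled-variance t-test
        if t > 0 and p / 2 < alpha:                   # one-sided test at level alpha
            rejections += 1
    return rejections / n_sims

# type I error: simulate_review(delta=0.0, sigma=1.0)
# power at delta = delta_R: simulate_review(delta=0.414, sigma=1.0)
```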
Slide 21
The final t-statistic is

t = (x̄_{E2} − x̄_{C2}) / [s_2 √(1/n_{E2} + 1/n_{C2})]

where the subscript 2 identifies values computed for the final sample
Now consider the within-group sums of squares that make up the pooled variance s_2²
Slide 22
With the subscript + relating to the extra patients recruited after the sample size review, it can be shown that

Σ_{i∈E}(x_i − x̄_{E2})² = Σ_{i∈E1}(x_i − x̄_{E1})² + Σ_{i∈E+}(x_i − x̄_{E+})² + (n_{E1} n_{E+}/n_{E2})(x̄_{E1} − x̄_{E+})²

Dividing by the true value of σ², the quantities on the right-hand side are distributed as independent χ² random variables on n_{E1} − 1, n_{E+} − 1 and 1 degrees of freedom respectively
A similar result holds for the control group
Slide 23
The final t-statistic is a function of x̄_{E2} − x̄_{C2} and these sums of squares, and these quantities are mutually independent
From this result, the conditional distribution of t, given the first-stage pooled variance s_1², can be deduced
The revised sample size depends on the data only through s_1²
Slide 24
The density of t is given by

f(t) = ∫₀^∞ f_{ν(w)}(t) g(w) dw

where W represents the random variable (n_1 − 2)s_1²/σ², f_{ν(w)} is a t density whose degrees of freedom ν(w) are determined by w (they form a step function in w), and g is a χ² density with n_1 − 2 degrees of freedom
In this way, for a given value of σ², the exact properties of the sample size review procedure can be determined
Slide 25
Example
α = 0.05, 1 − β = 0.90, σ_R = 1, δ_R = 0.414, n = 200
A sample size review is conducted after 100 responses
Suppose that the true value of σ² is 1.43
Then the true type I error rate will be 0.051
The true power will be 0.920
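For instance, using the simulation sketch after Slide 20, these figures could be checked approximately (taking σ = √1.43 as the true standard deviation):

```python
sigma_true = 1.43 ** 0.5
print(simulate_review(delta=0.0,   sigma=sigma_true))  # type I error, about 0.05
print(simulate_review(delta=0.414, sigma=sigma_true))  # power, about 0.9
```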
Slide 26
References
General methods for sample size review: Wittes and Brittain (1990); Gould (1992); Birkett and Day (1994)
Exact evaluations: Kieser and Friede (2000); Friede and Kieser (2006); Montague (2007)