
1 FOLLOW-UP Simon Gates 13 April 2011

2 Contents Definition of follow-up Problems with follow-up Power Bias How much loss is acceptable? Prevention of losses to follow-up DISCUSSION

3 Sample size slippages in randomised trials: exclusions and the lost and wayward Kenneth F Schulz, David A Grimes Lancet 2002; 359: 781–85

4 WHAT DO WE MEAN BY FOLLOW-UP?

5 Follow-up No “official” definition Data collection in a clinical trial Time points remote from recruitment Usually after participant has left health care setting Usually after trial treatment has finished Sometimes separate “follow-up” study after original trial completed Terminology derived from cohort studies

6 Examples MINT: physiotherapy treatment for whiplash injuries Research clinic: assessment and randomisation → trial treatment for 8 weeks → follow-up at 4, 8 and 12 months post-randomisation

7 Examples ORACLE trial: antibiotics for pre-term labour Women recruited in preterm labour Primary outcome: babies’ health at discharge from hospital (1 day to a few months after randomisation) Children’s health assessed at 7 years

8 Methods of follow-up Databases/routinely collected data Participant contact Attendance at clinic Home visit Post Phone Email/web questionnaires

9 THE PROBLEM

10 Follow-up: the problem Participants are followed up to get important outcome information In some cases primary outcome is measured 12 months or more after randomisation Tendency for more participants to be lost from trial with increasing time Death Withdrawal Emigration Non-response

11 Follow-up: the problem Leads to increasing quantity of missing outcome data with time from randomisation Missing data: same issues as data missing for any other reason Missing data potentially have two bad effects: Loss of power Introduction of bias

12 Power Trials always have a specified target sample size, determined partly by the required power The chance of detecting an effect if there is one Usually set at 80% or 90% Reducing sample size always reduces power Makes it MORE LIKELY that a real effect will be missed

13 30% outcome in control group, 15% in intervention. Target sample size 348 (90% power)

Loss   Sample size   Power
10%    313           87%
20%    278           82%
30%    244           76%
40%    209           69%
50%    174           59%
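The erosion of power in the table above can be sketched with a quick calculation. This is a minimal illustration using the normal approximation for a two-sided comparison of two proportions (30% vs 15%), split equally between arms; the slide's exact figures were presumably produced with a different formula (for example a continuity correction), so the numbers will not match exactly, but the downward trend is the point.

```python
# Sketch: how loss to follow-up erodes power for a two-proportion
# comparison (30% vs 15%), using the plain normal approximation.
# Values differ slightly from the slide's table, which likely used
# a continuity-corrected sample-size formula; the trend is the same.
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test for two proportions."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return NormalDist().cdf(abs(p1 - p2) / se - z_crit)

target_n = 348  # total target sample size, from the slide
for loss in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    n_total = round(target_n * (1 - loss))
    pw = power_two_proportions(0.30, 0.15, n_total / 2)
    print(f"{loss:4.0%} loss: n = {n_total:3d}, power ≈ {pw:.0%}")
```

Only the standard library is needed (`statistics.NormalDist` is available from Python 3.8).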

14 Bias Bias is a systematic departure from the correct result Obviously a bad thing! Randomisation is designed to avoid bias by ensuring that groups being compared are the same If participants lost are not a random selection, bias is caused

15 Bias: examples of non-random losses Loss related to baseline characteristics Older/younger people more likely to be lost Sicker/less sick participants more likely to be lost Loss related to treatment Patients who have bad side effects more likely to be lost: more losses in active drug than placebo Loss related to outcome Patients with worse/better outcomes more likely to be lost
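The third mechanism above, loss related to outcome, can be demonstrated with a toy simulation. All numbers here are made up for illustration: the true event rate is 30%, but participants with a bad outcome are more likely to drop out, so the rate observed among completers is systematically too low.

```python
# Toy simulation (hypothetical numbers): participants with bad
# outcomes drop out more often, so the event rate observed among
# completers is biased downward from the true 30%.
import random

random.seed(1)
N = 100_000
true_rate = 0.30           # true probability of a bad outcome
p_lost_if_bad = 0.40       # dropout probability given a bad outcome
p_lost_if_good = 0.10      # dropout probability given a good outcome

observed = []
for _ in range(N):
    bad = random.random() < true_rate
    lost = random.random() < (p_lost_if_bad if bad else p_lost_if_good)
    if not lost:
        observed.append(bad)

observed_rate = sum(observed) / len(observed)
print(f"true rate {true_rate:.0%}, observed among completers {observed_rate:.1%}")
```

Analytically the completers' rate here is 0.30×0.60 / (0.30×0.60 + 0.70×0.90) ≈ 22%, well below the true 30%; no amount of extra recruitment fixes this, because the loss is non-random.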

16 Bias Missing data do not always introduce bias but there is a risk that they will Especially if losses are different between the trial groups

17 HOW MUCH LOSS IS ACCEPTABLE?

18 How much loss is acceptable? Risk of bias increases with increasing loss No clear information on the size of bias risk associated with different percentage losses of participants Commonly >20% loss is regarded as high risk of bias, <5% low risk of bias Origin of this is obscure Not supported by any data [methodological project?]

19 How much loss is acceptable? % loss is not the only important criterion; also need to consider loss in each group Important bias may exist if losses are low overall but different between the groups Could be different numbers lost Or same number lost but different people

20 How much loss is acceptable? Relationship between risk of bias and % loss of participants likely to be context-dependent Will differ between different populations and settings Some types of patient may have high losses that occur at random (low risk of bias) Others may have smaller losses but they occur in particular people (high risk of bias)

21 How much loss is acceptable? Variation in achievable follow-up rates Hospital data only; should achieve 100% 12 month follow-up of drug users; difficult population, losses likely to be high Losses always increase with time

22 Achievable follow-up rates Acute injury: young, little health contact, recovered Critical care: old, low awareness of trial, high death rate, other health issues Cancer: ongoing contacts with health services, high awareness of trials

23 WHAT TO DO ABOUT LOSS TO FOLLOW-UP

24 What to do about losses? Lack of power is easier to address: increase sample size Routinely done in sample size calculations Bias is harder to address Statistical methods to account for missing data (multiple imputation, adjustment for covariates) These are complex, rely on assumptions, and remain imperfect Prevention of losses is much better!
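To make the multiple-imputation idea mentioned above concrete, here is a deliberately minimal sketch, not a substitute for a proper implementation (such as R's `mice` package): each missing binary outcome is filled in by drawing from the observed event rate in the same arm, the analysis is repeated m times, and the m estimates are averaged (a simplified stand-in for Rubin's rules, which also combine the variances). The data are invented for illustration.

```python
# Minimal sketch of multiple imputation for a missing binary outcome.
# Hypothetical data: 1 = event, 0 = no event, None = lost to follow-up.
# Real analyses should use a dedicated package and full Rubin's rules.
import random

random.seed(2)
arm_a = [1, 0, 0, 1, 0, None, 0, 1, None, 0]
arm_b = [0, 0, 1, 0, None, 0, 0, 0, 1, None]

def imputed_rate(arm, m=50):
    """Average event rate over m imputed copies of one arm."""
    observed = [y for y in arm if y is not None]
    p_obs = sum(observed) / len(observed)  # event rate among completers
    estimates = []
    for _ in range(m):
        # draw each missing outcome from the observed event rate
        filled = [y if y is not None else int(random.random() < p_obs)
                  for y in arm]
        estimates.append(sum(filled) / len(filled))
    return sum(estimates) / m  # pooled point estimate

print(f"arm A event rate ≈ {imputed_rate(arm_a):.2f}")
print(f"arm B event rate ≈ {imputed_rate(arm_b):.2f}")
```

Note the sketch only restores the point estimate under a missing-at-random assumption; if losses depend on the unobserved outcome itself (as in the bias examples earlier), imputation from observed data cannot remove the bias.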

25 Preventing losses to follow-up The aim must always be to determine outcomes for 100% of the trial participants The question is how to achieve this…

26 Postal/electronic follow-up Evidence based strategies to increase response rate Edwards PJ, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews 2009, Issue 3.

27 Postal/electronic follow-up 481 trials of postal questionnaires and 32 studies of electronic Majority not from RCTs or health care settings (e.g. insurance database, phone book, etc) Many interventions found to be effective

28 Results: postal questionnaires

Intervention                                  Odds ratio
monetary incentives                           1.87
recorded delivery                             1.76
teaser on the envelope                        3.08
more interesting questionnaire topic          2.00
pre-notification                              1.45
follow-up contact / providing second copy     1.35
shorter questionnaires                        1.64
mentioning an obligation to respond           1.61
university sponsorship                        1.32
personalised questionnaires                   1.14
hand-written addresses                        1.25
stamped return envelopes                      1.24
assurance of confidentiality                  1.33
first class outward mailing                   1.11

29 Results: electronic questionnaires

Intervention                                        Odds ratio
incentives                                          1.72
shorter e-questionnaires                            1.73
including a statement that others had responded     1.52
more interesting topic                              1.85
lottery with immediate notification of results      1.37
offer of survey results                             1.36
white background                                    1.31
personalised e-questionnaires                       1.24
simple header                                       1.23
textual representation of response categories       1.19
giving a deadline                                   1.18
picture included in an e-mail                       3.05

30 Postal questionnaires Not all effective strategies applicable to trials or feasible But we should routinely use (and request funding for) evidence-based strategies Incentives Pre-notification

31 Two more systematic reviews on the way… Hoile EC, Free C, Edwards PJ, Felix LM. Methods to increase response rates for data collected by telephone (Protocol). Cochrane Database of Systematic Reviews 2009, Issue 3. Edwards PJ, Cooper R, Wentz R, Fernandes J. Methods to influence the completeness of response to self-administered questionnaires (Protocol). Cochrane Database of Systematic Reviews 2007, Issue 2.

32 Routine data sources Useful because they should include data on all participants No need for individuals to respond Potentially useful for Locating participants Obtaining outcome information

33 Outcome information Medical Research Information Service: checks participants’ current status, notifies of any deaths, cancers, and exits from/re-entries to the NHS Summary Care Records: can only be accessed by NHS staff Opt-out allowed Currently exist for about 6 million patients 1.16% have opted out

34 Locating participants NHS Personal Demographics Service contains information on name, address, phone numbers, email etc. Can only be accessed from within the NHS

35 DISCUSSION

36 Keeping track of trial participants What methods do we use? What are the problems? What else can we do? (NHS data systems?)

37 Data collection Post, phone What are the pros and cons? What could we do better? What else could we do? Reminders by text or email? Data collection by email or web? More use of existing data sources?

38 What research could we do? Relatively easy to conduct methodological projects alongside trials (e.g. MINT incentive study) What else could we do?

