
Sources of Bias in Randomised Controlled Trials

REMEMBER Randomised Trials are the BEST way of establishing effectiveness.

All RCTs are NOT the same. Although the RCT is rightly regarded by the cognoscenti as the premier research method, some trials are better than others. In this lecture we will look at sources of bias in trials and how they can be avoided.

Selection Bias - A reminder Selection bias is one of the main threats to the internal validity of an experiment. It occurs when participants are SELECTED for an intervention on the basis of a variable that is associated with outcome. Randomisation and similar methods abolish selection bias.

After Randomisation Once we have randomised participants we eliminate selection bias, but the validity of the experiment can still be threatened by other forms of bias, which we must guard against.

Forms of Bias Subversion bias; technical bias; attrition bias; consent bias; ascertainment bias; dilution bias; recruitment bias.

Bias (cont) Resentful demoralisation; delay bias; chance bias; the Hawthorne effect; analytical bias.

Subversion Bias Subversion bias occurs when a researcher or clinician manipulates participant recruitment so that the groups formed at baseline are NOT equivalent. Anecdotal or qualitative evidence (i.e. gossip) suggests that this is a widespread phenomenon, and statistical analyses indicate that it has occurred widely.

Subversion - qualitative evidence Schulz has described, anecdotally, a number of incidents of researchers subverting allocation by holding sealed envelopes up to powerful lights. Researchers have also confessed to breaking open filing cabinets to obtain the randomisation code. Schulz JAMA 1995;274:1456.

Quantitative Evidence Trials with adequately concealed allocation show different effect sizes from poorly concealed trials, which would not happen if allocation were never subverted. Moreover, some trials using simple randomisation report baseline groups that are too similar for the balance to have occurred by chance.

Poor concealment Schulz et al. examined 250 RCTs and classified their allocation concealment as adequate (where subversion was difficult), unclear, or inadequate (where subversion was able to take place). They found that badly concealed allocation led to increased effect sizes – suggesting CHEATING by researchers.

Comparison of concealment Schulz et al. JAMA 1995;273:408.

Small VS Large Trials Small trials tend to give greater effect sizes than large trials; this shouldn't happen. Kjaergard et al. showed it was due to poor allocation concealment in small trials: when trials were grouped by allocation method, 'secure' allocation reduced effect sizes by 51%. Kjaergard et al. Ann Intern Med 2001;135:982.

Case Study Subversion is rarely reported for individual studies. One study where it was reported was a large, multicentre surgical trial. Participants were being randomised at five centres using sealed envelopes.

Case study cont Subversion was detected and the trial changed to a telephone allocation system.

Case-study (cont) After several hundred participants had been allocated, the study statistician noticed that there was an imbalance in age, occurring in three of the five centres. Independently, three clinical researchers were subverting the allocation.
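The imbalance was presumably spotted by comparing baseline summaries between arms; a minimal sketch of such a check, using invented ages (the trial's actual data are not given in the slides), is:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for the difference in mean age between two arms."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se

# Hypothetical ages -- invented purely to illustrate the check
surgical = [68, 71, 74, 70, 73, 75, 69, 72, 76, 74]
medical = [61, 63, 60, 65, 62, 64, 59, 66, 63, 61]

t = welch_t(surgical, medical)
# a |t| far beyond ~2 flags a baseline imbalance worth investigating
print(f"t = {t:.1f}")
```

Running such a check per centre, as recruitment proceeds, is what allowed the statistician to localise the problem to three of the five centres.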

Mean ages of groups

Example of Subversion

Using Telephone Allocation

Subversion - summary Appears to be widespread. Secure allocation usually prevents this form of bias. Need not be too expensive. Essential to prevent cheating.

Secure allocation Can be achieved using telephone allocation from a dedicated unit. Can be achieved using independent person to undertake allocation.

Technical Bias This occurs when the allocation system breaks down, often due to a computer fault. A well-known example is the COMET I trial (COMET II was done because COMET I suffered this bias).

COMET 1 A trial of two types of epidural anaesthetic for women in labour. The trial was using MINIMISATION via a computer programme, with the groups minimised on the mother's age and her ethnicity. The programme had a fault. COMET Lancet 2001;358:19.
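The slides do not show COMET's actual program; as an illustration only, minimisation in the Pocock–Simon style can be sketched as follows (the arm labels and the 80% allocation probability are assumptions, not COMET's settings):

```python
import random

def imbalance(arm, patient, allocated, factors):
    """Count earlier patients in `arm` sharing each of the new patient's factor levels."""
    return sum(
        1
        for prev_arm, prev in allocated
        if prev_arm == arm
        for f in factors
        if prev[f] == patient[f]
    )

def minimise(patient, allocated, factors, arms=("A", "B"), p=0.8, rng=random):
    """Allocate (with probability p) the arm that minimises factor imbalance."""
    scores = {arm: imbalance(arm, patient, allocated, factors) for arm in arms}
    best = min(scores.values())
    candidates = [arm for arm in arms if scores[arm] == best]
    if len(candidates) == len(arms):
        return rng.choice(arms)  # arms perfectly balanced: pure randomisation
    preferred = candidates[0]
    other = next(arm for arm in arms if scores[arm] != best)
    return preferred if rng.random() < p else other
```

A single bug in a routine like this silently unbalances the groups, which is why the COMET lesson below, checking balance as you go, matters.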

COMET 1 – Technical Bias

COMET II This new study had to be undertaken, and another 1000 women recruited and randomised. LESSON – always check the balance of your groups as you go along if computer allocation is being used.

Attrition Bias Most trials lose participants after randomisation. This can cause bias, particularly if attrition differs between groups. If a treatment has side-effects, drop-out may be higher among the less well participants, which can make a treatment appear to be effective when it is not.

Attrition Bias We can avoid some of the problems of attrition bias by using intention-to-treat analysis, where we keep as many of the patients in the analysis as possible, even if they are no longer 'on treatment'.
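A minimal sketch, with invented outcome data, of how an intention-to-treat analysis differs from a per-protocol one:

```python
# Each record: (randomised arm, adhered to treatment?, outcome: 1 = success).
# The data are invented purely to illustrate the two analyses.
patients = [
    ("treatment", True, 1), ("treatment", True, 1),
    ("treatment", False, 0), ("treatment", False, 0),
    ("control", True, 1), ("control", True, 0),
    ("control", True, 0), ("control", True, 0),
]

def success_rate(records):
    return sum(outcome for _, _, outcome in records) / len(records)

# Intention to treat: everyone counted in the arm they were randomised to
itt_treatment = success_rate([p for p in patients if p[0] == "treatment"])

# Per protocol: non-adherers (often the sicker patients) are dropped,
# which here inflates the apparent success of the treatment
pp_treatment = success_rate([p for p in patients if p[0] == "treatment" and p[1]])

print(itt_treatment, pp_treatment)
```

In this toy example the per-protocol success rate doubles the intention-to-treat one purely by discarding the non-adherers.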

Sensitivity analysis Trial results can be subjected to a sensitivity analysis, whereby those who drop out of one arm are assumed to have the worst possible outcome, whilst those who drop out of the parallel arm are assumed to have the best possible outcome. If the findings are unchanged, we are reassured.
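For a binary outcome, the best- and worst-case bounds for one arm can be sketched as follows (the counts are illustrative assumptions):

```python
def sensitivity_bounds(successes, followed_up, randomised):
    """Best- and worst-case success rates when drop-outs' outcomes are unknown."""
    dropouts = randomised - followed_up
    worst = successes / randomised               # assume every drop-out failed
    best = (successes + dropouts) / randomised   # assume every drop-out succeeded
    return worst, best

# Hypothetical arm: 60 successes among 80 followed up, of 100 randomised
worst, best = sensitivity_bounds(60, 80, 100)
print(worst, best)
```

If the treatment arm's worst case still beats the control arm's best case, the trial's conclusion is robust to attrition.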

Consent Bias This occurs when consent to take part in the trial is sought AFTER randomisation; the danger is most frequent in cluster trials. For example, Graham et al. randomised schools to a teaching package for emergency contraception, and more children took part in the intervention arm than in the control arm. Graham et al. BMJ 2002;324:1179.

Consent bias?

Consent Bias? Because more children consented in the intervention group, we would expect its measured knowledge to be lower, as it includes children less likely to know. Conversely, the control group shows a volunteer or consent effect, with only the most knowledgeable agreeing to take part.

Ascertainment Bias This occurs when the person assessing or reporting the outcome can be biased. It is a particular problem when outcomes are not 'objective' and there is uncertainty as to whether an event has occurred.

Example. A group of students' essays was randomly assigned photographs purporting to show the student. The photos were of people judged to be 'attractive', 'average', or 'below average'. The average mark was significantly HIGHER for the average-looking students. Why? Markers were biased into marking higher for students whom they believed were average-looking (like themselves).

Another example A homeopathic dilution of histamine was shown in an RCT of cell cultures to have significant effects on cell motility; ascertainment was not blind. The study was repeated with assessors blind to which petri dish contained distilled water and which the homeopathic dilution of histamine. The effect, like snow in the Arabian Desert, disappeared.

Dilution Bias This occurs when the intervention or control group gets the opposite treatment, and it affects all trials where there is non-adherence to the intervention. For example, in a trial of calcium and vitamin D, about 4% of the controls were taking the treatment and 35% of the intervention group stopped taking theirs. This will 'dilute' any apparent treatment effect.
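As a rough, hedged sketch (assuming a simple all-or-nothing adherence model, which is an assumption and not the trial's actual analysis), the slide's figures imply:

```python
def diluted_effect(true_effect, control_crossover, intervention_nonadherence):
    """Observed effect when some controls get treated and some treated patients stop."""
    # Net proportion effectively treated: adherent intervention patients
    # minus controls who obtained the treatment anyway
    effectively_treated = (1 - intervention_nonadherence) - control_crossover
    return true_effect * effectively_treated

# 4% of controls taking the treatment, 35% of the intervention arm stopping:
# a true effect of 10 units shrinks to about 6.1 observed
observed = diluted_effect(10.0, 0.04, 0.35)
print(observed)
```

Even modest non-adherence on both sides can strip away a large share of a real treatment effect.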

Effect of dilution bias

Sources of dilution In the calcium and vitamin D trial, controls bought calcium supplements and intervention patients stopped taking them. In a hip protector trial, control patients were MAKING their own padded knickers from bubble wrap while intervention patients were not wearing theirs.

Dilution Bias This can be partly prevented by refusing the controls access to the experimental treatment. It will always be a problem where controls can actively seek the treatment elsewhere.

Resentful Demoralisation This can occur when participants are randomised to a treatment they do not want. They may then report their outcomes badly in 'revenge', which can lead to bias.

Resentful Demoralisation One solution is to use a patient preference design, where only participants who are 'indifferent' to the treatment they receive are randomised. This should remove its effects.

Hawthorne Effect This is an effect that occurs through being part of the study rather than through the treatment. Interventions that involve more TLC than the control could show an effect due to the TLC rather than the drug or surgical procedure. Placebos largely eliminate this; otherwise the TLC should be given to the controls as well.

Delay bias This can occur if there is a delay between randomisation and the intervention. In the GRIT trial of early delivery, some women allocated to immediate delivery were delayed. This will dilute the effects of treatment.

Delay bias Similarly, in the calcium and vitamin D trial there was a delay of months between allocation and receipt of treatment. This can sometimes be dealt with by starting the analysis for active and control groups from the time treatment was received.

Chance Bias By chance, groups can be uneven in important variables. This can be reduced by stratification or, possibly better, by using ANCOVA. Stratification, of course, can itself lead to TECHNICAL or SUBVERSION bias.

Analytical Bias Once a trial has been completed and the data gathered in, it is still possible to arrive at the wrong conclusions by analysing the data incorrectly. Most IMPORTANT is intention to treat (ITT). Inappropriate sub-group analysis is also a common practice.

Intention To Treat The main analysis of the data must be by groups as randomised. Per-protocol or active-treatment analysis can lead to a biased result: patients not taking the full treatment are usually quite different from those who are, and restricting the analysis to them can lead to bias.

Sub-Group Analyses Once the main analysis has been completed it is tempting to look to see if the effect differs by group. Is treatment more or less effective in women? Is it better or worse among older people? Is treatment better among people at greater risk?

Sub-Groups All of these are legitimate questions. The problem is that the more subgroups one looks at, the greater the chance of finding a spurious effect. Sample size estimations and statistical tests are based on one comparison only.

Sub-Group example. In a large RCT of aspirin for myocardial infarction, a sub-group analysis showed that in people with the star signs Gemini and Libra aspirin was INEFFECTIVE. This is complete NONSENSE! It shows the dangers of subgroup analyses. Lancet 1988;ii:

More Seriously Sub-group analyses led to: the wrong finding that tamoxifen was ineffective among women < 50 years; that streptokinase was ineffective > 6 hours after MI; that aspirin for secondary prevention in women is ineffective; that antihypertensive treatment for primary prevention in women is ineffective; that beta-blockers are ineffective in older people. And so on……

Sub groups To avoid spurious findings, these analyses should be pre-specified and based on a reasonable hypothesis. Pre-specification is important to avoid data dredging: if you torture the data enough, it will confess.

Cluster Trial Analysis Cluster trials (where groups of individuals are randomised) need special statistical analysis. Standard methods (e.g. the two-sample t-test) will not be appropriate. Cluster trials are often inappropriately analysed, which leads to spurious precision.
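The spurious precision can be quantified with the standard design effect, 1 + (m − 1) × ICC, where m is the cluster size and ICC the intra-cluster correlation (the figures below are illustrative assumptions):

```python
def design_effect(cluster_size, icc):
    """Variance inflation from cluster randomisation: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """What n individuals in clusters are really 'worth' if analysed correctly."""
    return n / design_effect(cluster_size, icc)

# e.g. 1000 patients in clusters of 20 with a modest ICC of 0.05
print(design_effect(20, 0.05), round(effective_sample_size(1000, 20, 0.05)))
```

Analysing those 1000 patients as if they were independent individuals roughly doubles the apparent precision of the trial.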

Example The Edinburgh breast screening trial randomised GP practices to offer breast screening or not. The design was clustered but the analysis was by individual (and it still didn't manage to find a significant effect).

Summary Despite the RCT being the BEST research method, unless expertly used it can lead to biased results. Care must be taken to avoid as many biases as possible.