
Introducing a revised tool for assessing risk of bias in randomized trials (RoB 2.0) Jelena Savović School of Social and Community Medicine, University of Bristol, UK NIHR CLAHRC West at University Hospitals Bristol NHS Foundation Trust, Bristol, UK With special thanks to Julian Higgins, Matt Page, Jonathan Sterne, Roy Elbers, Barney Reeves, Asbjørn Hróbjartsson, Isabelle Boutron and all RoB 2.0 collaborators

Funding: The revised tool for randomized trials (RoB 2.0) is supported by the UK Medical Research Council Network of Hubs for Trials Methodology Research (MR/L004933/1-N61). My time on the project was supported by the NIHR CLAHRC West at University Hospitals Bristol NHS Foundation Trust. The views expressed are those of the author and the RoB 2.0 development group, and not necessarily those of the MRC, NIHR or the Department of Health.

RoB 2.0: contributors Core group: Jelena Savović, Julian Higgins, Matthew Page, Asbjørn Hróbjartsson, Isabelle Boutron, Barney Reeves, Roy Elbers, Jonathan Sterne Working Group members: Doug Altman, Natalie Blencowe, Mike Campbell, Christopher Cates, Rachel Churchill, Mark Corbett, Nicky Cullum, Francois Curtin, Amy Drahota, Sandra Eldridge, Jonathan Emberson, Bruno Giraudeau, Jeremy Grimshaw, Sharea Ijaz, Sally Hopewell, Asbjørn Hróbjartsson, Peter Jüni, Jamie Kirkham, Toby Lasserson, Tianjing Li, Stephen Senn, Sasha Shepperd, Ian Shrier, Nandi Siegfried, Lesley Stewart, Penny Whiting And: Henning Keinke Andersen, Mike Clarke, Jon Deeks, Geraldine MacDonald, Richard Morris, Mona Nasser, Nishith Patel, Jani Ruotsalainen, Holger Schünemann, Jayne Tierney

Background: RCTs provide the best evidence for intervention effectiveness, but individual trials don't always provide accurate estimates of effectiveness. Syntheses of multiple trials of the same interventions (systematic reviews and meta-analyses) are usually needed to inform practice. Assessment of the potential for bias is essential when including RCTs in such syntheses.

BMJ 2011; 343: d5928

Current Cochrane tool for risk of bias in randomized trials: The Cochrane RoB tool is very widely used (Jørgensen 2016): 100 out of 100 Cochrane reviews from 2014 (100%), 31 out of 81 non-Cochrane reviews (38%), and >2700 citations from non-Cochrane sources. Meanwhile, the scientific debate on risk of bias has continued, and the tool has been evaluated in several ways: user experience via a survey and focus groups (Savović 2014), inter-rater agreement studies (e.g. Hartling 2009 & 2013), and actual use in reviews and published comments (Jørgensen 2016).

Some issues raised with the existing tool: it is used simplistically; it is used inconsistently (domains added or removed); agreement rates are modest; RoB judgements are difficult for some domains; there are challenges with unblinded trials; it is not well suited to cross-over trials or cluster-randomized trials; and it is not well set up to assess overall risk of bias.

RoB 1.0 domain → RoB 2.0 domain:
Random sequence generation (selection bias) → Bias arising from the randomization process
Allocation concealment (selection bias) → Bias arising from the randomization process
Blinding of participants and personnel (performance bias) → Bias due to deviations from intended interventions
Incomplete outcome data (attrition bias) → Bias due to missing outcome data
Blinding of outcome assessment (detection bias) → Bias in measurement of the outcome
Selective reporting (reporting bias) → Bias in selection of the reported result
Other bias → N/A (no 'Other bias' domain in RoB 2.0)
(no equivalent) → Overall bias
In RoB 2.0 the assessment is made for each specific result, all domains are mandatory, and an overall bias judgement is reached for that study result. Funding and vested interests will be addressed, not within this tool but within the wider framework, by a working group led by Asbjørn Hróbjartsson and Isabelle Boutron.

1. Five domains: bias arising from the randomization process; bias due to deviations from intended interventions; bias due to missing outcome data; bias in measurement of outcomes; and bias in selection of the reported result.

Components of an assessment: 1. Five domains; 2. Signalling questions; 3. Free text descriptions; 4. Risk of bias judgements; (5. Predict direction of bias); 6. Overall risk of bias judgement.
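To make this six-part structure concrete, here is a minimal sketch of how one assessment record might be represented in software. The class and field names below are illustrative assumptions, not part of the tool itself.

```python
from dataclasses import dataclass, field

@dataclass
class DomainAssessment:
    """One of the five bias domains, assessed for a specific trial result."""
    domain: str                              # e.g. "Bias arising from the randomization process"
    signalling_answers: dict = field(default_factory=dict)  # question id -> "Y"/"PY"/"PN"/"N"/"NI"
    free_text: str = ""                      # supporting quotations and rationale
    judgement: str = "Some concerns"         # "Low risk" / "Some concerns" / "High risk"
    predicted_direction: str = ""            # optional: predicted direction of bias

@dataclass
class ResultAssessment:
    """RoB 2.0 assessments are made per result, not per trial as a whole."""
    result_id: str
    domains: list = field(default_factory=list)   # five DomainAssessment objects
    overall_judgement: str = "Some concerns"      # derived from the domain judgements
```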

Signalling questions and judgements: Signalling questions are introduced to make the tool easier to use (and more transparent). Response options are 'Yes', 'Probably yes', 'Probably no', 'No' and 'No information'. Risk of bias judgements ('Low risk of bias', 'Some concerns', 'High risk of bias') follow from the answers to the signalling questions, and can be over-ridden by the user. A 'High risk of bias' judgement in one domain puts the whole study at high risk of bias.

Algorithms for reaching domain-level judgements. [Flowchart: the slide shows a decision tree in which the answers (Y/PY, N/PN, NI) to signalling questions 1.1 (Was the allocation sequence random?), 1.2 (Was the allocation sequence concealed?) and 1.3 (Were there baseline imbalances that suggest a problem with randomization?) lead to a domain judgement of Low risk, Some concerns or High risk.]
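Because the flowchart itself cannot be reproduced here, the sketch below shows how such an algorithm might be encoded in software. It is a minimal sketch only: the decision rules are simplified for illustration and are not the published RoB 2.0 algorithm (the authoritative algorithms are available at riskofbias.info), and the function name is my own assumption.

```python
# Illustrative only: a simplified mapping from signalling-question answers
# (Y/PY/PN/N/NI) to a domain-level judgement for "bias arising from the
# randomization process". Not the published RoB 2.0 algorithm.

def randomization_domain_judgement(seq_random: str, concealed: str, baseline_imbalance: str) -> str:
    """Return 'Low risk', 'Some concerns' or 'High risk' under simplified rules."""
    if concealed in ("N", "PN"):
        return "High risk"          # allocation sequence apparently not concealed
    if concealed == "NI":
        return "Some concerns"      # no information on concealment
    # Concealment adequate (Y/PY): consider sequence generation and baseline balance
    if seq_random in ("Y", "PY") and baseline_imbalance in ("N", "PN"):
        return "Low risk"
    return "Some concerns"

# Example: random sequence, concealed allocation, no suspicious baseline imbalance
print(randomization_domain_judgement("Y", "Y", "N"))   # -> Low risk
```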

Overall risk of bias judgement:
Low risk of bias: the study is judged to be at low risk of bias for all domains for this result.
Some concerns: the study is judged to raise some concerns in at least one domain for this result.
High risk of bias: the study is judged to be at high risk of bias in at least one domain for this result, OR the study is judged to have some concerns for multiple domains in a way that substantially lowers confidence in the result.
The overall risk of bias judgement can be completed automatically (and over-ridden by the user).
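Because the overall judgement follows mechanically from the domain-level judgements, it lends itself to automation, as the slide notes. A minimal sketch, assuming the domain judgements are held as a list of strings; the numeric threshold for how many 'some concerns' substantially lower confidence in the result is an illustrative assumption, not part of the tool:

```python
def overall_judgement(domain_judgements: list, concerns_threshold: int = 3) -> str:
    """Aggregate domain-level judgements into an overall judgement for one result.

    Rules from the slide: low risk only if every domain is low risk; high risk if
    any domain is high risk, or if 'some concerns' in several domains substantially
    lower confidence in the result (the threshold here is illustrative and would be
    over-ridable by the user).
    """
    if any(j == "High risk" for j in domain_judgements):
        return "High risk of bias"
    concerns = sum(j == "Some concerns" for j in domain_judgements)
    if concerns >= concerns_threshold:
        return "High risk of bias"
    if concerns > 0:
        return "Some concerns"
    return "Low risk of bias"

print(overall_judgement(["Low risk", "Some concerns", "Low risk", "Low risk", "Low risk"]))
# -> Some concerns
```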

The effect of interest: The current tool has no provisions for trials where blinding is not feasible (other than to classify them as not blinded and hence at high risk of bias). Issues of performance bias are very different for "ITT effects" and "per-protocol" effects, yet this distinction is poorly addressed in the current tool. "ITT effect": the effect of assignment to intervention, e.g. the question of interest to a policy maker about whether to introduce a screening programme. "Per-protocol effect": the effect of starting and adhering to intervention, e.g. the question of interest to an individual about whether to attend screening. These are not to be confused with ITT or per-protocol analyses. See the proposals on the following slides.

Effect of interest: Deviations from the intended intervention are not important when interest is in the effect of assignment to intervention (e.g. some people don't respond to invitations to be screened), providing these deviations reflect routine care rather than behaviour driven by expectations of a difference between intervention and comparator. But deviations such as poor adherence, poor implementation and co-interventions may lead to bias when interest is in the effect of starting and adhering to intervention. We therefore have different tools for these two effects of interest.

Highlights of changes to bias domains

Bias arising from the randomization process. Signalling questions: 1.1 Was the allocation sequence random? 1.2 Was the allocation sequence concealed until participants were recruited and assigned to interventions? 1.3 Were there baseline imbalances that suggest a problem with the randomization process? (Questions 1.1 and 1.2 cover randomization methods; question 1.3 covers additional evidence of problems.)

Bias due to deviations from intended interventions (effect of assignment to intervention). Signalling questions: 2.1 Were participants aware of their assigned intervention during the trial? 2.2 Were carers and trial personnel aware of participants' assigned intervention during the trial? 2.3 If Y/PY/NI to 2.1 or 2.2: Were there deviations from the intended intervention beyond what would be expected in usual practice? 2.4 If Y/PY to 2.3: Were these deviations from intended intervention unbalanced between groups and likely to have affected the outcome? 2.5 Were any participants analysed in a group different from the one to which they were assigned? 2.6 If Y/PY/NI to 2.5: Was there potential for a substantial impact (on the estimated effect of intervention) of analysing participants in the wrong group? (Questions 2.1 and 2.2 cover blinding; 2.3 and 2.4 whether deviations reflect usual practice; 2.5 and 2.6 the first principle of ITT analysis.)

Bias due to deviations from intended interventions (effect of starting and adhering to intervention). Signalling questions: 2.1 Were participants aware of their assigned intervention during the trial? 2.2 Were carers and trial personnel aware of participants' assigned intervention during the trial? 2.3 If Y/PY/NI to 2.1 or 2.2: Were important co-interventions balanced across intervention groups? 2.4 Was the intervention implemented successfully? 2.5 Did study participants adhere to the assigned intervention regimen? 2.6 If N/PN/NI to 2.3, 2.4 or 2.5: Was an appropriate analysis used to estimate the effect of starting and adhering to the intervention? (Questions 2.1 and 2.2 cover blinding; 2.3 to 2.5 specific deviations; 2.6 whether these can be overcome by analysis.)
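The conditional structure of these signalling questions ("If Y/PY/NI to 2.1 or 2.2...", "If N/PN/NI to 2.3, 2.4 or 2.5...") is straightforward to encode in software. A minimal sketch, assuming answers are stored in a dictionary keyed by question number; the question numbers and conditions are taken from the slide above, but the encoding itself is my own illustration:

```python
# Illustrative encoding of the conditional signalling questions for domain 2
# (effect of starting and adhering to intervention). The applicability rules
# below mirror the "If ... to ..." conditions stated on the slide.

CONDITIONAL_QUESTIONS = {
    "2.3": (("2.1", "2.2"), ("Y", "PY", "NI")),        # 2.3 asked if 2.1 or 2.2 is Y/PY/NI
    "2.6": (("2.3", "2.4", "2.5"), ("N", "PN", "NI")),  # 2.6 asked if 2.3, 2.4 or 2.5 is N/PN/NI
}

def question_applies(question_id: str, answers: dict) -> bool:
    """Return True if a conditional question should be asked, given earlier answers."""
    if question_id not in CONDITIONAL_QUESTIONS:
        return True  # unconditional question
    prior_questions, triggering_answers = CONDITIONAL_QUESTIONS[question_id]
    return any(answers.get(q) in triggering_answers for q in prior_questions)

print(question_applies("2.6", {"2.3": "Y", "2.4": "Y", "2.5": "PN"}))  # -> True
```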

Bias in selection of the reported result. Are the reported outcome data likely to have been selected, on the basis of the results, from... 5.1 ...multiple outcome measurements (e.g. scales, definitions, time points) within the outcome domain? (selective outcome reporting) 5.2 ...multiple analyses of the data? (selective analysis reporting)

riskofbias.info

Cluster-randomized trials and cross-over trials. For cluster-randomized trials, the key issue is recruitment / identification of participants after interventions have been allocated to clusters; missing data also need consideration at both the cluster and the individual level. For cross-over trials (AB/BA design), the key issue is carry-over of effect from the 1st period to the 2nd period; period effects and selective reporting of 1st-period data are also considerations.

Concluding remarks: We believe RoB 2.0 offers considerable advantages over the existing tool. Once programmed into software, we expect it will be easier to use than the first one. We are extremely grateful to all those who have contributed to the development of RoB 2.0. RoB 2.0 is available at www.riskofbias.info.