Preventing introduction of bias at the bench: from randomizing to experimental results meta-analysis. Malcolm Macleod, Centre for Clinical Brain Sciences, University of Edinburgh


interventions in experimental stroke. O'Collins et al, Ann Neurol 2006

interventions in experimental stroke: tested in focal ischaemia. O'Collins et al, Ann Neurol 2006

interventions in experimental stroke: effective in focal ischaemia. O'Collins et al, Ann Neurol 2006

interventions in experimental stroke: tested in clinical trial. O'Collins et al, Ann Neurol 2006

interventions in experimental stroke: effective in clinical trial. O'Collins et al, Ann Neurol 2006

What's my problem? I want to improve the outcome for my patients with stroke. To get that, I want to conduct high-quality clinical trials of interventions which have a reasonable chance of actually working in humans. But which of the remaining 929 interventions should I choose?

It's not just my problem…

"…you will meet with several observations and experiments which, though communicated for true by candid authors or undistrusted eye-witnesses, or perhaps recommended by your own experience, may, upon further trial, disappoint your expectation, either not at all succeeding, or at least varying much from what you expected." Robert Boyle (1693), Concerning the Unsuccessfulness of Experiments

What is a Valid Experiment? One which describes some biological truth in the system being studied. Internal validity: the extent to which an experiment accurately describes what happened in that model system. Validity can be inferred from the extent of reporting of measures to avoid common biases.

Standards for reporting experiments

8 simple reporting measures
1. The animals used
2. The sample size calculation
3. The inclusion and exclusion criteria
4. Randomization
5. Allocation concealment
6. Reporting of animals excluded from analysis
7. Blinded assessment of outcome
8. Reporting potential conflicts of interest and study funding

Potential sources of bias in animal studies: internal validity
– Selection bias: randomisation
– Performance bias: allocation concealment
– Detection bias: blinded outcome assessment
– Attrition bias: reporting drop-outs / ITT analysis
– False positive report bias: adequate sample sizes
After Crossley et al, 2008; Wacholder, 2004

Internal validity: Dopamine agonists in models of PD (Ferguson et al, in draft)

Internal Validity: Randomisation and blinding in studies of hypothermia in experimental stroke (van der Worp et al, Brain 2007)
Efficacy by blinded outcome assessment: yes 47%, no 39%
Efficacy by randomisation: yes 47%, no 37%

Stem cells in experimental stroke Lees et al, in draft

Internal Validity: Randomisation, allocation concealment and blinding in studies of stem cells in experimental stroke (infarct volume and neurobehavioural score). Lees et al, in draft

Internal Validity NXY-059 Macleod et al, 2008

Internal Validity: False positive reporting bias
The positive predictive value of any test result depends on:
– α (the significance threshold)
– power (1−β)
– the pre-test probability of a positive result
after Wacholder, 2004

Internal Validity: False positive reporting bias
The positive predictive value of any test result depends on:
– α (0.05)
– power (1−β) (0.30)
– the pre-test probability of a positive result (0.25)
Positive predictive value = 0.67, i.e. only 2 out of 3 statistically positive studies are truly positive
after Wacholder, 2004
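
Under the standard formula (after Wacholder, 2004), PPV = power·π / (power·π + α·(1−π)) for pre-test probability π, and a PPV of 0.67 corresponds to α = 0.05, power = 0.30 and π = 0.25. A minimal sketch of this arithmetic (the function name is illustrative):

```python
def ppv(alpha, power, prior):
    """Positive predictive value of a statistically significant result
    (after Wacholder, 2004): true positives / all positives."""
    true_pos = power * prior          # truly effective and detected
    false_pos = alpha * (1 - prior)   # ineffective but "significant"
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.05, 0.30, 0.25), 2))  # -> 0.67: only 2 of 3 positives are real
```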

Chances that data from any given animal will be non-contributory
Number of animals   Power   % animals wasted
4                   18.6%   81.4%
8                   32.3%   67.7%
16                  56.4%   43.6%
32                  85.1%   14.9%
(assumes a simple two-group experiment seeking a 30% reduction in infarct volume, with observed SD 40% of control infarct volume)
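
A normal-approximation power calculation reproduces the power column above, to within rounding, if the animal numbers are read as per-group counts and the standardised effect size is d = 30/40 = 0.75; a minimal sketch (a two-sided α of 0.05 is assumed):

```python
from math import sqrt, erf

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_group(d, n_per_group, z_alpha=1.959964):
    """Approximate power of a two-sided, two-group comparison
    (normal approximation) for standardised effect size d."""
    return phi(d * sqrt(n_per_group / 2.0) - z_alpha)

# 30% reduction in infarct volume with SD 40% of control -> d = 0.75
for n in (4, 8, 16, 32):
    print(n, round(power_two_group(0.75, n), 3))
```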

Chances of wasting an animal

1. Animals The precise species, strain, substrain and source of animals used should be stated. Where applicable (for instance, in studies with genetically modified animals), the generation should also be given, as well as the details of the wild-type control group (for instance littermate, back cross, etc.).

2. Sample size calculation The manuscript should describe how the size of the experiment was planned. If a sample size calculation was performed this should be reported in detail, including the expected difference between groups, the expected variance, the planned analysis method, the desired statistical power and the sample size thus calculated. For parametric data, variance should be reported as 95% confidence limits or standard deviations rather than as the standard error of the mean.
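
The planning described above can be illustrated with the textbook normal-approximation formula, n per group = 2·((z_α/2 + z_β)/d)², where d is the expected difference divided by the expected standard deviation; a minimal sketch with illustrative values (not taken from the slides):

```python
from math import ceil

def n_per_group(d, z_alpha=1.959964, z_beta=0.841621):
    """Approximate animals per group for a two-sided, two-group
    comparison at alpha = 0.05 with 80% power (normal approximation)."""
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# e.g. to detect a 30% reduction with SD 40% of the control mean (d = 0.75):
print(n_per_group(0.75))  # -> 28 animals per group
```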

3. Inclusion and exclusion criteria Where the severity of ischemia has to reach a certain threshold for inclusion (for instance a prespecified drop in perfusion detected with laser-Doppler flowmetry, or the development of neurological impairment of a given severity), this should be stated clearly. Usually, these criteria should be applied before the allocation to experimental groups. If a prespecified lesion size is required for inclusion, this, as well as the corresponding exclusion criteria, should be detailed.

4. Randomization The manuscript should describe the method by which animals were allocated to experimental groups. If this allocation was by randomization, the method of randomization (coin toss, computer-generated randomization schedules) should be stated. Picking animals at random from a cage is unlikely to provide adequate randomization. For comparisons between groups of genetically modified animals (transgenic, knockout), the method of allocation to, for instance, sham operation or focal ischemia should be described.
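
A computer-generated randomization schedule of the kind described above can be as simple as shuffling a balanced list of group labels; a minimal sketch (group names, animal IDs and seed are all illustrative):

```python
import random

def randomization_schedule(animal_ids, groups, seed=1234):
    """Computer-generated schedule: balanced group labels, shuffled.
    A fixed seed keeps the schedule reproducible and auditable."""
    labels = groups * (len(animal_ids) // len(groups))
    random.Random(seed).shuffle(labels)
    return dict(zip(animal_ids, labels))

schedule = randomization_schedule(list(range(1, 9)), ["treatment", "control"])
print(schedule)
```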

5. Allocation concealment The method of allocation concealment should be described. Allocation is concealed if the investigator responsible for the induction, maintenance and reversal of ischemia and for decisions regarding the care of (including the early sacrifice of) experimental animals has no knowledge of the experimental group to which an animal belongs. Allocation concealment might be achieved by having the experimental intervention administered by an independent investigator, or by having an independent investigator prepare a drug individually and label it for each animal according to the randomization schedule as outlined above. These considerations also apply to comparisons between groups of genetically modified animals, and if phenotypic differences (e.g. coat coloring) prevent allocation concealment this should be stated.

6. Reporting of animals excluded from analysis All randomized animals (both overall and by treatment group) should be accounted for in the data presented. Some animals may, for very good reasons, be excluded from analysis, but the circumstances under which this exclusion will occur should be determined in advance, and any exclusion should occur without knowledge of the experimental group to which the animal belongs. The criteria for exclusion and the number of animals excluded should be reported.

7. Blinded assessment of outcome The assessment of outcome is blinded if the investigator responsible for measuring infarct volume, for scoring neurobehavioral outcome or for determining any other outcome measures has no knowledge of the experimental group to which an animal belongs. The method of blinding the assessment of outcome should be described. Where phenotypic differences prevent the blinded assessment of, for instance, neurobehavioral outcome, this should be stated.

8. Reporting potential conflicts of interest and study funding Any relationship that could be perceived to introduce a potential conflict of interest, or the absence of such a relationship, should be disclosed in an acknowledgments section, along with information on study funding and, for instance, the supply of drugs or equipment.

What is a Valid Translational strategy?
– One which considers all available supporting animal data
– One which considers the likelihood of publication bias
– One which tests a drug under circumstances similar to those in which efficacy has been demonstrated in animal models

External Validity: Publication bias for FK506 (Macleod et al, JCBFM 2005)
(Funnel plot: precision against effect size, better/worse)
All outcomes:
– 29 publications
– 109 experiments
– 1596 animals
– Improved outcome by 31% (27–35%)

External Validity: Hypertension in studies of NXY-059 in experimental stroke (Macleod et al, Stroke, in press)
Infarct volume:
– 9 publications
– 29 experiments
– 408 animals
– 44% (35–53%) improvement
Hypertension:
– 7% of animal studies
– 77% of patients in the (neutral) SAINT II study

External Validity: Hypertension in studies of tPA in experimental stroke (Perel et al, BMJ 2007)
Infarct volume:
– 113 publications
– 212 experiments
– 3301 animals
– Improved outcome by 24% (20–28%)
Hypertension:
– 9% of animal studies
– Specifically an exclusion criterion in the (positive) NINDS study
Efficacy: 25% with normal BP, −2% with comorbidity (hypertension)

External Validity: Time to treatment for tPA and tirilazad (Sena et al, Stroke 2007; Perel et al, BMJ 2007)
Both appear to work in animals; tPA works in humans but tirilazad doesn't.
Time to treatment, tPA:
– Animals: median 90 minutes
– Clinical trials: median 90 minutes
Time to treatment, tirilazad:
– Animals: median 10 minutes
– Clinical trials: >3 hrs for >75% of patients

Choose your patients. tPA: effect of time to treatment on efficacy. Perel et al, BMJ 2007; Lancet 2004

How much efficacy is left?
– Reported efficacy: 32%
– Allowing for publication bias: 26%
– Allowing for randomisation: 20%
– Allowing for co-morbidity bias: 5%

Summarising data from animal experiments
Animal studies → systematic review and meta-analysis → clinical trial
– How powerful is the treatment?
– What is the quality of evidence?
– What is the range of evidence?
– Is there evidence of a publication bias?
– What are the conditions of maximum efficacy?

Systematic review and meta-analysis of animal studies
1. Systematic identification of relevant sources of information
2. Identification of individual comparisons
3. Extraction of data from individual comparisons
4. Calculation of effect size for each comparison
5. Calculation of weighted averages for all studies or for selected studies
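
The final two steps, an effect size for each comparison and a weighted average across comparisons, can be sketched as fixed-effect (inverse-variance) pooling; this is a simplified illustration with hypothetical numbers, not the normalised-mean-difference machinery used in the reviews themselves:

```python
def weighted_average(effects, ses):
    """Fixed-effect (inverse-variance) pooled effect size and its SE."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# three hypothetical comparisons: % improvement in outcome, with SEs
pooled, se = weighted_average([30.0, 45.0, 40.0], [5.0, 10.0, 8.0])
print(round(pooled, 1), round(se, 1))  # precise comparisons dominate the pooled estimate
```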

Case study: Hypothermia
Systematic search: 4203 (PubMed), 1579 (EMBASE), 3332 (BIOSIS), 1 hand search, 1 from authors
– 193 full publications: 99 did not meet inclusion and exclusion criteria; 4 duplicate and 2 triplicate publications
– 66 abstracts: 43 published in full; 4 duplicate publications; 4 insufficient data
– 101 publications included

How powerful is the treatment?
Infarct size: 222 comparisons in 3256 animals; improved outcome by 43.5% (95% CI, 40.1–47.0%)
Neurobehavioural outcome: 55 comparisons in 870 animals; improved outcome by 45.7% (95% CI, 36.5–54.5%)

What is the Quality of Evidence?
Median quality score 5 (IQR 4–6)
(Figure: efficacy by blinded outcome assessment, yes/no, and by randomisation, yes/no)

What is the Range of Evidence?

Is there evidence of a publication bias?
– Funnel plotting: perhaps
– Egger regression: yes
– METATRIM (trim and fill): yes
Infarct volume: original estimate 43.5% (95% CI, 40.1–47.0%); adjusted estimate 35.4%
~20 missing studies (of 101)
8.1% overstatement of efficacy
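
Of these approaches, Egger regression is the simplest to sketch: regress the standardised effect (effect/SE) on precision (1/SE); an intercept clearly different from zero suggests funnel-plot asymmetry. A minimal illustration on synthetic, unbiased data, where the intercept comes out at zero:

```python
def egger_regression(effects, ses):
    """OLS of (effect/SE) on (1/SE); returns (intercept, slope).
    A non-zero intercept suggests small-study (publication) bias."""
    x = [1.0 / se for se in ses]
    y = [e / se for e, se in zip(effects, ses)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

# five synthetic studies, all estimating a true effect of 0.5, no bias
intercept, slope = egger_regression([0.5] * 5, [0.1, 0.2, 0.3, 0.4, 0.5])
print(intercept, slope)  # intercept ~0, slope ~0.5
```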

What are the conditions of maximum efficacy?
– Efficacy stable to 6 hours
– No substantial effect of duration of treatment
– Efficacy maintained when outcome determined at later timepoints
– No substantial diminution of efficacy in the face of hypertension
– Dose-response relationship with temperature

Conclusions
– Animal experiments modelling stroke and other diseases are susceptible to bias
– Many of these sources of bias can be eliminated through good experimental design
– Systematic review and meta-analysis can illustrate where some of these problems arise
– Systematic review and meta-analysis can also provide a system for the evaluation of interventions being considered for clinical trial

Resources and acknowledgements Chief Scientist Office, Scotland Emily Sena, Evie Ferguson, Jen Lees, Hanna Vesterinen David Howells, Bart van der Worp, Uli Dirnagl, Philip Bath