Preventing introduction of bias at the bench: from randomizing to experimental results meta-analysis Malcolm Macleod Centre for Clinical Brain Sciences,


1 Preventing introduction of bias at the bench: from randomizing to experimental results meta-analysis Malcolm Macleod Centre for Clinical Brain Sciences, University of Edinburgh

2 1026 interventions in experimental stroke. O'Collins et al, Ann Neurol 2006

3 1026 interventions in experimental stroke: tested in focal ischaemia [figure value: 603]. O'Collins et al, Ann Neurol 2006

4 1026 interventions in experimental stroke: effective in focal ischaemia [figure values: 883, 374]. O'Collins et al, Ann Neurol 2006

5 1026 interventions in experimental stroke: tested in clinical trial [figure values: 883, 550, 9718]. O'Collins et al, Ann Neurol 2006

6 1026 interventions in experimental stroke: effective in clinical trial [figure values: 883, 550, 9717, 13]. O'Collins et al, Ann Neurol 2006

7 What's my problem? I want to improve the outcome for my patients with stroke. To get that, I want to conduct high-quality clinical trials of interventions which have a reasonable chance of actually working in humans. But which of the remaining 929 interventions should I choose?

8 It's not just my problem…

9 "…you will meet with several observations and experiments which, though communicated for true by candid authors or undistrusted eye-witnesses, or perhaps recommended by your own experience, may, upon further trial, disappoint your expectation, either not at all succeeding, or at least varying much from what you expected." Robert Boyle (1693), Concerning the Unsuccessfulness of Experiments

10 What is a Valid Experiment? One which describes some biological truth in the system being studied. Internal validity: the extent to which an experiment accurately describes what happened in that model system. Validity can be inferred from the extent of reporting of measures to avoid common biases.

11 Standards for reporting experiments

12 8 simple reporting measures
1. The animals used
2. The sample size calculation
3. The inclusion and exclusion criteria
4. Randomization
5. Allocation concealment
6. Reporting of animals excluded from analysis
7. Blinded assessment of outcome
8. Reporting potential conflicts of interest and study funding

13 Potential sources of bias in animal studies (internal validity)
Problem                     Solution
Selection bias              Randomisation
Performance bias            Allocation concealment
Detection bias              Blinded outcome assessment
Attrition bias              Reporting drop-outs / ITT analysis
False positive report bias  Adequate sample sizes
After Crossley et al, 2008; Wacholder, 2004

14 Internal validity: dopamine agonists in models of PD. Ferguson et al, in draft [slides 14-16 show figures only]

17 Internal validity: randomisation and blinding in studies of hypothermia in experimental stroke (van der Worp et al, Brain 2007)
Blinded outcome assessment: yes 47%, no 39% efficacy
Randomisation: yes 47%, no 37% efficacy

18 Stem cells in experimental stroke Lees et al, in draft

19 Internal validity: randomisation, allocation concealment and blinding in studies of stem cells in experimental stroke [figures: infarct volume; neurobehavioural score]. Lees et al, in draft

20 Internal Validity NXY-059 Macleod et al, 2008

21 Internal validity: false positive reporting bias
The positive predictive value of any test result depends on:
– α (the significance threshold)
– power (1-β)
– the pre-test probability of a positive result
After Wacholder, 2004

22 Internal validity: false positive reporting bias
The positive predictive value of any test result depends on:
– α (0.05)
– power (1-β) (0.30)
– the pre-test probability of a positive result (0.50)
Positive predictive value = 0.67, i.e. only 2 out of 3 statistically positive studies are truly positive
After Wacholder, 2004
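The dependence described on this slide can be sketched with the textbook Bayesian formula for the positive predictive value (the complement of Wacholder's false positive report probability). This is a minimal illustration, not the exact calculation behind the slide's quoted figure, which depends on how the inputs are parameterised:

```python
def ppv(alpha, power, prior):
    """Positive predictive value of a statistically significant result.

    alpha: significance threshold (false positive rate)
    power: 1 - beta, probability of detecting a true effect
    prior: pre-test probability that the tested hypothesis is true
    """
    true_pos = power * prior          # truly effective and detected
    false_pos = alpha * (1 - prior)   # ineffective but "significant"
    return true_pos / (true_pos + false_pos)

# Low power erodes the credibility of positive results:
# at alpha = 0.05 and a 50% pre-test probability, dropping power
# from 0.80 to 0.30 lowers the PPV of a "positive" study.
print(ppv(0.05, 0.80, 0.5))
print(ppv(0.05, 0.30, 0.5))
```

The qualitative point survives any parameterisation: the lower the power and the pre-test probability, the less a statistically significant result is worth.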

23 Chances that data from any given animal will be non-contributory
Number of animals   Power   % animals wasted
4                   18.6%   81.4%
8                   32.3%   67.7%
16                  56.4%   43.6%
32                  85.1%   14.9%
(assumes a simple two-group experiment seeking a 30% reduction in infarct volume, observed SD 40% of control infarct volume)
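The power figures in this table can be approximately reproduced with a normal-approximation two-sample power calculation (a sketch; it assumes the stated standardised effect of 30/40 = 0.75 SD, two-sided α = 0.05, and reads "number of animals" as animals per group):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_group_power(n_per_group, effect_sd=0.75, alpha=0.05):
    """Approximate power of a two-sample comparison (normal approximation).

    effect_sd: standardised effect size; a 30% reduction in infarct
    volume with SD 40% of the control mean gives 0.30 / 0.40 = 0.75.
    """
    z_crit = 1.959964  # two-sided 5% critical value
    ncp = effect_sd * math.sqrt(n_per_group / 2.0)  # non-centrality
    return normal_cdf(ncp - z_crit)

for n in (4, 8, 16, 32):
    p = two_group_power(n)
    print(f"n={n:2d} per group: power={p:.1%}, wasted={1 - p:.1%}")
```

Under these assumptions the computed powers match the table to within rounding, which is the point of the slide: at typical sample sizes most animals contribute to an experiment that was never likely to detect the effect sought.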

24 Chances of wasting an animal

25 1. Animals The precise species, strain, substrain and source of animals used should be stated. Where applicable (for instance, in studies with genetically modified animals), the generation should also be given, as well as the details of the wild-type control group (for instance littermate, back cross, etc.).

26 2. Sample size calculation The manuscript should describe how the size of the experiment was planned. If a sample size calculation was performed this should be reported in detail, including the expected difference between groups, the expected variance, the planned analysis method, the desired statistical power and the sample size thus calculated. For parametric data, variance should be reported as 95% confidence limits or standard deviations rather than as the standard error of the mean.

27 3. Inclusion and exclusion criteria Where the severity of ischemia has to reach a certain threshold for inclusion (for instance a prespecified drop in perfusion detected with laser-Doppler flowmetry, or the development of neurological impairment of a given severity), this should be stated clearly. Usually, these criteria should be applied before the allocation to experimental groups. If a prespecified lesion size is required for inclusion, this, as well as the corresponding exclusion criteria, should be detailed.

28 4. Randomization The manuscript should describe the method by which animals were allocated to experimental groups. If this allocation was by randomization, the method of randomization (coin toss, computer-generated randomization schedules) should be stated. Picking animals at random from a cage is unlikely to provide adequate randomization. For comparisons between groups of genetically modified animals (transgenic, knockout), the method of allocation to for instance sham operation or focal ischemia should be described.

29 5. Allocation concealment The method of allocation concealment should be described. Allocation is concealed if the investigator responsible for the induction, maintenance and reversal of ischemia and for decisions regarding the care of (including the early sacrifice of) experimental animals has no knowledge of the experimental group to which an animal belongs. Allocation concealment might be achieved by having the experimental intervention administered by an independent investigator, or by having an independent investigator prepare a drug individually and label it for each animal according to the randomization schedule as outlined above. These considerations also apply to comparisons between groups of genetically modified animals, and if phenotypic differences (e.g. coat coloring) prevent allocation concealment this should be stated.

30 6. Reporting of animals excluded from analysis All randomized animals (both overall and by treatment group) should be accounted for in the data presented. Some animals may, for very good reasons, be excluded from analysis, but the circumstances under which this exclusion will occur should be determined in advance, and any exclusion should occur without knowledge of the experimental group to which the animal belongs. The criteria for exclusion and the number of animals excluded should be reported.

31 7. Blinded assessment of outcome The assessment of outcome is blinded if the investigator responsible for measuring infarct volume, for scoring neurobehavioral outcome or for determining any other outcome measures has no knowledge of the experimental group to which an animal belongs. The method of blinding the assessment of outcome should be described. Where phenotypic differences prevent the blinded assessment of for instance neurobehavioral outcome, this should be stated.

32 8. Reporting potential conflicts of interest and study funding Any relationship that could be perceived to introduce a potential conflict of interest, or the absence of such a relationship, should be disclosed in an acknowledgments section, along with information on study funding and for instance supply of drugs or of equipment.

33 What is a Valid Translational Strategy? One which considers all available supporting animal data; one which considers the likelihood of publication bias; one which tests a drug under circumstances similar to those in which efficacy has been demonstrated in animal models.

34 External validity: publication bias for FK506 (Macleod et al, JCBFM 2005)
[funnel plot: effect (better/worse) against precision]
All outcomes: 29 publications, 109 experiments, 1596 animals; improved outcome by 31% (27-35%)

35 External validity: hypertension in studies of NXY-059 in experimental stroke (Macleod et al, Stroke, in press)
Infarct volume: 9 publications, 29 experiments, 408 animals; 44% (35-53%) improvement
Hypertension: 7% of animal studies, but 77% of patients in the (neutral) SAINT II study

36 External validity: hypertension in studies of tPA in experimental stroke (Perel et al, BMJ 2007)
Infarct volume: 113 publications, 212 experiments, 3301 animals; improved outcome by 24% (20-28%)
Efficacy by comorbidity: hypertensive animals -2%, normal BP 25%
Hypertension: 9% of animal studies; specifically an exclusion criterion in the (positive) NINDS study

37 External validity: time to treatment for tPA and tirilazad
Both appear to work in animals; tPA works in humans but tirilazad doesn't.
Time to treatment, tPA: animals, median 90 minutes; clinical trial, median 90 minutes
Time to treatment, tirilazad: animals, median 10 minutes; clinical trial, >3 hrs for >75% of patients
Sena et al, Stroke 2007; Perel et al, BMJ 2007

38 Choose your patients – tPA: effect of time to treatment on efficacy. Perel et al, BMJ 2007; Lancet 2004

39 How much efficacy is left? [figure: reported efficacy adjusted stepwise for publication bias, randomisation and co-morbidity bias; values shown: 26%, 32%, 20%, 5%]

40 Summarising data from animal experiments
Animal studies → systematic review and meta-analysis → clinical trial
Questions addressed by the review:
– How powerful is the treatment?
– What is the quality of evidence?
– What is the range of evidence?
– Is there evidence of a publication bias?
– What are the conditions of maximum efficacy?

41 Systematic review and meta-analysis of animal studies
– Systematic identification of relevant sources of information
– Identification of individual comparisons
– Extraction of data from individual comparisons
– Calculation of effect size for each comparison
– Calculation of weighted averages for all studies or for selected studies
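The last two steps above can be sketched as an inverse-variance weighted average. This is a simplified fixed-effect illustration with made-up numbers, not the exact model used in the analyses on these slides (which pool normalised effect sizes across many comparisons):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI.

    effects: per-comparison effect sizes (e.g. % improvement in outcome)
    variances: their estimated variances
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical comparisons: three precise studies and one noisy outlier.
effects = [40.0, 45.0, 43.0, 10.0]
variances = [4.0, 5.0, 3.0, 100.0]
est, (lo, hi) = pooled_effect(effects, variances)
print(f"pooled effect {est:.1f}% (95% CI {lo:.1f} to {hi:.1f})")
```

Because weighting is by precision, the imprecise outlier barely moves the pooled estimate; this is why large, well-powered comparisons dominate a meta-analysis.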

42 Case study: hypothermia
Systematic search: 4203 (PubMed), 1579 (EMBASE), 3332 (BIOSIS), 1 hand search, 1 from authors
193 full publications: 99 did not meet inclusion and exclusion criteria; 4 duplicate and 2 triplicate publications
66 abstracts: 43 published in full; 4 duplicate publications; 4 with insufficient data
101 (86 + 15) publications included

43-48 [figure slides, no transcript text]

49 How powerful is the treatment?
Infarct size: 222 comparisons in 3256 animals; improved outcome by 43.5% (95% CI 40.1-47.0%)
Neurobehavioural outcome: 55 comparisons in 870 animals; improved outcome by 45.7% (95% CI 36.5-54.5%)

50 What is the quality of evidence? Median quality score 5 (IQR 4-6). [figure: efficacy by blinded outcome assessment (yes/no) and randomisation (yes/no)]

51 What is the Range of Evidence?

52 Is there evidence of a publication bias?
Funnel plotting: perhaps. Egger regression: yes. METATRIM (trim and fill): yes.
Infarct volume: original estimate 43.5% (95% CI 40.1-47.0%); adjusted estimate 35.4% (95% CI 31.7-39.1%)
~20 missing studies (of 101); 8.1% overstatement of efficacy
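The Egger regression mentioned above regresses each study's standard normal deviate (effect / SE) on its precision (1 / SE); an intercept far from zero indicates funnel-plot asymmetry. A minimal least-squares sketch, using a hypothetical dataset in which small (high-SE) studies report larger effects:

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: snd = a + b * precision."""
    snd = [e / s for e, s in zip(effects, ses)]   # standard normal deviates
    prec = [1.0 / s for s in ses]                 # precisions
    n = len(snd)
    mx = sum(prec) / n
    my = sum(snd) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, snd))
    slope = sxy / sxx
    return my - slope * mx  # intercept; far from 0 suggests asymmetry

# Hypothetical asymmetric data: the least precise studies show the
# biggest effects, the classic small-study-effect pattern.
effects = [50.0, 48.0, 45.0, 42.0, 40.0]
ses = [10.0, 8.0, 5.0, 3.0, 2.0]
print(f"Egger intercept: {egger_intercept(effects, ses):.2f}")
```

A full implementation would also report a significance test on the intercept (typically weighted regression with a t-test); this sketch only shows the direction of the asymmetry.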

53 What are the conditions of maximum efficacy?
– Efficacy stable to 6 hours
– No substantial effect of duration of treatment
– Efficacy maintained when outcome determined at later timepoints
– No substantial diminution of efficacy in the face of hypertension
– Dose-response relationship with temperature

54 Conclusions
– Animal experiments modelling stroke and other diseases are susceptible to bias
– Many of these sources of bias can be eliminated through good experimental design
– Systematic review and meta-analysis can illustrate where some of these problems arise
– Systematic review and meta-analysis can also provide a system for the evaluation of interventions being considered for clinical trial

55 Resources and acknowledgements
www.camarades.info/index_files/papers.htm
www.camarades.info/index_files/talks.htm
www.camarades.info/index_files/resources.htm
Chief Scientist Office, Scotland
Emily Sena, Evie Ferguson, Jen Lees, Hanna Vesterinen
David Howells, Bart van der Worp, Uli Dirnagl, Philip Bath

