Introduction to Critical Appraisal


1 Introduction to Critical Appraisal
CHRIS REDMAN ALEX SANCHEZ-VIVAR

2 Presentation Overview
Introduction to critical appraisal
Definition, differences, strengths and weaknesses of systematic reviews and meta-analyses
Sources of systematic reviews/meta-analyses
Levels of evidence
Interpretation of basic statistics in meta-analyses: confidence intervals, forest plots
Critical appraisal of systematic reviews/meta-analyses

3 What is critical appraisal?
Critical appraisal is "the process of systematically examining research evidence to assess its validity, results, and relevance before using it to inform a decision" (Hill and Spittlehouse, 2001, p.1; Critical Appraisal Skills Programme, Institute of Health Sciences, Oxford).
It considers both quantitative and qualitative aspects of a study.
It is a balanced assessment of the benefits/strengths and flaws/weaknesses of a study, looking at both the research process and the results.
It is part of evidence based medicine (EBM), allowing us to make sense of research evidence so that practice is aligned with the 'best' evidence, alongside clinical experience and values based medicine.

4 Critical appraisal is not
Negative dismissal of any piece of research
Assessment of results alone
Based entirely on statistical analysis
Undertaken by experts only

5 Why critically appraise?
To find out the validity of the study: are the methods robust?
To find out the reliability of the study: what are the results and are they credible?
To find out the applicability of the study: is it important enough to change my practice?

6 How do I critically appraise research?
Be (critically) open to everything
Believe (in principle) papers from high quality journals
Read & decide yourself
Let other people read and decide for (with) you
Read for yourself and make a structured appraisal

7 Critical appraisal: advantages and disadvantages
Advantages: a systematic way of assessing the validity, results and usefulness of research; contributes to improving practice (quality); encourages objective assessment of information; the skills are not difficult to develop.
Disadvantages: time consuming; there are not always easy answers, or the answers you hoped to find; dispiriting if 'good' evidence is lacking, i.e. little or poor research has been done.

8 What do I need to know?
BUT… you can all do it with the right tools and guidance. You need:
Awareness of study designs
Levels of evidence
Statistics!!
CA checklists
CA resources

9 Awareness of study design

10 Observational study design: measures of disease, measures of risk, and temporality

11 What is a systematic review?
A review that has been prepared using some kind of systematic approach to minimising biases and random errors, and in which the components of the approach are documented in a materials and methods section (Chalmers et al, 1995).

12 What is a systematic review?
[Diagram contrasting reviews in general with systematic reviews]

13 Rationale for systematic reviews
Information overload
Publication bias
Poor quality of reviews, e.g. vitamin C and the prevention of the common cold (Pauling, 1986)
Missing links in the evidence, e.g. inhalation of hexamethonium (comment by Clark et al, 2001)

14 Sources of systematic reviews
The Cochrane Library
DARE (in Cochrane Library 'Other reviews')
Health Technology Assessments (in Cochrane Library 'Technology Assessments')
Medline, CINAHL, Embase: search on 'systematic review' in title or abstract
PubMed: Systematic Review in Limits > Topic
TRIP
Evidence Based Reviews - Journals and Databases
NHS Evidence

15 Format of a systematic review
1. Formulation of a review question
2. Define inclusion/exclusion criteria
3. Locate studies
4. Select studies (inclusion/exclusion)
5. Assess study quality
6. Data extraction
7. Analyse and present results
8. Interpretation of results
(Egger et al, 2001)

16 Formulation of review question
Is the question focused in terms of:
Population studied
Intervention/exposure given
Outcomes considered
Example: do anticoagulants prevent strokes in patients with atrial fibrillation?

17 Define inclusion/exclusion criteria
Were the right types of studies included to answer the question? This depends on the question: reviews can include observational studies (cohort, case-control), diagnostic/screening test studies, prognostic studies and non-randomised trials. Studies should be defined according to their design, participant characteristics, interventions and outcomes.

18 Locate studies
A comprehensive, explicit search covering:
Databases
Conference proceedings
Hand searching
Grey literature (reports, research registers)
Foreign-language studies
Follow-up of references
Contacting experts/authors
Unpublished studies (to address publication bias)

19 Select and Assess Studies
Eligibility criteria for study selection can be applied
More than one reviewer can help reduce bias
Checklists/scoring systems

20 What do the findings mean?
Effect measures: odds ratios, relative risk, mean difference
P-values
Confidence intervals

21 Using statistics
Assess the weight of the evidence that a treatment works (or doesn't)
Give an estimate (and likely range) of the treatment effect
Test to see how likely it is that this effect would have been seen by chance

22 Odds ratio (OR)
Expresses the odds of having an event compared with not having an event, in two different groups:
OR = odds in the treated group / odds in the control group
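As a rough sketch (with made-up numbers, not data from any study cited in this presentation), the odds ratio can be computed directly from a 2×2 table:

```python
# Hypothetical 2x2 table for a trial (illustrative numbers only):
#                 event    no event
#   treated       a = 15    b = 85
#   control       c = 30    d = 70
a, b = 15, 85
c, d = 30, 70

odds_treated = a / b             # odds of the event in the treated group
odds_control = c / d             # odds of the event in the control group
odds_ratio = odds_treated / odds_control

print(f"OR = {odds_ratio:.2f}")  # about 0.41: the treatment reduces the odds of the event
```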

23 OR = 1: the treatment has an identical effect to the control
OR < 1: the event is less likely in the treated group than in the control group (i.e. the treatment reduces the chance of having the event)
OR > 1: the event is more likely in the treated group than in the control group (the treatment increases the chance of having the event)
Clinical trials typically look for treatments which reduce event rates, and which therefore have odds ratios of less than one

24 Importance of defining the outcome
Value of OR/RR    Adverse outcome (e.g. death)           Beneficial outcome (e.g. stopped smoking)
<1                New intervention better                New intervention worse
1                 New intervention no better/no worse    New intervention no better/no worse
>1                New intervention worse                 New intervention better

25 P-values – significance test
A p-value is a measure of statistical significance: it tells us the probability of obtaining a result at least as extreme as the one observed if there were truly no difference between the interventions (i.e. by chance alone). P-values range from 0 to 1. The closer the p-value is to zero, the less compatible the result is with the two interventions having the same effect.

26 Statistical significance
In general, a p-value of 0.05 or 0.01 is used as the cut-off, although this value is arbitrary. A p-value of <0.05 indicates the result is unlikely to be due to chance; a p-value of >0.05 indicates the result might have occurred by chance. Results above the cut-off are conventionally attributed to chance, while results below the cut-off are taken to reflect a real effect (i.e. the result is less likely to be due to chance).
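For illustration only (hypothetical counts, and assuming SciPy is available), a p-value for a difference in event rates between two groups can be obtained with Fisher's exact test on the same kind of 2×2 table used above:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows are treated/control, columns are event/no event
table = [[15, 85],
         [30, 70]]

odds_ratio, p_value = fisher_exact(table)

print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
# With the conventional 0.05 cut-off: p < 0.05 would be called 'statistically
# significant', p >= 0.05 would not - but the cut-off itself is arbitrary.
```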

27 Be careful…
A p-value in the non-significant range tells you either that there is no difference between the groups or that there were too few subjects to demonstrate such a difference (ideally, confidence intervals should also be reported)
There is not much difference between p = 0.049 and p = 0.051
P-values do not indicate the magnitude of the observed difference between treatments, which is what is needed to judge clinical significance

28 Interpretation of Confidence Intervals
A confidence interval is the range within which we have a specified degree of certainty that the true population value lies. In other words, the confidence interval around a result obtained from a study sample (the point estimate) indicates the range of values within which there is a specific certainty (usually 95%) that the true population value for that result lies (MeReC Briefing, 2005).

29 What can a CI tell us?
It tells us whether the result is statistically significant or not
The width of the interval indicates precision: wider intervals suggest less precision
It shows whether the evidence is strong or weak
The conventional confidence level is 95%, so the 95% CI is the range within which we are 95% certain that the true population value lies

30 Confidence Intervals reported on Ratios (odds ratio, etc)
For ratio measures, the 'line of no effect' is at 1
If the CI for an RR or OR includes 1 (the line of no effect), then we are unable to demonstrate a statistically significant difference between the two groups
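As a minimal sketch (hypothetical counts again, using the standard log odds ratio approximation rather than anything specified in the presentation), a 95% CI for an odds ratio can be calculated and checked against the line of no effect:

```python
import math

# Hypothetical 2x2 table: treated (a events, b non-events), control (c, d)
a, b, c, d = 15, 85, 30, 70

or_value = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)

lower = math.exp(math.log(or_value) - 1.96 * se_log_or)
upper = math.exp(math.log(or_value) + 1.96 * se_log_or)

print(f"OR = {or_value:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
if lower < 1 < upper:
    print("CI includes 1: no statistically significant difference demonstrated")
else:
    print("CI excludes 1: the difference is statistically significant at the 5% level")
```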

31 What is a meta-analysis?
A statistical analysis of the results from independent studies, which generally aims to produce a single estimate of the treatment effect (Egger et al, 2001).
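To make the idea of 'a single estimate' concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling of log odds ratios from a few hypothetical studies (illustrative numbers only, not the probiotics data cited later, and only one of several pooling methods):

```python
import math

# Hypothetical per-study results: (name, odds ratio, lower 95% CI, upper 95% CI)
studies = [
    ("Study A", 0.55, 0.30, 0.99),
    ("Study B", 0.70, 0.40, 1.22),
    ("Study C", 0.45, 0.25, 0.81),
]

weighted_sum = 0.0
total_weight = 0.0
for name, or_value, lo, hi in studies:
    log_or = math.log(or_value)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # recover SE from the CI width
    weight = 1 / se ** 2                             # inverse-variance weight
    weighted_sum += weight * log_or
    total_weight += weight

pooled_log_or = weighted_sum / total_weight
pooled_se = math.sqrt(1 / total_weight)
pooled_or = math.exp(pooled_log_or)
lower = math.exp(pooled_log_or - 1.96 * pooled_se)
upper = math.exp(pooled_log_or + 1.96 * pooled_se)

print(f"Pooled OR = {pooled_or:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```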

32 Interpretation of forest plots
Example: effect of probiotics on the risk of antibiotic associated diarrhoea (D'Souza AL, et al. BMJ 2002;324:1361)
The label tells you what the comparison and outcome of interest are
Scale measuring treatment effect. Take care when reading labels!
Each study has an ID (author)
Treatment effect sizes for each study
Horizontal lines are confidence intervals
The diamond shape is the pooled effect; the horizontal width of the diamond is its confidence interval
The vertical line in the middle is the line of no effect: for ratios this is 1, for means this is 0

33 Interpretation of forest plots
Look at the title of the forest plot, the intervention, the outcome, the effect measure of the investigation and the scale
The names on the left are the authors of the primary studies included in the meta-analysis
The small squares represent the results of the individual trials
The size of each square represents the weight given to that study in the meta-analysis
The horizontal line associated with each square represents the confidence interval for that result
The vertical line represents the line of no effect, i.e. where there is no statistically significant difference between the treatment/intervention group and the control group
The pooled analysis is shown as a diamond; the horizontal width of the diamond is its confidence interval
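A minimal sketch of drawing a forest plot with matplotlib (assumed to be installed), reusing the hypothetical studies and pooled estimate from the pooling sketch above; real forest-plot software also scales the squares by study weight, which is simplified here:

```python
import matplotlib.pyplot as plt

# Hypothetical studies: (name, OR, lower 95% CI, upper 95% CI)
studies = [
    ("Study A", 0.55, 0.30, 0.99),
    ("Study B", 0.70, 0.40, 1.22),
    ("Study C", 0.45, 0.25, 0.81),
]
pooled = ("Pooled", 0.56, 0.40, 0.79)   # e.g. from the pooling sketch above

rows = studies + [pooled]
fig, ax = plt.subplots()
for i, (name, or_value, lo, hi) in enumerate(rows):
    y = len(rows) - i                                 # plot from top to bottom
    ax.plot([lo, hi], [y, y], color="black")          # horizontal line = confidence interval
    marker = "D" if name == "Pooled" else "s"         # diamond for the pooled effect
    ax.plot(or_value, y, marker, color="black")

ax.axvline(1.0, linestyle="--", color="grey")         # line of no effect (ratio = 1)
ax.set_xscale("log")                                  # ratio scales are usually logarithmic
ax.set_yticks(range(1, len(rows) + 1))
ax.set_yticklabels([r[0] for r in reversed(rows)])
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```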

34 Advantages of a systematic review/meta-analysis
Limits bias in identifying and excluding studies
Objective
Good quality evidence, with more reliable and accurate conclusions
Added power by synthesising individual study results
Control over the volume of literature

35 Drawbacks to systematic reviews/meta-analyses
Can be done badly
Two systematic reviews on the same topic can reach different conclusions
Inappropriate aggregation of studies
A meta-analysis is only as good as the papers included
They tend to look at 'broad questions' that may not be immediately applicable to individual patients

36 Conclusion
Critical appraisal of systematic reviews and other research is well within your capabilities
Use a recognised checklist (e.g. SIGN)
Update your literature searching skills regularly (contact your library skills trainer)

37 D'Souza, A. L et al. BMJ 2002;324:1361

38 (Other) Critical appraisal checklists
CASP (Critical Appraisal Skills Programme)
JAMA Users' Guides to the Medical Literature
Crombie I (1996) The Pocket Guide to Critical Appraisal. BMJ Books, London
Greenhalgh T (2001) How to Read a Paper. BMJ Books, London
BestBETs CA database

39 References
Egger M, Davey Smith G, Altman DG (eds). Systematic Reviews in Health Care: Meta-analysis in Context. BMJ Books, 2001 (ebook).
What is a systematic review?, What is a meta-analysis?, What are confidence intervals?
Akobeng AK. Understanding systematic reviews and meta-analysis. Archives of Disease in Childhood 2005;90:

40 References
Cochrane Open Learning Material: Systematic Reviews and Meta-analyses (useful forest plot interpretation PDF)
Funnel plots:
Egger M, et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:
Lau J, et al. The case of the misleading funnel plot. BMJ 2006;333:
Heterogeneity:
Fletcher J. What is heterogeneity and is it important? BMJ 2007;334:94-6


42 The label tells you what the comparison and outcome of interest are (effect of probiotics on the risk of antibiotic associated diarrhoea)

43 Scale measuring treatment effect. Take care when reading labels!

44 Each study has an ID (author)

45 Treatment effect sizes for each study (plus 95% CI)

46 Horizontal lines are confidence intervals. The diamond shape is the pooled effect; the horizontal width of the diamond is its confidence interval

47 The vertical line in the middle is the line of no effect. For ratios this is 1, for means this is 0

48 Rationale for meta-analysis
Conventional and cumulative meta-analysis of 33 trials of intravenous streptokinase for acute myocardial infarction. Mulrow, C D BMJ 1994;309:

