Critical Appraisal Skills quantitative reviews

Presentation transcript:

Critical Appraisal Skills quantitative reviews
Pippa Orr, Knowledge Support Librarian
Introduction: what we are going to do in the workshop. We are trained librarians, not statisticians, so this is an introduction to the quantitative appraisal of reviews.
Further guidance is available from the NCAH R&D Coordinator, Leon Jonker (email: leon.jonker@ncumbria-acute.nhs.uk), on hypothesis testing; he will refer users on to a medical statistician at Lancaster University if needed.
Resources: Health R&D NW Research Workshop programme; Library Catalogue; CASP website: http://www.phru.nhs.uk/casp/critical_appraisal_tools.htm; glossary in packs.
With acknowledgements to CASP for their slides.
North Cumbria Informatics Service

Critical Appraisal Skills Programme (CASP)
Critical appraisal is the process of weighing up evidence to see how useful it is in decision making, to enable effective delivery of health care.
The workshop covers:
1. Sources and hierarchy/ranking of evidence
2. Reviews as a key source of evidence
3. How critical appraisal can help people base decisions on sound evidence
http://www.phru.nhs.uk/casp/critical_appraisal_tools.htm
North Cumbria Informatics Service

Effectiveness of Health Care: doing the right thing, to the right patient, in the right way, at the right time, at the right cost, in the right place. To enable us to deliver effective health care we need to be informed; we need to look at the evidence.
North Cumbria Informatics Service

Kinds of evidence
Descriptive: cross-sectional, longitudinal
Analytic: case-control study, cohort study
Experimental: randomised controlled trial
Cross-sectional: observation of a defined population at a single point in time. Exposure and outcome are determined simultaneously. Can look at groups of individuals but not populations.
Longitudinal: the study continues forward, lengthwise, e.g. a study assessing the educational outcomes of a lecture, first at two weeks and then at two months.
Case-control study: the level of exposure to a health hazard is measured in two groups and compared; it looks back in time to find the cause, e.g. an outbreak of food poisoning at a wedding. The population is defined by those with the disease (cases) and those without (controls).
Cohort study: a large trial group with lengthy follow-up, e.g. several years. The population of study is defined by exposure to a health hazard; participants are followed up over time to observe the incidence of disease in the exposed and non-exposed.
RCT: a randomised controlled trial is an experimental study where participants are randomised to receive either the new intervention being tested or a control treatment (usually either the standard treatment or a placebo). It is the best source of primary clinical evidence, second on the evidence hierarchy.
North Cumbria Informatics Service

Hierarchy of evidence
A general hierarchy, i.e. for both qualitative and quantitative research:
1. Systematic review with meta-analysis: the best source of clinical evidence. Meta-analysis is a statistical technique that summarises the results of several studies into a single estimate.
2. Systematic review
3. Double-blind RCT: the best source of primary clinical evidence (neither the participants nor the researchers know which intervention the participants are receiving); then single-blind RCTs
4. RCT without blinding
5. Cohort study: two groups of patients followed up over a period of time (one group received the intervention, one did not)
6. Case-control study: two groups, cases and controls; looks back to find the cause
7. Case series/case report (a case series covers several cases/patients; a case report is a single case study)
8. General review/overview/expert opinion
North Cumbria Informatics Service

Why does good evidence from research fail to get into practice?
- 75% cannot understand the statistics
- 70% cannot critically appraise a research paper
We need to be aware of this. It is important to read all the text, not just the introduction and the conclusion.
Source: Using Research for Practice: A UK Experience of the barriers scale. Dunn V, Crichton C, Williams K, Roe B, Seers K.
North Cumbria Informatics Service

Critical appraisal helps the reader of research to:
- decide how trustworthy a piece of research is (validity)
- determine what it is telling us (results)
- weigh up how useful the research will be (relevance)
North Cumbria Informatics Service

Primary Research Evidence: Randomised Controlled Trials (RCTs)
Robust randomisation procedures:
- to ensure that the variables are balanced across both groups
- to minimise bias
- to ensure that the results are generalisable
RCTs are considered primary research evidence; reviews, which draw primary studies together, sit above them on the evidence hierarchy. The main features of RCTs follow.
North Cumbria Informatics Service

Randomised controlled trial (diagram): the study population is randomised into group 1, which receives the new treatment, and group 2, which receives the control treatment; outcomes are then measured in both groups.
North Cumbria Informatics Service
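As a purely illustrative sketch of the randomisation step described above (the participant IDs and the simple 1:1 allocation scheme are assumptions, not taken from the slides), random allocation could look like this in Python:

    # Minimal sketch of 1:1 random allocation to two trial arms.
    # Participant IDs are hypothetical; a real trial would use a concealed,
    # pre-generated allocation sequence rather than ad-hoc shuffling.
    import random

    participants = [f"P{i:03d}" for i in range(1, 21)]  # 20 hypothetical participants
    random.shuffle(participants)

    half = len(participants) // 2
    group_1 = participants[:half]   # receives the new treatment
    group_2 = participants[half:]   # receives the control treatment

    print("Group 1 (new treatment):", group_1)
    print("Group 2 (control):", group_2)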

Blinding
Blinding = participants don't know which intervention they are getting.
Double blinding = those giving the intervention also don't know which intervention the participant is receiving.
North Cumbria Informatics Service

Loss to follow-up
It is important to ensure that all those who are randomised into the trial are followed up to the trial's conclusion.
North Cumbria Informatics Service

Intention to treat analysis Analysing people, at the end of the trial, in the groups to which they were randomised, even if they did not receive the intended intervention. North Cumbria Informatics Service
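A minimal sketch of the idea, using invented data (the participant records and outcomes below are hypothetical, purely for illustration): each participant is analysed under the arm they were randomised to, even where the treatment actually received differed.

    # Hypothetical records: (participant, arm randomised to, treatment received, outcome)
    records = [
        ("P001", "new",     "new",     1),
        ("P002", "new",     "control", 0),  # crossed over, still analysed as "new"
        ("P003", "control", "control", 0),
        ("P004", "control", "new",     1),  # crossed over, still analysed as "control"
    ]

    # Intention to treat: group by the arm randomised to, not the treatment received.
    itt_groups = {"new": [], "control": []}
    for pid, randomised_to, received, outcome in records:
        itt_groups[randomised_to].append(outcome)

    for arm, outcomes in itt_groups.items():
        print(arm, "events:", sum(outcomes), "of", len(outcomes))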

Types of review: reviews, systematic reviews, meta-analysis
- Reviews look at the results from two or more studies.
- A systematic review covers all the literature on a particular topic, systematically identified, appraised and summarised to give a summary answer.
- Meta-analysis is a statistical technique which summarises the results of several studies into a single estimate, giving more weight to results from larger studies. It is only used in systematic reviews, but not all systematic reviews use it.
The Cochrane Collaboration prepares, maintains and disseminates systematic reviews of the effects of health care.
Reviews are useful because of the volume of literature, the flow of "new" information, and the availability of good reviews of effectiveness.
North Cumbria Informatics Service

Publication bias
Papers with "interesting" results are (or may be) more likely to be:
- submitted for publication
- accepted for publication
- published in a major journal and in English
- quoted by authors
- quoted in newspapers
When looking at evidence, bear in mind:
- Researchers fail to publish: studies with negative results may not be submitted for publication, and the pharmaceutical industry may not publish research with negative results.
- Journal editors tend to publish studies with positive rather than negative results.
- Language bias: positive findings are more likely to appear in English-language journals; negative results are more likely to be published in foreign-language journals.
- Geographic bias: the major databases we use tend to be US/European focused.
- Comprehensiveness: only a small proportion of the journals published worldwide is covered by Medline, with a bias towards English-language journals.
North Cumbria Informatics Service

Odds Ratio, Relative Risk
Measures of risk: the likelihood of something happening versus the likelihood of something not happening. We will now look at the statistical elements of quantitative research, starting with the odds ratio.
North Cumbria Informatics Service

Odds ratio (OR)
The odds of an event happening in the experimental group expressed as a proportion of the odds of an event happening in the control group. The closer the OR is to 1, the smaller the difference in effect; OR = 1 means no effect.
Odds (see Glossary) is a term little used outside gambling and statistics. It is defined as the ratio of the probability of an event happening to the probability of it not happening. Think of it as meaning 'risk'.
The odds ratio is one measure of a treatment's clinical effectiveness. If it is equal to 1, the effects of the treatment are no different from those of the control treatment. If the OR is greater than 1, the effects of the treatment are greater than those of the control; if it is less than 1, they are less. Note that the effects being measured may be adverse (e.g. death, disability) or desirable (e.g. stopping smoking).
OR = 1 is the line of no effect/line of unity.
North Cumbria Informatics Service
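A worked sketch of the calculation, using an invented 2x2 table (the counts are made up purely for illustration):

    # Hypothetical 2x2 table:
    #                 event    no event
    # experimental      20        80
    # control           40        60
    events_exp, no_events_exp = 20, 80
    events_ctl, no_events_ctl = 40, 60

    odds_exp = events_exp / no_events_exp   # 0.25
    odds_ctl = events_ctl / no_events_ctl   # 0.67
    odds_ratio = odds_exp / odds_ctl        # 0.375

    print(f"Odds ratio = {odds_ratio:.3f}")  # < 1: fewer events in the experimental group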

Confidence intervals/limits
A confidence interval presents the range of likely effects. The 95% confidence interval, for example, includes 95% of results from studies of the same size and design in the same population. This is close, but not identical, to saying that the true size of effect (never exactly known) has a 95% chance of falling within the confidence interval.
The narrower/shorter the confidence interval, the more precise the estimate and the more confident we can be about it. How 'confident' can we be that the results are a true reflection of the actual effect or phenomenon? The shorter the CI, the more certain we can be about the results.
If the CI crosses the line of no effect/line of unity (no treatment effect), the intervention might not be doing any good and could be doing harm.
In summary, a confidence interval/limit is the range within which the true size of effect (never exactly known) lies, with a given degree of assurance (usually 95%). People often speak of a "95% confidence interval": the interval which includes the true value in 95% of cases.
North Cumbria Informatics Service
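As a hedged sketch of where a 95% confidence interval for an odds ratio comes from, using the standard log odds ratio approximation and the same invented 2x2 table as in the odds ratio example above:

    import math

    # Same hypothetical 2x2 table as before.
    a, b = 20, 80   # experimental: events, no events
    c, d = 40, 60   # control: events, no events

    odds_ratio = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
    # If the interval included 1 (the line of no effect), the result would not be
    # statistically significant at the conventional 5% level.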

Forest plots
A common approach to presenting the results of a meta-analysis, also known as a 'blobbogram' or 'odds ratio diagram'. It is a graphical representation of the individual trial results included in a review, together with the combined meta-analysis result.
Forest plots are intended to present complicated results and concepts in a clear visual fashion: the results of long and detailed statistical theory and analysis are made accessible to a wide audience without any real need to understand or explain the background.
North Cumbria Informatics Service

Forest plot example (figure: the plot is labelled with the line of no effect, the confidence intervals and the meta-analysis result)
This is a forest plot of studies looking at vaccines for preventing cholera.
What comparison is being made? KWC (killed whole cell) vaccine vs placebo.
What is the outcome under investigation? Cholera cases, up to one year of follow-up.
Is the outcome positive (e.g. recovery) or negative (e.g. death)? Negative, i.e. having cholera is not desirable.
Forest plots list all the studies included and the numbers of participants in the vaccination group and the control group.
Line of no effect/line of unity = 1. Remember, the closer the odds ratio (the odds of the event in the intervention group divided by the odds in the control group) is to 1, the smaller the difference in effect; OR = 1 means no effect. If the OR is greater (or less) than 1, the effects of the treatment are more (or less) than those of the control treatment.
Blobs: the size of the blob reflects the number of people included in the study (small blob, long CI = small study; large blob, short CI = large study). The position of the blob is calculated by dividing the intervention fraction (12/6956) by the control fraction (43/7103). You must look at the text of the review as well as the blobbogram.
Confidence interval/limit: presents the range of likely effects. People often speak of a "95% confidence interval": the interval which includes the true value in 95% of cases. The shorter the CI, the more certain we can be about the results. If it crosses the line of no effect/line of unity (no treatment effect), the intervention might not be doing any good and could be doing harm.
Meta-analysis result: if the diamond crosses the line of no effect, the result is inconclusive. It is important to read the textual conclusions in a review as well as studying the blobbogram.
On which side of the line of no effect does the meta-analysis result lie? The left, which indicates there is less of the outcome (cholera cases) in the treatment (vaccinated) group.
Does the confidence interval touch or cross the line of no effect? No, so the results are significant.
What conclusions can be drawn? Compared to placebo, KWC vaccines appear to be beneficial in reducing the number of cholera cases (up to one year of follow-up).
Weighted mean difference (WMD) is another statistical technique used to deal with different types of outcome.
North Cumbria Informatics Service
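A minimal sketch of how a pooled result (the diamond) can be obtained: a fixed-effect, inverse-variance meta-analysis on the log odds ratio scale. The first study uses the 12/6956 vs 43/7103 fractions quoted above; the other two rows are invented purely to show how larger studies get more weight.

    import math

    # (events, total) in the vaccine arm and the placebo arm for each trial.
    # Row 1 uses the fractions quoted on the slide; rows 2 and 3 are hypothetical.
    studies = [
        (12, 6956, 43, 7103),
        (30, 5000, 55, 5100),
        (8, 1200, 15, 1150),
    ]

    weighted_sum, total_weight = 0.0, 0.0
    for e1, n1, e0, n0 in studies:
        a, b = e1, n1 - e1            # vaccine: events, non-events
        c, d = e0, n0 - e0            # placebo: events, non-events
        log_or = math.log((a / b) / (c / d))
        var = 1/a + 1/b + 1/c + 1/d   # variance of ln(OR)
        weight = 1 / var              # bigger study -> smaller variance -> more weight
        weighted_sum += weight * log_or
        total_weight += weight

    pooled_or = math.exp(weighted_sum / total_weight)
    print(f"Pooled odds ratio (fixed effect): {pooled_or:.2f}")  # < 1 favours the vaccine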

p-value
The probability (ranging from 0 to 1) that the results observed in a study (or results more extreme) could have occurred by chance if in reality the null hypothesis were true, i.e. if you did nothing. If this probability is less than 1/20 (that is, if the p-value is less than 0.05), the result is conventionally regarded as "statistically significant". In other words: how sure are you that the results are real and have not happened by chance?
"Conventional two-sided p-values (2p) are used throughout": the authors trying to blind us with science, unless anyone has any ideas?
North Cumbria Informatics Service

The p-value in a nutshell
Could the result have occurred by chance? On the scale from 0 to 1, a p-value near 0 means the result is unlikely to be due to chance; a p-value near 1 means the result is likely to be due to chance.
p < 0.05: a statistically significant result. p > 0.05: not a statistically significant result. Some people only accept p < 0.01.
p = 0.05, or 1 in 20: the result is fairly unlikely to be due to chance.
p = 0.5, or 1 in 2: the result is quite likely to be due to chance.
North Cumbria Informatics Service
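A hedged sketch of where such a p-value comes from in practice: a simple two-proportion z-test on invented trial counts (the numbers are made up for illustration; the p-value is the probability of a difference at least this extreme if there were really no difference between the arms).

    import math

    # Hypothetical trial: events / totals in each arm.
    e1, n1 = 20, 100   # treatment arm
    e0, n0 = 35, 100   # control arm

    p1, p0 = e1 / n1, e0 / n0
    p_pool = (e1 + e0) / (n1 + n0)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
    z = (p1 - p0) / se

    # Conventional two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
    # p < 0.05 would conventionally be called statistically significant.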

Number needed to treat (NNT)
The number of people you would need to treat with a specific intervention to see one additional occurrence of a specific beneficial outcome.
North Cumbria Informatics Service
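A worked sketch, assuming the usual calculation of NNT as 1 divided by the absolute risk reduction (ARR); the event rates below are invented for illustration.

    # Hypothetical event rates.
    control_event_rate = 0.20     # 20% of the control group have the adverse outcome
    treatment_event_rate = 0.15   # 15% of the treated group have the adverse outcome

    arr = control_event_rate - treatment_event_rate   # absolute risk reduction = 0.05
    nnt = 1 / arr                                      # = 20

    print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")
    # You would need to treat about 20 people for one additional person to benefit.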

Critical appraisal: questions to apply to reviews
Is it trustworthy? (validity)
What does it say? (results)
Will it help? (relevance)
North Cumbria Informatics Service