Getting to grips with evidence that matters!


Getting to grips with evidence that matters!
NIHR Nottingham Hearing Biomedical Research Unit, UK
Melanie Ferguson and Helen Henshaw
British Society of Hearing Aid Audiologists, 13.9.14

Learning outcomes
- To explain the hierarchy of evidence
- To describe how to assess the quality of research articles
Wong and Hickson (2012), Evidence-based Practice in Audiology, Plural Publishing

Pie charts

Hierarchy of evidence
Strength of evidence: Cox (2005), J Am Acad Audiol, 16(7); AAA (2010) APD guidelines

Level of evidence – validity criteria
Basis of the systematic review on auditory training (Henshaw and Ferguson, 2013)

Scientific study-specific criteria:
- Randomisation?
- Power calculation for sample size (n)?
- Blinding of participants and researchers?
- Outcome measure selection and reporting

Intervention-specific criteria:
- Ecologically valid training environment (at home)?
- Training performance feedback?
- Follow-up assessment?
- Compliance

Scoring: 0 = flawed or no information from which to make a judgement; 1 = weak information or lack of detail; 2 = appropriate use and reporting

Table 1: Study quality scores.
Scoring: please circle 0 (flawed or no information from which to make a judgement), 1 (weak information or lack of detail) or 2 (appropriate use and reporting) for each scientific and intervention-specific validity criterion.
*Level of evidence (PTO): study quality score of 0-3 = very low, 4-6 = low, 7-9 = moderate, 10-12 = high (adapted from GRADE Working Group, 2004).

Scientific study validity criteria: Randomisation? Power calculation to determine sample size? Blinding of participants and researchers?
Intervention-specific validity criteria: Ecologically valid (at-home) training environment? Training performance feedback provided? Follow-up to examine retention of training effects?

Article  | Study quality score | Level of evidence* | Comments
Paper #1 | 2                   | very low           |
Paper #2 | 10                  | high               |

Interactive session
Two papers: abstract, introduction and methods already read.
- Take the table headings (e.g. randomisation)
- Search the paper
- Make your judgement of quality (0, 1, 2)
- Do the same with the second paper
- Discussion at the end

Scoring: 0 = flawed or no information from which to make a judgement ("rubbish"); 1 = weak information or lack of detail ("somewhere in between"); 2 = appropriate use and reporting ("good")

Example - randomisation
Paper #1, p. 920: "All the participants were submitted to the evaluation only after they were assigned to the Experimental group and Control group, and the individuals themselves pick a number to be randomized to which group they would be sent to".

Level of evidence (from GRADE Working Group, 2004)

Study quality score | Level of evidence | Confidence in estimation of effect
0-3   | Very low | The estimation of effect is uncertain
4-6   | Low      | Further evidence is very likely to impact on our confidence in the estimation of effect and is likely to change the estimate
7-9   | Moderate | Further evidence is likely to impact on our confidence in the estimation of effect and may change the estimate
10-12 | High     | Further evidence is very unlikely to change our confidence in the estimation of effect

End of session: what do you think the quality of the papers is?
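As a rough illustration (not part of the original worksheet), the scoring scheme can be sketched in a few lines of Python: each of the six validity criteria is scored 0-2, the scores are summed, and the 0-12 total is mapped to a GRADE-style level of evidence. The function names here are hypothetical.

```python
# Illustrative sketch of the Table 1 study-quality scoring scheme.
# Each of the six validity criteria is scored 0 (flawed/no information),
# 1 (weak information or lack of detail), or 2 (appropriate use and reporting).

def quality_score(criteria_scores):
    """Sum the six per-criterion scores (each 0, 1 or 2) into a 0-12 total."""
    if len(criteria_scores) != 6 or any(s not in (0, 1, 2) for s in criteria_scores):
        raise ValueError("expected six scores, each 0, 1 or 2")
    return sum(criteria_scores)

def evidence_level(score):
    """Map a 0-12 study quality score to a GRADE-style level of evidence."""
    if score <= 3:
        return "very low"
    if score <= 6:
        return "low"
    if score <= 9:
        return "moderate"
    return "high"

# Paper #2 from the worked example: randomisation 2, power calculation 2,
# blinding 0 (not mentioned), at-home training 2, feedback 2, follow-up 2.
score = quality_score([2, 2, 0, 2, 2, 2])
print(score, evidence_level(score))  # -> 10 high
```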

Purpose of the exercise
A brief introduction to appraisal of the literature: to highlight the factors that underlie the "quality" of a paper, and to gain a broad understanding of what quality means.
Stop talking - get stuck in!

Table 1 (completed): Study quality scores.
Scoring: 0 = flawed or no information from which to make a judgement; 1 = weak information or lack of detail; 2 = appropriate use and reporting, for each scientific and intervention-specific validity criterion.
*Level of evidence: study quality score of 0-3 = very low, 4-6 = low, 7-9 = moderate, 10-12 = high (adapted from GRADE Working Group, 2004).

Paper #1 - study quality score 2, level of evidence very low:
- Randomisation not clear
- No power calculation
- Double-blinding? Participants are not able to be blinded (intervention vs. no intervention)
- Training completed in the lab
- No feedback mentioned
- No follow-up assessment mentioned
- General point: not repeatable, because the outcome measures are not clear or referenced

Paper #2 - study quality score 10, level of evidence high:
- Minimisation, the best form of randomisation: adaptive stratified sampling used in clinical trials, which aims to minimise the imbalance between participants in the two groups on pre-specified factors (Pocock & Simon, 1975)
- Power calculation: 20 individuals per group to detect a 2.5 dB SNR difference in digit triplets between the groups (Cohen's d = .89)
- No blinding mentioned
- Training took place at home
- Feedback (correct/incorrect response) was provided during training and at the end of each session
- Follow-up 4 weeks post-training
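The Paper #2 power calculation can be sanity-checked with the standard normal-approximation formula for a two-sample comparison, n per group ≈ 2(z_{1-α/2} + z_{1-β})² / d². This sketch is not from the slides, and it assumes the conventional defaults of α = 0.05 (two-sided) and 80% power, which the slide does not state.

```python
# Sanity check of a two-group sample-size calculation using the
# normal-approximation formula: n per group ~= 2 * (z_a + z_b)^2 / d^2.
# alpha = 0.05 (two-sided) and power = 0.80 are assumed conventional
# defaults; they are not stated on the slide.
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect effect size d (Cohen's d)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

print(n_per_group(0.89))  # -> 20, matching "20 individuals per group"
```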