A systematic review of the quality and reporting standards of longitudinal biomarker studies in dementia, and recommendations
C.W. Ritchie, L. Flicker, A. Noel-Storr, R. McShane

Background
- Reporting standards drive better methodology.
- Claims about a diagnostic test should be based on a body of data describing validation of the new test against a gold standard.
- New prodromal criteria emphasise biomarker abnormality in patients with memory impairment to aid specificity to conversion.

Why ‘conversion’ as the gold standard? This is the critical clinical question: does my patient have Alzheimer’s disease causing their MCI?
Gold standards:
- Pathology is in situ years before symptoms/dementia develop.
- Conversion to Alzheimer’s dementia verifies Alzheimer’s pathology.
- The design is longitudinal, with delayed verification.
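To make the delayed-verification design concrete, here is a worked 2×2 sketch with purely hypothetical counts (not drawn from the review), cross-tabulating a baseline biomarker result against conversion status at final follow-up:

```latex
% Hypothetical delayed-verification 2x2 table (illustrative counts only):
% rows = baseline biomarker result, columns = conversion by final follow-up.
\begin{array}{l|cc}
 & \text{Converted} & \text{Did not convert} \\
\hline
\text{Biomarker positive} & 35 \;(TP) & 15 \;(FP) \\
\text{Biomarker negative} & 10 \;(FN) & 40 \;(TN) \\
\end{array}
% Accuracy against the 'conversion' gold standard:
\text{Sensitivity} = \frac{TP}{TP+FN} = \frac{35}{45} \approx 0.78, \qquad
\text{Specificity} = \frac{TN}{TN+FP} = \frac{40}{55} \approx 0.73
```

The point is that the reference standard (conversion) only becomes observable after long follow-up, which is why the usual advice to keep the interval between index test and reference standard short cannot apply to these studies.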

General aims: to develop a series of DTA systematic reviews of
- Biomarkers (imaging and plasma/CSF proteins)
- Neuropsychological tests
Specific aims of today, restricting to biomarkers: to systematically review
- The weight of evidence: total numbers converting
- The quality of the methodology and reporting

Methods
Stage 1 – Sensitive MEDLINE search from 2000 to June 2011: 19,104 published abstracts/references.
Stage 2 – Abstract review against the inclusion criteria:
- Biomarker of interest (Aβ, tau, PET, or structural MRI)
- Longitudinal design
A team of 9 medical students screened ~2,250 references each. All students screened a shared batch of 100 abstracts: kappa = 0.62 (moderate to good); overlapping pairs of 100 gave kappa = 0.62–0.75 (moderate to good).
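As a reminder of what these agreement figures measure: Cohen’s kappa corrects observed agreement for the agreement expected by chance. The numbers in the sketch below are illustrative assumptions, not data from the screening exercise:

```latex
% Cohen's kappa: p_o = observed agreement, p_e = chance-expected agreement.
\kappa = \frac{p_o - p_e}{1 - p_e}
% Illustrative only: if two raters agree on 85 of 100 abstracts (p_o = 0.85)
% and chance agreement is p_e = 0.60, then
\kappa = \frac{0.85 - 0.60}{1 - 0.60} = \frac{0.25}{0.40} \approx 0.63
% i.e. within the 'moderate to good' band reported above.
```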

Results: search
MEDLINE (Ovid SP): 19,104 records
- 1,032 cross-sectional
- 17,572 not relevant (background/animal/review)
- 500 longitudinal: 77 Aβ, 64 tau, 44 PET, 124 MRI
- 202 references to studies for inclusion
- 142 primary papers (accounting for multiple publications from the same cohort)
Inter-assessor agreement: kappa = 0.62–0.7.
Our search was sensitive, meaning it was designed to retrieve everything relevant at the expense of also retrieving a lot of ‘noise’ – irrelevant records. And it did: the search retrieved 19,104 results. A team of assessors screened these results (kappa 0.62–0.7), which brought the number of references to longitudinal biomarker studies down to 500. Of these, xx were studies which included participants with cognitive impairment, no dementia, at baseline.

Results: study size (1)
[Chart: distribution of the 142 studies across four size categories – 79 (56%), 35 (25%), 20 (14%), 8 (6%)]
Before we move on to the results of the reporting assessments, I want to give an overview of the size of these 142 studies. Most studies, 79 (56%), fell within the Size A category of under 100 participants (this includes healthy controls), and 6% had 500 or more participants. If we look in more detail at those with cognitive impairment, no dementia…

Study size (2): number converting
We then extracted the numbers of participants with cognitive impairment at baseline, and the numbers reported as ‘converting’ to dementia by final follow-up, within each test category. Bear in mind these are likely to be over-estimates, because of the difficulty in determining unique samples across study papers. The proportion converting by final follow-up was similar across modalities, ranging from 31% (sMRI) to 44% (PET-PiB), with a mean conversion of 39%. But let’s now take a look at the methods and reporting of the studies themselves…
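For clarity, the conversion proportion behind those percentages is simply the pooled number of converters over the number at risk at baseline; the counts below are a hypothetical illustration of the calculation, not figures extracted from the review:

```latex
\text{conversion proportion} =
  \frac{n_{\text{converted to dementia by final follow-up}}}
       {n_{\text{cognitively impaired, no dementia at baseline}}}
% Illustrative only: 310 converters among 1000 baseline participants
% gives 310/1000 = 31%, the lower (sMRI) end of the reported 31-44% range.
```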

Results: STARD
- CONSORT: Consolidated Standards of Reporting Trials
- QUOROM: Quality of Reporting of Meta-analyses
- MOOSE: Meta-analysis of Observational Studies in Epidemiology
- STARD: Standards for Reporting of Diagnostic Accuracy (2003)
Other similar initiatives include CONSORT, QUOROM, and MOOSE. There have been three other published studies looking at the quality of reporting in studies of diagnostic accuracy. Those studies found little if any improvement in reporting standards since the STARD statement was published. None of those studies included diagnostic accuracy studies in dementia.

Results: items fully reported
Now I want to look in more detail at a number of these items. In the interests of time, I’m going to focus on those most likely to produce a biasing effect on diagnostic test accuracy:

STARD items examined
Item 11: Blinding
Item 17: Appropriate time interval

Item 11: Blinding
Describe whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test, and describe any other clinical information available to the readers.

Item 11: Blinding
Readers of the index test blind to final outcome: 32%
Readers of the reference standard blind to index test results: 23%
So only 23% of studies clearly reported that readers of the reference standard were blind to the results of the biomarker or imaging test. For those that did not report either way, what is one to assume? And what about other clinical information? In a prospective study, a reader of the index test can hardly be anything but blind to a result that has not yet occurred! What matters more for us is that readers of the index test be blind to all other clinical information at the time of the test.

Item 17: Time between tests
Report the time interval from the index tests to the reference standard, and any treatment administered between.
Usually the time between tests in a study of diagnostic accuracy is ideally as short as possible, to reduce confounders between tests such as disease deterioration or the effects of intervening treatment. However, for the types of studies we are considering here, there is necessarily a period of ‘delayed verification’ of the disease of interest.

Item 17: Time between tests
For this item, the assessors were asked not only to specify whether the interval was reported, but also to extract details of the follow-up periods: 1. How follow-up had been reported

Item 17: Time between tests
Instead, follow-up information reported in each primary paper was extracted. This highlighted large variability in how follow-up is reported, as well as variability in the lengths and frequencies of follow-up across studies. Within the Aβ studies, 14 (38%) reported mean time to final follow-up plus standard deviation; tau, 14 (42%); PET-FDG, 12 (55%); PET-PiB, 3 (33%); MRI, 32 (46%). The range of final follow-up was 1.0–12.0 years in the Aβ studies, 1.0–12.0 in tau, 1.0–5.4 in PET-FDG, 1.2–10.8 in PET-PiB, and 1.0–9.0 in MRI.

STARDdem
Phase 1: Evaluation – complete
Phase 2: Discussion, collation, re-drafting
Phase 3: Delivery
Three phases. Phase 1 – evaluation of existing guidelines: complete. Phase 2 – a period of discussion and consultation with key stakeholders and experts on possible modifications (60 days). Phase 3 – delivery. We plan to go live in two weeks. But here is a sneak preview…

STARDdem
Phase 1: Evaluation – complete
Phase 2: Discussion, collation, re-drafting
Phase 3: Delivery
Phase 2 – a period of discussion and consultation with key stakeholders and experts on possible modifications (60 days). Phase 3 – delivery. We plan to go live in two weeks. But here is a sneak preview… // draft new checklist // more detail on the next slide – Melbourne in March.

Phase 2: Delphi method (n=10)
Round 1: 2 papers – original STARD
Round 2: 3 papers – Methods
Round 3: 3 papers – Methods
Round 4: 3 papers – Results
Round 5: 2 papers – original STARD → STARDdem
Discussion and item generation take place between rounds, followed by wider web-based development of consensus on the extended STARD criteria.
STARD and QUADAS were developed using adapted Delphi methods similar to the above. Starting point: poor consensus and a wide range of interpretations; end point: good agreement and tailored guidelines.

STARDdem
Phase 1: Evaluation
Phase 2: Discussion
Phase 3: Delivery