Retrospective Chart Reviews: How to Review a Review
Adam J. Singer, MD
Professor and Vice Chairman for Research
Department of Emergency Medicine
Stony Brook University

Retrospective chart reviews
 Generate original research: about 25% of all published studies
 Hypothesis generating
 Early observations
 Disease description and natural course
 May demonstrate associations
 Quality audits
 Derivation and validation of clinical prediction rules
 Cost analysis

Chart reviews
 Advantages: simple, quick, inexpensive, suited to rare diseases
 Disadvantages: no cause-and-effect inference; incomplete, inaccurate, or conflicting data; no study protocol

Process of transforming clinical record information into "hard" data
 Occurrence of event: basic clinical data may be erroneous, and clinicians vary in obtaining, interpreting, and recording data from histories, physicals, and tests
 Selection of patients to be studied: patients may be highly selected, limiting generalizability
 Systematic errors are more likely than random errors

 Assembling charts: charts may be missing, and bias may be introduced if the missing cases are deaths, are more (or less) severe, or have better (or worse) outcomes
 Locating desired information: documentation is often incomplete, and important data may be missing

 Reading the note: notes may be illegible or ambiguous, and two or more notes may give conflicting information for some variables
 Coding information: coding into categories and Likert scales requires interpretation, judgment, and guesswork; problems arise when information must be coded as "missing", "negative", or "unsure"

 Transferring data to a computer database: mistakes may be made in transcription
 Performing statistical analysis: statistical tests are run, but the quality of the data is not evaluated and errors in data extraction are forgotten

Strategies to improve the accuracy of chart reviews
 Training
 Case selection
 Definition of variables
 Abstraction forms
 Meetings
 Monitoring
 Blinding
 Testing of inter-rater agreement

Important questions when reviewing a (chart) review
 Is a chart review appropriate for the research question?
 Available charts must be representative of the population of interest
 Charts must include the pertinent data
 Sample size must be adequate
 Enough patients with the outcome of interest
 Only 10% of ED chart reviews include a sample size calculation
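As a rough illustration of the sample-size point, the number of charts needed to estimate an outcome proportion to a given precision can be sketched with the normal-approximation formula (a minimal sketch; the expected proportion and margin below are illustrative assumptions, not figures from the talk):

```python
import math

def charts_needed(p, margin, z=1.96):
    """Charts needed to estimate a proportion p within +/- margin
    at ~95% confidence (normal approximation)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Assumed example: ~20% of charts expected to show the outcome,
# estimated to within +/- 5 percentage points.
n = charts_needed(0.20, 0.05)  # 246 charts
```

A review that reports no such calculation gives the reader no way to judge whether the chart sample could have detected the effect of interest.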

Is investigator bias transparent?
 Identify any financial or philosophical conflicts of interest
 Data collection forms and definitions should be submitted as an appendix

Study and target population
 Representative sample
 Random sample
 External validity
 Study setting should be typical of the population of interest
 All charts should be included and have an equal chance of selection
Multiple methods of chart identification are helpful
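The equal-chance-of-selection point can be made concrete: pool the charts found by every identification method, then draw the study sample at random rather than by convenience (a minimal sketch; the chart IDs and identification methods are invented for illustration):

```python
import random

# Charts found by two hypothetical identification methods
# (e.g., discharge diagnosis codes and a procedure log; IDs invented).
by_diagnosis = {1001, 1002, 1003, 1004, 1005}
by_procedure = {1004, 1005, 1006, 1007}
eligible = sorted(by_diagnosis | by_procedure)  # each chart counted once

random.seed(7)  # fixed seed keeps the selection reproducible for audit
sample = random.sample(eligible, k=4)  # every chart has an equal chance
```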

Collected variables
 Possible conflicting entries
 Inaccuracies in documentation
 Errors in reading, interpreting, coding, and transcribing data
 One study found that 30% of care given was not documented and that 19% of care recorded was not given
 Variables should be well defined a priori
 Coding rules should be provided

Systematic data collection
 Use standardized data collection forms
Organized similarly to the data in the chart
 Enter data directly into a computer to minimize transcription errors
 Describe the coding options
 How were missing data handled?
 Pilot test the data collection form
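One way to make "coding options described a priori" operational is a codebook that the data-entry software enforces, so abstractors cannot improvise categories. A minimal sketch, with invented variable names and codes:

```python
# Invented codebook: each variable lists its only legal codes,
# including an explicit code for missing documentation.
CODEBOOK = {
    "fever_documented": {"yes", "no", "not documented"},
    "triage_level": {"1", "2", "3", "4", "5", "missing"},
}

def code_value(variable, raw):
    """Normalize an abstracted value and reject anything off-codebook."""
    value = raw.strip().lower()
    if value not in CODEBOOK[variable]:
        raise ValueError(f"{variable}: {raw!r} is not a legal code")
    return value
```

Rejecting illegal codes at entry time is what turns "coding rules should be provided" from a reporting nicety into an error check.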

Missing and conflicting data
 Missing data create selection bias
 Conflicting data are common and can lead to incorrect conclusions
 Determine whether the amount of missing data threatens validity
 Sensitivity analysis allows readers to estimate the effects of missing data
Assume all missing data are positive, then all negative
 Consider imputation
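The best-case/worst-case idea above takes only a few lines: recompute the outcome rate assuming every unabstractable chart was negative, then assuming every one was positive (a sketch; the counts are invented):

```python
def outcome_rate_bounds(positive, negative, missing):
    """Bounds on the outcome rate under extreme assumptions about
    charts whose outcome could not be abstracted."""
    total = positive + negative + missing
    worst = positive / total                # all missing were negative
    best = (positive + missing) / total     # all missing were positive
    return worst, best

# Invented counts: 40 positive, 140 negative, 20 unabstractable charts.
low, high = outcome_rate_bounds(40, 140, 20)  # 0.20 to 0.30
```

If the study's conclusion holds at both extremes, the missing data do not threaten validity; if it flips, they do.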

Abstractor bias
 Abstractors should be blinded to the study objectives and hypotheses
 If abstractors know the desirable value, they may overlook contradictory information or search diligently for it
 Abstractors are often the investigators themselves
 Only 4% of studies adhere to this principle
 Investigators should select and train abstractors who are blinded to the objectives and hypotheses

Abstractor training
 Some chart reviewers are non-medical
 May fail to recognize medical jargon or may misinterpret data
 May not know where in the chart to find data
 May have difficulty resolving internal discrepancies
 Fewer than 20% of studies describe abstractor training
 Investigators should use abstractors with sufficient training
Describe their background and training

Abstractor monitoring
 Consistency may decline over time
Especially in long-term studies
 Periodically compare completed forms with the actual charts
 Multiple meetings with abstractors are required to retrain and resolve disputes
 Only 9% of chart reviews document monitoring

Abstractor inter-rater reliability
 Ideally, multiple abstractors would review all charts
 This is seldom done
 The alternative is to establish inter-rater reliability on a sample of charts
 May need to be reassessed periodically

How much is enough?
 Report both raw agreement and kappa
Cohen's kappa for binary data
Weighted kappa for multiple categories
Intra-class correlation for continuous data
 The acceptable level of agreement may differ by variable (for death it should be 1.0)
 Agreement matters most for the most important confounders and outcomes
 A 10% sample may not be adequate if there are few or no "yes" answers
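For the binary case, raw agreement and Cohen's kappa can be computed directly from two abstractors' codes (a minimal sketch with invented ratings; weighted kappa and intra-class correlation would typically come from a statistics library):

```python
from collections import Counter

def raw_and_kappa(rater_a, rater_b):
    """Raw agreement and Cohen's kappa for two raters on the same charts."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal frequencies
    fa, fb = Counter(rater_a), Counter(rater_b)
    expected = sum(fa[c] * fb.get(c, 0) for c in fa) / n ** 2
    return observed, (observed - expected) / (1 - expected)

# Invented codes for 10 charts abstracted independently by two reviewers.
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
raw, kappa = raw_and_kappa(a, b)  # raw = 0.8, kappa ~ 0.58
```

The example shows why both numbers are reported: 80% raw agreement looks reassuring, but after correcting for chance the kappa is only moderate.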

What should investigators do?
 Consider other data sources
Large and representative sources such as NHAMCS
 Do not overlook the methods section
 Be cautious about drawing conclusions
 Adherence to reporting standards does not mean a review should be published
 The era of electronic records

Gilbert et al. Ann Emerg Med 1996;27:305
 Of 986 original articles reviewed from 3 peer-reviewed EM journals, 244 were chart reviews

Adherence to methodological standards (percent and 95% CI)
 Abstractor training
 Selection criteria
 Variables defined
 Standard abstraction forms
 Performance monitored
 Abstractor blinding
 Inter-rater reliability mentioned
 Inter-rater agreement tested: 0.4 (0 to 2)

Reassessing methods of medical record review studies in EM research (2003). Worster et al. Ann Emerg Med 2005;45:448. Adherence (%):
 Abstractor training: 18
 Case selection: 96
 Variable definition: 77
 Abstraction forms: 27
 Performance monitored: 9
 Blind to hypothesis: 4
 Agreement

The quality of medical record review studies in the international EM literature. Badcock et al. Ann Emerg Med 2005;45:444. Criteria assessed:
 Clear hypothesis
 Training
 Selection criteria
 Standard forms
 Defined variables
 Monitoring
 Blinding
 Inter-rater agreement
 Sample size calculation

RECENT CHART REVIEW: Ann EM 2016

METHODS