Systematic Review Module 7: Rating the Quality of Individual Studies
Meera Viswanathan, PhD
RTI-UNC EPC
Learning Objectives
– Define the concept of quality assessment
– What are the reasons for quality assessment?
– What are the stages in quality assessment?
– How do we report quality assessment?
CER Process Overview
(Figure: overview of the comparative effectiveness review process)
Reasons for Quality Assessment
Quality assessment is required for:
– Interpreting results
– Grading the body of evidence
Quality assessment may also be used for:
– Selecting studies for inclusion
– Selecting studies for pooling
What Is Quality Assessment?
Quality can be defined as “the extent to which all aspects of a study’s design and conduct can be shown to protect against systematic bias, nonsystematic bias, and inferential error” (Lohr, 2004).
– Considered synonymous with internal validity
– Relevant for individual studies
– Distinct from assessment of risk of bias for a body of evidence
Lohr KN. Rating the strength of scientific evidence: relevance for quality improvement programs. International Journal for Quality in Health Care 2004;16(1):9-18.
What Are the Components of Quality Assessment?
– Systematic errors include selection bias and confounding, in which values tend to be inaccurate in a particular direction.
– Nonsystematic errors are attributable to chance.
– Inferential errors result from problems in data analysis and interpretation, such as choice of the wrong statistical measure or wrongly rejecting the null hypothesis.
Lohr KN, Carey TS. Assessing 'best evidence': issues in grading the quality of studies for systematic reviews. Joint Commission Journal on Quality Improvement 1999 Sep;25(9):470-9.
Consider the Contribution of an Individual Study to a Body of Evidence
Study-level attributes: internal validity of results; size of study (random error); direction and degree of results; relevance of results (applicability); type of study; limitations in study design and conduct.
Body-of-evidence domains: risk of bias, precision, directness, consistency, applicability.
What Are the Stages in Quality Assessment?
1. Classify the study design.
2. Apply predefined criteria for quality and critical appraisal.
3. Arrive at a summary judgment of the study’s quality.
Questions to Consider When Classifying Study Design
– Is a control group present?
– Is there concurrent assessment of intervention or exposure status?
– Do investigators have control over allocation and timing?
– Do investigators randomly allocate interventions?
– Is there more than one group?
– Is there concurrent assessment of outcomes?
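The answers to these questions can be thought of as a decision procedure that maps a study onto a broad design category. The sketch below is an illustrative assumption, not part of the AHRQ guidance: the function name, the yes/no parameters, and the design labels are hypothetical, and a real classification would involve reviewer judgment.

```python
def classify_design(
    has_control_group: bool,
    concurrent_assessment: bool,
    investigator_allocates: bool,
    random_allocation: bool,
) -> str:
    """Map yes/no answers to the classification questions onto a
    coarse study-design label (illustrative sketch only)."""
    if not has_control_group:
        return "case series"                     # no comparison group at all
    if random_allocation:
        return "randomized controlled trial"     # investigators randomize allocation
    if investigator_allocates:
        return "nonrandomized controlled trial"  # investigators allocate, but not at random
    if concurrent_assessment:
        return "cohort study"                    # exposure assessed concurrently
    return "case-control study"                  # exposure assessed retrospectively
```

For example, a study with a control group in which investigators randomly allocate the intervention would come out as a randomized controlled trial, while one with a concurrently assessed comparison group but no investigator control over allocation would come out as a cohort study.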
Apply Predefined Criteria
Apply one of several available tools that consider:
– Similarity of groups at baseline in terms of baseline characteristics and prognostic factors
– Extent to which valid primary outcomes were described
– Blinding of subjects and providers
– Blinded assessment of the outcome
– Intention-to-treat analysis
– Differential loss to followup between the compared groups, or overall high loss to followup
– Conflict of interest
Additional Criteria for Trials
– Methods used for randomization
– Allocation concealment
Additional Criteria for Observational Studies
– Sample size
– Methods for selecting participants (inception cohort, methods to avoid selection bias)
– Methods for measuring exposure variables
– Methods to deal with any design-specific issues, such as recall bias and interviewer bias
– Analytical methods to control confounding
Arrive at a Summary Judgment of Quality
– Assign ratings of good, fair, or poor.
– Ratings may vary across outcomes for an individual study.
– Ratings should be based on an assessment of the impact of individual criteria on overall internal validity, rather than on summary scores.
Attributes of Good Studies
– A formal randomized controlled study
– Clear description of the population, setting, interventions, and comparison groups
– Appropriate measurement of outcomes
– Appropriate statistical and analytic methods and reporting
– No reporting errors
– Low dropout rate
– Clear reporting of dropouts
Attributes of Fair Studies
– Fair studies do not meet all the criteria required for a rating of good quality because they have some deficiencies.
– No flaw is likely to cause major bias.
– Missing information often drives the rating.
Attributes of Poor Studies
Significant biases, including:
– Errors in design, analysis, or reporting
– Large amounts of missing information
– Discrepancies in reporting
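The good/fair/poor logic on the slides above can be sketched in code. This is a minimal illustration under assumptions of my own: the criterion names, the three-valued per-criterion assessment (met, fatal flaw, not reported), and the function itself are hypothetical. In practice EPC reviewers apply judgment about each flaw's impact on internal validity rather than a mechanical rule or summary score.

```python
def rate_study(criteria: dict) -> str:
    """Illustrative good/fair/poor rating (sketch only).

    `criteria` maps each predefined quality criterion to:
      True  - criterion met
      False - flaw likely to cause major bias
      None  - information missing or not reported
    """
    values = criteria.values()
    if any(v is False for v in values):
        return "poor"   # significant bias: an error in design, analysis, or reporting
    if all(v is True for v in values):
        return "good"   # meets all predefined criteria
    return "fair"       # deficiencies or missing information, but no major bias
```

A study with every criterion met would rate good; one with only unreported items would rate fair (missing information often drives that rating); any flaw judged likely to cause major bias would push it to poor.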
Reporting Quality Assessment
Overall assessments of quality must be accompanied by a statement of:
– Flaws in the design or execution of a study
– An assessment of the potential consequences of those flaws
Poor studies may be excluded or included:
– Decisions should be guided by gaps in the current evidence.
– Selective inclusion of poor studies for subgroups should be justified.
Key Messages
Transparency of process:
– Full reporting on all elements of quality for each individual study
– Clear instructions on how abstractors scored quality
– Description of the reconciliation process
Transparency of judgment:
– Explanation of the final score
Key Source
Draft AHRQ Methods Guide, Chapter 6. AHRQ, 2007.
http://www.effectivehealthcare.ahrq.gov/repFiles/2007_10DraftMethodsGuide.pdf