The Campbell Collaboration (www.campbellcollaboration.org)
Systematic Review Methods Workshop: Study Coding
Sandra Jo Wilson, Vanderbilt University
Editor, Campbell Collaboration Education Coordinating Group

Workshop Overview
Levels of Study Coding
– Study eligibility screening
  Abstract level
  Full-text level
– Study content coding
– Effect size coding (next session)
Common mistakes

Why do study coding?
Provide an accounting of the research included in your review.
– Also helps identify what's missing.
Identify the characteristics of the interventions, subjects, and methods in the research.
Assuming different primary studies produce different results, study coding allows you to identify variables that might explain those differences.

Abstract-Level Eligibility Screening
Does the document look like it might be relevant?
– Based on reading titles and abstracts.
If yes, retrieve the full text of the article.
– Exclude obviously irrelevant studies, but don't assume the title and abstract will give reliable information on study design, outcomes, subject characteristics, or even interventions.
When in doubt, double code.
– At least two trained raters working independently.

Full-Text Eligibility Screening
Develop an eligibility screening form with criteria.
Before screening, link together all reports from the same study.
Complete the form for all studies retrieved as potentially eligible.
Modify criteria after examining a sample of studies (controversial).
Double-code eligibility.
Maintain a database of results for each study screened.
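
As a concrete illustration, here is a minimal sketch of what one row of such a screening database might look like. The field names (study_id, eligible, exclusion_reason, etc.) are hypothetical; the actual fields should mirror your own eligibility form.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScreeningRecord:
    """One row in the eligibility-screening database (hypothetical fields)."""
    study_id: str                     # one ID per study, even if it has several reports
    report_ids: list = field(default_factory=list)  # all reports linked to this study
    screener: str = ""                # who completed the screening form
    eligible: Optional[bool] = None   # None = not yet decided / needs discussion
    exclusion_reason: str = ""        # e.g., "no control group", "subjects too old"

# Example: a study with two linked reports, awaiting a final eligibility decision
record = ScreeningRecord(study_id="S014", report_ids=["R021", "R033"],
                         screener="coder_A")
```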

Eligibility Screening Results
Once screening of all relevant full-text reports is complete, you will have:
– An accounting of the ineligible studies and the reasons for their ineligibility. Campbell and Cochrane reviews often include a table of ineligible studies as an appendix.
– A set of studies eligible for coding.

Sample Exclusion Table

Inclusion Criteria | Number Excluded
Intervention: A school-based or affiliated psychological, educational, or behavioral prevention/intervention program that involves actions performed with the expectation that they will have beneficial effects on student recipients. | 317
Population: Intervention directed toward school-aged youth (ages 4-18) or recent dropouts between the ages of for programs explicitly oriented toward secondary school completion or the equivalent. | 2
Population: General population samples of school-age children. Samples consisting exclusively of specialized populations, such as students with mental disabilities or other special needs, are not eligible. | 9

Sample Exclusion Table

Citation | Reason Excluded
Wilson, S. J. (2011). An experimental evaluation of a middle school mathematics curriculum. Journal of Education, 20. | Subjects too old; average age = 16.
Smith, J. (2010). Math curriculum study. Journal of Education, 19. | No control group.

Study Content and Effect Size Coding
Develop a coding manual that includes:
– Setting, study context, authors, publication date and type, etc.
– Methods and method quality
– Program/intervention
– Participants/clients/sample
– Outcomes
– Findings, effect sizes
The coding manual should have a "paper" version and an electronic version.
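
A minimal sketch of how the electronic version of such a manual might be organized follows. The section names track the list above; apart from SH20, H6, and H7 (which appear on later slides), the item IDs and labels are hypothetical.

```python
# Hypothetical skeleton of an electronic coding manual: one dictionary per
# section, with item IDs mapping to short item descriptions.
coding_manual = {
    "study_context": {"SH2": "Publication year", "SH20": "Publication type"},
    "methods":       {"H6": "Unit of group assignment", "H7": "Method of group assignment"},
    "intervention":  {"T1": "General program type", "T2": "Specific program elements"},
    "participants":  {"P1": "Mean age", "P2": "Gender mix", "P3": "Racial/ethnic mix"},
    "outcomes":      {"O1": "Construct measured", "O2": "Source of information"},
    "effect_sizes":  {"ES1": "Effect size type", "ES2": "Time of measurement"},
}
```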

Study Context Coding
Setting, study context, authors, publication date and type, etc.
– Multiple publications; "study" vs. "report"
– Geographical/national setting; language
– Publication type and the publication bias issue
– Publication date vs. study date
– Research, demonstration, and practice studies

Publication Type
[SH20] Type of publication. If you are using more than one type of publication to code your study, choose the publication that supplies the effect size data.
1. Book
2. Journal article
3. Book chapter (in an edited book)
4. Thesis or dissertation
5. Technical report
6. Conference paper
7. Other
9. Cannot tell
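
One way to carry a coded item like this into the electronic coding form is as an enumeration. This is a minimal sketch using the codes listed on the slide; it is an illustration, not part of the original coding protocol.

```python
from enum import IntEnum

class PublicationType(IntEnum):
    """Numeric codes for item SH20, as listed on the slide."""
    BOOK = 1
    JOURNAL_ARTICLE = 2
    BOOK_CHAPTER = 3
    THESIS_OR_DISSERTATION = 4
    TECHNICAL_REPORT = 5
    CONFERENCE_PAPER = 6
    OTHER = 7
    CANNOT_TELL = 9

# Coding a study whose effect size data come from a technical report
sh20 = PublicationType.TECHNICAL_REPORT
```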

Study Method Coding
Variety of options for coding study methods:
– Cochrane risk of bias framework
– GRADE system
– Method quality checklists
– Direct coding of methodological characteristics

Cochrane Risk of Bias Framework
Focus on identifying potential sources of bias:
– Selection bias: systematic differences between groups at baseline (coding allocation concealment and sequence generation)
– Performance bias: something other than the intervention affects groups differently (blinding of participants)
– Attrition bias: participant loss affects initial group comparability
– Detection bias: method of outcome assessment affects group comparisons (blinding of data collectors)
– Reporting bias: selective reporting of outcomes
Each domain is coded as low risk, high risk, or unclear risk.
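
A minimal sketch of how one study's risk-of-bias assessment might be stored follows. The domains and the low/high/unclear codes follow the slide; the field names and example values are hypothetical.

```python
# Hypothetical risk-of-bias record for a single study
risk_of_bias = {
    "selection_bias": {"sequence_generation": "low", "allocation_concealment": "unclear"},
    "performance_bias": "high",     # participants not blinded to condition
    "attrition_bias": "low",
    "detection_bias": "unclear",    # blinding of data collectors not reported
    "reporting_bias": "low",
}
```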

Modifications of the Cochrane Framework
Quasi-experimental studies may need additional coding with regard to selection bias, e.g., potential third variables that might account for results when groups were not randomized.
Some risk of bias items don't make sense in social science research:
– Blinding of study participants to what condition they're in
– Blinding of data collectors

GRADE System for Method Quality
Quality of evidence across trials; outcome-specific.
Considers: sparse data, consistency/inconsistency of results across trials, study designs, reporting bias, and the possible influence of confounding variables.
Software available at:
Also see:

Method Quality Checklists
Method quality ratings (or not)
More than 200 scales and checklists are available; few if any are appropriate for systematic reviews.
Overall study quality scores have questionable reliability and validity (Jüni et al., 2001).
– They conflate different methodological issues and study design/implementation features, which may have different impacts on reliability/validity.
– It is preferable to examine the potential influence of key components of methodological quality individually.
Weighting results by study quality scores is not advised!

Direct Method Coding
Methods: basic research design
– Nature of assignment to conditions
– Attrition, crossovers, dropouts, and other changes to assignment
– Nature of the control condition
– Multiple intervention and/or control groups
Design quality dimensions
– Initial and final comparability of groups (pretest effect sizes)
– Treatment-control contrast: treatment contamination, blinding

Direct Method Coding
Methods: other aspects
– Issues depend on the specific research area
– Procedural, e.g., monitoring of implementation and fidelity; credentials and training of data collectors
– Statistical, e.g., statistical controls for group differences; handling of missing data

Sample Coding Item
ASSIGNMENT OF PARTICIPANTS
[H6] Unit of group assignment. The unit on which assignment to groups was based.
1. Individual (i.e., some children assigned to the treatment group, some to the comparison group)
2. Group (i.e., whole classrooms, schools, therapy groups, sites, or residential facilities assigned to treatment and comparison groups)
3. Program area, region, school district, county, etc. (i.e., region assigned as an intact unit)
4. Cannot tell

Sample Coding Item
[H7] Method of group assignment. How participants/units were assigned to groups. This item focuses on the initial method of assignment to groups, regardless of subsequent degradations due to attrition, refusal, etc. prior to treatment onset. These latter problems are coded elsewhere.
Random or near-random:
1. Randomly after matching, yoking, stratification, blocking, etc. The entire sample is matched or blocked first, then assigned to treatment and comparison groups within pairs or blocks. This does not refer to blocking after treatment for the data analysis.
2. Randomly without matching, etc. This also includes cases where every other person goes to the control group.
3. Regression discontinuity design: a quantitative cutting point defines groups on some continuum (this is rare).
4. Wait-list control or other quasi-random procedure presumed to produce comparable groups (no obvious differences). This applies to groups whose members are apparently randomly assigned by some naturally occurring process, e.g., the first person to walk in the door. The key here is that the procedure used to select groups doesn't involve individual characteristics of persons, so the groups generated should be essentially equivalent.
Non-random, but matched: Matching refers to the process by which comparison groups are generated by identifying individuals or groups that are comparable to the treatment group using various characteristics of the treatment group. Matching can be done individually, e.g., by selecting a control subject for each intervention subject who is the same age, gender, and so forth, or on a group basis, e.g., by selecting comparison schools that have the same demographic makeup and academic profile as the treatment schools.
1. Matched ONLY on pretest measures of some or all variables used later as outcome measures.

What to Do about Method Quality?
My recommendation:
– Select a method of assessing and coding methodological quality, or design your own items.
– Code all the studies with whatever method you select.
– Examine the influence of the methodological characteristics on effect sizes at the analysis stage.
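
A minimal sketch of that analysis-stage step follows, assuming a table of coded effect sizes with hypothetical column names and simple inverse-variance weights. A full moderator analysis would typically use meta-regression or a weighted subgroup model rather than this bare comparison.

```python
import pandas as pd

# Hypothetical effect size data with one coded methodological moderator
df = pd.DataFrame({
    "study_id": ["S01", "S02", "S03", "S04"],
    "random_assignment": [True, True, False, False],
    "effect_size": [0.21, 0.35, 0.48, 0.52],
    "weight": [120.0, 85.0, 60.0, 40.0],   # e.g., inverse-variance weights
})

# Weighted mean effect size within each level of the moderator
sums = (
    df.assign(weighted_es=df["effect_size"] * df["weight"])
      .groupby("random_assignment")[["weighted_es", "weight"]]
      .sum()
)
print(sums["weighted_es"] / sums["weight"])  # a large gap suggests the method feature matters
```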

Intervention Coding
Program/Intervention
– General program type (mutually exclusive or overlapping?)
– Specific program elements (present/absent)
– Any treatment received by the comparison group
– Treatment implementation issues
  Integrity, implementation fidelity
  Amount, length, frequency, "dose"
– Goal is to differentiate across studies

Participant Coding
Participants/clients/sample
– Data are at the aggregate level (you're coding the characteristics of the entire sample, not a single individual)
– Mean age, age range
– Gender mix
– Racial/ethnic mix
– Risk, severity
– Restrictiveness; special groups (e.g., clinical)

Study Outcome Coding
Outcome measures
– Construct measured
– Measure or operationalization used
– Source of information, informant
– Composite or single indicator (item)
– Scale: dichotomous, count, discrete ordinal, continuous
– Reliability and validity
– Time of measurement (e.g., relative to treatment)

Structuring Your Data
Hierarchical structure of meta-analytic data:
– Multiple outcomes within studies
– Multiple measurement points within outcomes
– Multiple subsamples within studies
– Multiple effect sizes within studies
You need to design your coding scheme and eventual analytic datasets in a way that handles the hierarchical nature of your data.
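
A minimal sketch of that hierarchy as a nested record; the study, outcome, and effect size values are hypothetical.

```python
# Hypothetical nested structure: effect sizes within outcomes within a study
study = {
    "study_id": "S01",
    "subsamples": ["full sample", "high-risk subgroup"],
    "outcomes": [
        {
            "dv_num": 1,
            "construct": "reading achievement",
            "effect_sizes": [
                {"es_num": 1, "time": "posttest", "es": 0.32},
                {"es_num": 2, "time": "follow-up", "es": 0.18},
            ],
        },
        {
            "dv_num": 2,
            "construct": "school attendance",
            "effect_sizes": [{"es_num": 1, "time": "posttest", "es": 0.05}],
        },
    ],
}
```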

Coding Studies: Relational Files
Study-Level File (one row per study): Study ID | PubYear | MeanAge
Outcome-Level File (multiple rows per study): Study ID | DV Num | Construct
Effect Size-Level File (multiple rows per study): Study ID | DV Num | ES Num | Time | ES
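
A minimal sketch of the three relational files and how they can be merged into a flat analysis file with one row per effect size. The column names follow the slide; the data are hypothetical.

```python
import pandas as pd

# Hypothetical relational files; StudyID (and DVNum) link the levels together
study_level = pd.DataFrame({"StudyID": [1, 2], "PubYear": [2010, 2011], "MeanAge": [10.2, 14.5]})
outcome_level = pd.DataFrame({"StudyID": [1, 1, 2], "DVNum": [1, 2, 1],
                              "Construct": ["reading", "attendance", "math"]})
es_level = pd.DataFrame({"StudyID": [1, 1, 2], "DVNum": [1, 1, 1],
                         "ESNum": [1, 2, 1], "Time": ["post", "follow-up", "post"],
                         "ES": [0.32, 0.18, 0.45]})

# Merge down the hierarchy to build the analysis file: one row per effect size
analysis = (es_level
            .merge(outcome_level, on=["StudyID", "DVNum"])
            .merge(study_level, on="StudyID"))
print(analysis)
```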

Study Coding (AKA Data Extraction)
Double coding: Cohen's kappa
Agreement on key decisions
– Study inclusion/exclusion, key characteristics, risk of bias, coding of results
Pilot-test and refine the coding manual.
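
A minimal sketch of computing Cohen's kappa for two coders' inclusion decisions on a hypothetical set of ten studies; scikit-learn's cohen_kappa_score is used here as one readily available implementation.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical double-coded inclusion decisions for ten studies
coder_a = ["include", "exclude", "include", "include", "exclude",
           "exclude", "include", "exclude", "include", "include"]
coder_b = ["include", "exclude", "include", "exclude", "exclude",
           "exclude", "include", "exclude", "include", "include"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(round(kappa, 2))  # 0.8 for this example; disagreements still go to discussion and reconciliation
```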

Creating Your Coding Manual
Write up a detailed document that includes all the coding items.
– Organize the manual hierarchically: one section for study-level items, one for group-level items, one for dependent variables, and one for effect sizes.
– Include the coding manual in the Campbell Collaboration protocol.
Translate the coding manual into "forms" to use for actual coding:
– Paper forms, spreadsheets
– Databases

Common Mistakes
– Too many coding items (my biggest problem!)
– Too many subjective coding items
– Coding two reports from the same study as two different studies
– Coder drift
– Failure to ask questions

Thank You! Any Questions?
Sandra Jo Wilson