Analysis of Overall Impact Scoring Trends within AHRQ Peer Review Study Sections
Gabrielle Quiggle, MPH; Rebecca Trocki, MSHAI; Kishena Wadhwani, PhD, MPH; Francis Chesley, MD
Office of Extramural Research, Education, and Priority Populations, Agency for Healthcare Research and Quality
Background: Peer Review
The Agency for Healthcare Research and Quality (AHRQ) has a chartered health services research Initial Review Group (IRG) responsible for the peer review of grant applications submitted in response to its funding opportunities. The IRG comprises five subcommittees, or study sections:
o Healthcare Systems & Value Research (HSVR)
o Healthcare Safety & Quality Improvement Research (HSQR)
o Healthcare Information Technology Research (HITR)
o Health Care Effectiveness and Outcomes Research (HEOR)
o Health Care Research Training (HCRT)
Background: Peer Review
Research grant applications submitted to AHRQ are reviewed by one of five standing study section committees. Applications are submitted in response to a Program Announcement (PA) or a Request for Applications (RFA).
General research grant mechanisms of interest:
R01 – Research project grants (independent)
R03 – Small research project grants
R18 – Research demonstration and dissemination project grants
Background: Scoring
AHRQ uses a 9-point overall impact score system to evaluate the scientific and technical merit of research grant applications submitted to AHRQ for funding opportunities.
Background: Scoring
The final overall impact score is the average of the impact scores given by the study section members as a whole, multiplied by 10. Percentiles are calculated to rank applications relative to one another.
[Scoring scale: preliminary scores range from 1 (exceptional) to 9 (poor); final overall impact scores range from 10 (high impact) to 90 (low impact).]
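As a minimal sketch of the arithmetic described above, the snippet below computes a final overall impact score from a hypothetical set of member scores and an illustrative percentile rank; the percentile calculation only shows the relative-ranking idea and is not AHRQ's official percentile-base method.

```python
from statistics import mean

def final_overall_impact_score(member_scores):
    """member_scores: 1-9 impact scores from study section members.
    Final score = mean of member scores x 10, so it falls on the
    10 (high impact / exceptional) to 90 (low impact / poor) range."""
    return round(mean(member_scores) * 10)

def percentile_rank(score, comparison_scores):
    """Illustrative percentile: share of scores in the comparison pool
    that are worse (numerically higher) than this application's score."""
    worse = sum(1 for s in comparison_scores if s > score)
    return 100 * worse / len(comparison_scores)

scores = [2, 3, 2, 4, 3]                     # hypothetical member scores
print(final_overall_impact_score(scores))    # -> 28
```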
Study Objectives
1. To determine whether trends exist in the scoring of research grant applications submitted to AHRQ for funding
2. To assess potential differences in scoring trends between the five AHRQ study sections
Methods: Data Collection
Final impact scores were obtained from the following applications:
o First-time applications
o Received from October 2009 to June 2014 (15 review cycles)
o Submitted to one of the five AHRQ study sections
o Not withdrawn
Resubmitted applications and applications not discussed (ND) were excluded from the sample. Data were collected using NIH eRA Commons and the NIH Query View & Report (QVR) database; a filtering sketch follows below.
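A hedged sketch of how these inclusion and exclusion criteria could be applied to an application-level extract; the file name and column names (submission_type, council_date, study_section, status) are hypothetical stand-ins, not actual eRA Commons or QVR field names.

```python
import pandas as pd

# Hypothetical application-level extract; columns are illustrative.
apps = pd.read_csv("ahrq_applications.csv", parse_dates=["council_date"])

sections = {"HSVR", "HSQR", "HITR", "HEOR", "HCRT"}

sample = apps[
    (apps["submission_type"] == "new")                          # first-time applications only
    & apps["council_date"].between("2009-10-01", "2014-06-30")  # 15 review cycles
    & apps["study_section"].isin(sections)                      # five AHRQ study sections
    & (apps["status"] != "withdrawn")                           # exclude withdrawn
    & (apps["status"] != "not_discussed")                       # exclude ND (no final score)
]
```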
Methods: Analysis
Means (SD) and medians (range) were calculated for each quarterly review meeting and fiscal year. Score trends were assessed by council meeting for each study section, using all application mechanisms.
o Subgroup analysis was conducted on applications considered under the general research mechanisms R01, R03, and R18 in FY2010–FY2014.
Percentile-standardized scores were used to compare score trends between study sections. Descriptive statistics and linear regression were conducted using MS Excel and SAS 9.3.
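The original analysis used MS Excel and SAS 9.3; the sketch below shows an equivalent computation in Python (pandas/numpy) under the same hypothetical column names as the filtering sketch above, with 'final_score' and 'review_cycle' (1–15) as assumed fields. It summarizes scores by study section and review meeting and fits a least-squares trend line per section.

```python
import numpy as np

# 'sample' as filtered above; 'final_score' and 'review_cycle' are assumed columns.
by_cycle = sample.groupby(["study_section", "review_cycle"])["final_score"]
summary = by_cycle.agg(["mean", "std", "median", "min", "max"])

# Score trend per study section: OLS line of median score vs. review cycle
# (slope and R^2), mirroring the linear regression described above.
for section, grp in summary.reset_index().groupby("study_section"):
    slope, intercept = np.polyfit(grp["review_cycle"], grp["median"], 1)
    pred = slope * grp["review_cycle"] + intercept
    ss_res = ((grp["median"] - pred) ** 2).sum()
    ss_tot = ((grp["median"] - grp["median"].mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    print(f"{section}: slope={slope:+.2f}, R^2={r2:.2f}")
```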
Results
AHRQ received 3,370 applications between Fiscal Year 2010 and Fiscal Year 2014; 1,752 (52%) applications were discussed and received a final overall impact score. Slight trends toward lower (better) median scores were found in four of the five AHRQ study sections:
Study section | Trend line slope | R²
HEOR | -0.67x |
HSQR | -0.98x |
HSVR | +0.45x |
HITR | -0.23x |
HCRT | -0.41x |
Results
Subgroup analysis included 1,086 applications (57% discussed) considered under the general research mechanisms R01, R03, and R18. Triaging of applications was high among R03 (54.8%) and R18 (57.5%) applications across all study sections, compared with R01 applications (11.4%).
Mean scores:
R01 = 34.2 ± 13.3 to 42.8 ± 12.5
R03 = 36.4 ± 15.2 to 41.9 ± 12.8
R18 = 38.0 ± 13.6 to 43.1 ± 18.0
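A short sketch of the subgroup tabulation, continuing the hypothetical columns from the earlier sketches ('mechanism', 'status', 'final_score'): triage (not-discussed) rate and mean ± SD of final scores by grant mechanism.

```python
# Restrict to the general research mechanisms; use the unfiltered extract so
# that not-discussed (ND) applications count toward the triage rate.
research = apps[apps["mechanism"].isin(["R01", "R03", "R18"])]

triage_rate = (research["status"] == "not_discussed").groupby(research["mechanism"]).mean()

discussed = research[research["status"] != "not_discussed"]
score_stats = discussed.groupby("mechanism")["final_score"].agg(["mean", "std"])
print(triage_rate.round(3), score_stats.round(1), sep="\n")
```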
Results
Comparison of score distributions between study sections: percentile scores did not differ by study section after adjusting for fiscal year, for R01 (F=0.74, p=0.53), R03 (F=0.31, p=0.82), or R18 (F=0.22, p=0.88).
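The kind of model behind F statistics like these can be sketched as a two-way ANOVA of percentile score on study section and fiscal year; the snippet below is an assumed reconstruction using statsmodels, not the SAS code actually used, with hypothetical 'percentile' and 'fiscal_year' columns.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# For each mechanism, test whether percentile scores differ by study section
# after adjusting for fiscal year (main-effects ANOVA).
for mech in ["R01", "R03", "R18"]:
    sub = discussed[discussed["mechanism"] == mech]
    model = smf.ols("percentile ~ C(study_section) + C(fiscal_year)", data=sub).fit()
    table = anova_lm(model, typ=2)
    print(mech, table.loc["C(study_section)", ["F", "PR(>F)"]].to_dict())
```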
Conclusions
Analysis of impact scores across study sections over time revealed no statistically significant differences. AHRQ study sections perform consistently over time, reflecting both the assessments of the reviewers and the quality of the applications. These results suggest that careful selection of subject-matter experts, together with consistency and uniformity in evaluating research grant applications, are best practices for peer review.
Acknowledgements
FIRST AUTHOR: Gabrielle Quiggle, MPH
AHRQ/OEREP staff:
Francis Chesley, MD – Director, OEREP/AHRQ
Kishena Wadhwani, PhD, MPH – Director, Division of Scientific Review (DSR)/OEREP
Rebecca Trocki, MSHAI – Program Analyst, DSR/OEREP
Resources
AHRQ study section review committees:
AHRQ research announcements:
AHRQ scoring criteria: