Direct Ranking of Applications: Pilot Study
Amy Rubinstein, Ph.D., Scientific Review Officer

Current System for Evaluating and Ranking Applications Reviewed in CSR Study Sections

Applications are assigned to three reviewers who provide preliminary impact scores (1-9) and critiques. After panel discussion of each application in the top 50%, all panel members vote on a final overall impact score. Each application's score is the average of all panel members' votes multiplied by 10, yielding final scores of 10-90. R01 applications are assigned a percentile based on the scores of applications reviewed in the relevant study section in that round and the previous two rounds.
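The scoring arithmetic described above can be sketched as follows. This is an illustrative sketch only: the votes and score history are hypothetical, and the percentile here is a simple cumulative fraction, not NIH's exact percentile formula.

```python
# Sketch of the impact-score and percentile arithmetic described above.
# Votes and history are hypothetical; the percentile calculation is a
# simplified stand-in for NIH's actual base calculation.

def final_impact_score(votes):
    """Average the panel's 1-9 votes and multiply by 10 (range 10-90)."""
    return round(10 * sum(votes) / len(votes))

def percentile(score, historical_scores):
    """Simplified percentile against scores from this study section in
    the current round and the previous two rounds (lower is better)."""
    at_or_better = sum(1 for s in historical_scores if s <= score)
    return round(100 * at_or_better / len(historical_scores))

votes = [2, 3, 2, 3, 3, 2]                           # hypothetical panel votes
score = final_impact_score(votes)                    # -> 25
history = [20, 22, 25, 25, 31, 40, 47, 55, 61, 70]   # hypothetical score base
print(score, percentile(score, history))
```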

Why Consider Direct Ranking?

The number of applications reviewed by NIH is at or near historic highs, and award rates are at historic lows. It can be difficult to differentiate among the top 1-20% of applications reviewed in study sections. Concern about whether an application will be funded results in compression of scores in the 1-3 range (final scores between 10 and 30). Percentiles are used to rank applications reviewed in different study sections, but score compression leaves many applications with the same percentile, making funding decisions more difficult.

[Figure: Distribution of scores for discussed applications]

CSR All Percentiles at Scores of 20, 25, and 30

[Table: percentile equivalents of overall impact scores 20, 25, and 30, by Council round]

Symposium on Ranking: August 2013

Andrea Hollingshead, University of Southern California (rank-order methods and procedures)
Reid Hastie, University of Chicago (behavioral and psychological aspects of ranking and rating systems)
David Budescu, Fordham University (statistical considerations when designing scoring and ranking systems)
Donald Saari, University of California, Irvine (effects of different voting methods on order of preference, and how outcomes can be manipulated)

Potential Advantages of a Rank-Order Method

Reviewers would not be forced to give applications higher (worse) overall impact scores than they think the applications deserve. Reviewers would be required to distinguish between applications of similar quality and to separate the very best from the rest. Reviewers would also have an opportunity to re-rank applications after hearing the discussion of all applications, something that is less practical under the current system.

Possible Disadvantages of Direct Ranking

New Investigator applications are reviewed in a separate cluster but must be integrated into the final rank order of all applications reviewed. Applications cannot be ranked against those from the previous two rounds, as is done with the percentile system. Reviewers in study sections that cover highly diverse scientific areas may find direct ranking more difficult. Direct ranking may not allow reviewers to indicate that applications are of essentially equivalent quality. Private ranking may also lack the transparency of the current system, in which reviewers who vote outside the range set by the assigned reviewers must provide justification during or after the discussion.

Pilot Study for Direct Ranking of Applications

The pilot will be carried out in parallel with the current review system. Applications will be scored as usual; reviewers will also be asked to privately rank the top 10 R01 applications discussed on a separate score sheet. Rank data will be analyzed internally; average rank information will not be shared with Program staff or used to influence funding decisions.

Data Analysis

Average rank, minimum rank, and maximum rank for each application. Variance and standard deviation of ranks. Percentage of eligible reviewers ranking a given application in the top 10.
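The per-application summaries listed above amount to simple descriptive statistics over reviewer ballots. A minimal sketch, using entirely hypothetical ballots; a missing entry means that reviewer did not place the application in their top 10:

```python
# Sketch of the per-application rank summaries: average, min, max,
# variance, standard deviation, and percentage of eligible reviewers
# ranking the application in the top 10. Ballot data is hypothetical.

from statistics import mean, pstdev, pvariance

# reviewer -> {application: rank}; hypothetical ballots
ballots = {
    "R1": {"A": 1, "B": 2, "C": 3},
    "R2": {"A": 2, "B": 1, "C": 4},
    "R3": {"A": 1, "C": 2},          # R3 left B out of their top 10
}

def summarize(app):
    ranks = [b[app] for b in ballots.values() if app in b]
    n_eligible = len(ballots)
    return {
        "avg": mean(ranks),
        "min": min(ranks),
        "max": max(ranks),
        "var": pvariance(ranks),
        "sd": pstdev(ranks),
        "pct_top10": 100 * len(ranks) / n_eligible,
    }

print(summarize("A"))
```

Population variance is used here since the ballots cast are the whole population of interest for a given panel, not a sample.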

Pilot Evaluations: Questions to Answer

How does the rank order of directly ranked applications compare to the order obtained by percentiles? Does the rank-order method produce fewer ties than the percentile method, and therefore a better spread of scores? How much consensus among reviewers is evident in the rank system? (The variance and the percentage of eligible reviewers ranking an application in the top 10 will be examined.) How do reviewers perceive the new system? Do they find it difficult to remember earlier discussions, or to judge the relative rank of applications of similar quality? (Reviewers will be asked to provide optional, open-ended feedback after the pilot.) How might NIH use data from a rank-ordering system? (Results will be discussed within CSR and with Program staff using aggregate data; no rank orders for specific applications will be shared with Program Directors.)
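The first two questions above, counting ties and measuring agreement between the two orderings, can be sketched with toy data. The scores and average ranks below are hypothetical, and the concordance measure is a crude stand-in for a proper rank correlation such as Kendall's tau:

```python
# Sketch of two pilot-evaluation comparisons: how many tied pairs each
# method produces, and how often the two methods order a pair of
# applications the same way. All data is hypothetical.

from collections import Counter
from itertools import combinations

percentile_scores = {"A": 9, "B": 9, "C": 12, "D": 9, "E": 15}
avg_ranks = {"A": 1.3, "B": 2.1, "C": 3.0, "D": 2.4, "E": 4.8}

def tie_count(values):
    """Number of tied application pairs under a given method."""
    counts = Counter(values.values())
    return sum(c * (c - 1) // 2 for c in counts.values())

def concordance(x, y):
    """Fraction of application pairs ordered the same way by both
    methods (ties count against agreement)."""
    pairs = list(combinations(x, 2))
    agree = sum(1 for a, b in pairs
                if (x[a] - x[b]) * (y[a] - y[b]) > 0)
    return agree / len(pairs)

print(tie_count(percentile_scores), tie_count(avg_ranks))  # 3 0
print(concordance(percentile_scores, avg_ranks))           # 0.7
```

In this toy data the percentile method ties three pairs of applications at the same score while direct ranking ties none, which is exactly the spreading effect the pilot is designed to measure.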

Direct Ranking Pilot Working Group Members

Dr. Ghenima Dirami, SRO, Lung Injury and Repair study section
Dr. Tomas Drgon, SRO, Biostatistics Methods and Research study section
Dr. Gary Hunnicutt, SRO, Cellular, Molecular and Integrative Reproduction study section
Dr. Raya Mandler, SRO, Molecular and Integrative Signal Transduction study section
Dr. Atul Sahai, SRO, Pathobiology of Kidney Disease study section
Dr. Wei-qin Zhao, SRO, Neurobiology of Learning and Memory study section
Dr. Adrian Vancea, Program Analyst, Office of Planning, Analysis and Evaluation