Long Term Impacts of Research Capacity Building in Global Health

Presentation transcript:

Long Term Impacts of Research Capacity Building in Global Health Fogarty International Center Celia Wolfman, MSPPM Rachel Sturke, PhD, MPH, MIA This presentation discusses the ways FIC explores, defines, and captures the long-term impacts of its research and research training programs. Understanding these impacts not only informs the Institute programmatically but also provides better context for the growth and history of the IC.

The Fogarty Mission
- Supporting and facilitating global health research conducted by U.S. and international investigators
- Building partnerships between health research institutions in the U.S. and abroad
- Training the next generation of scientists to address global health needs and challenges

The Fogarty Pipeline [Diagram: a research training pipeline running from individuals to institutions and from domestic to foreign settings, through college, graduate, doctoral, and post-doc stages; research areas span infectious, chronic, and implementation science; collaborations and networks support institution strengthening.]

Fogarty Values and Approach Fogarty is a funder of research-based training. We center on LMICs and on growing a critical mass of trainees who become independent, capable researchers; this creates a sustainable research environment. We also value collaborations between U.S. and foreign institutions, and increasingly we encourage foreign institutions to build partnerships with one another to create local networks. FIC has been around for over 35 years. We take a long-term approach and understand that most big impacts take time to mature before creating change in LMICs.

Components of Research Capacity Development
- Training and Career Development: supporting careers of LMIC scientists; training LMIC researchers; training of scientific administrators; experience in low-resource settings for U.S. researchers
- Research: funded research in LMICs; encouraging collaborative research
- Institution: support in curricula development; support in cross-cutting research (e.g., informatics, bioethics)

Metrics Utilized by Fogarty
- OUTPUTS (more easily captured & more quantitative): trainees, publications, research laboratories
- OUTCOMES: new competitive grants, institutional changes
- IMPACTS (harder to measure & often qualitative): acquisition of leadership positions in research/policy, a critical mass of scientists, organizational partnerships/network development
Outputs are often easy to measure (counts); impacts are more qualitative. We want to know what the right parameters are for these metrics.
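To make the output/outcome/impact continuum concrete, here is a minimal Python sketch of how the slide's classification might be represented; the metric names and fields are illustrative, not Fogarty's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    level: str          # "output", "outcome", or "impact"
    quantitative: bool  # outputs tend to be counts; impacts often need qualitative evidence

METRICS = [
    Metric("trainees", "output", True),
    Metric("publications", "output", True),
    Metric("new competitive grants", "outcome", True),
    Metric("leadership positions in research/policy", "impact", False),
    Metric("organizational partnerships/networks", "impact", False),
]

# Outputs are easy to count; impacts usually require qualitative evidence.
for m in METRICS:
    kind = "count" if m.quantitative else "qualitative evidence"
    print(f"{m.level:>7}: {m.name} ({kind})")
```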

Articles by Topic Area (Fogarty DIEPS, 2014) [Chart: intramural program publications binned by topic area.] Outputs are easier for us to capture, define, and count; this is an example of publications binned by category for our intramural program. Source: NIH Library (July 2012)

Questions Regarding Long-Term Impact Standard metrics don't always apply. What have FIC-funded research training programs contributed to? What are the mechanisms through which these impacts have occurred? What types of data should be used (e.g., for networks, policy influence, tracking trainee careers)? What lessons learned can be applied by other FIC-funded programs, and more broadly by other research training programs?

Methodological Challenges for Capturing Long-Term, Distal Impacts
- Establishing a universal "capacity building" metric
- Defining linkages between research inputs, outputs, outcomes, and impacts
- Attribution vs. contribution in a multiple-funder environment
- Creating a framework for measuring impact that can accommodate the priorities of different stakeholders
There are some overarching challenges to answering these questions. For example, we don't have a clear understanding of when capacity has been reached. What is that metric? When are we done? Similarly, we know that the linkages between outputs, outcomes, and impacts are not a simple cause-and-effect chain. In an environment with multiple stakeholders and funders, we can never take sole responsibility for an impact, but rather "contribute" to it.

World RePORT (Top 2013 Institutions) World RePORT provides an illustrative map (and downloadable dataset) to facilitate communication and coordination of biomedical research funded by major government agencies and philanthropic organizations around the world. There are some systems in place that allow us to systematically capture longer-term impacts. For example, to help us understand synergies with other funders and the inputs that could affect our outcomes, NIH and other international funders developed World RePORT. This database captures grants from multiple funders, including FIC and the rest of NIH, to provide the larger context.
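As a sketch of how the downloadable dataset might be explored, the following assumes a hypothetical CSV export with `funder` and `recipient_country` columns; the actual World RePORT schema may differ:

```python
import pandas as pd

# Hypothetical export file and column names -- not the actual World RePORT schema.
df = pd.read_csv("world_report_export.csv")

# Count awards per funder and recipient country to see where funding overlaps,
# i.e., which countries receive awards from several funders at once.
overlap = (
    df.groupby(["funder", "recipient_country"])
      .size()
      .reset_index(name="awards")
      .sort_values("awards", ascending=False)
)
print(overlap.head(10))
```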

CareerTrac CareerTrac is an innovative global trainee tracking and evaluation system for Fogarty. It is used to:
- Document Fogarty's investment in international capacity building
- Track extramural trainees' progress and career paths
- Support grantees' reporting needs
- Evaluate: monitor outputs, outcomes, and impacts
- Benchmark
CareerTrac allows us to understand our training outcomes. Over three-quarters of our programs are training programs, so understanding what happens to our trainees and their career paths is important. However, this is not a straightforward quantitative metric. Instead, we need to capture multiple outcome accomplishments (publications, employment, awards, degrees, etc.), including qualitative data, to understand the paths of our trainees. CareerTrac allows for this.
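The slide's point that trainee outcomes are multi-dimensional can be illustrated with a sketch of a trainee record; every field name here is an assumption for illustration, not CareerTrac's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TraineeRecord:
    trainee_id: str
    home_country: str
    training_end_year: int
    returned_home: Optional[bool] = None  # None = grantee has not reported yet
    # A trainee's trajectory spans several outcome types, not one count:
    publications: list = field(default_factory=list)
    degrees: list = field(default_factory=list)
    awards: list = field(default_factory=list)
    employment: list = field(default_factory=list)
    qualitative_notes: list = field(default_factory=list)  # career-path narrative
```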

Fogarty Trainees (1989-2015) 5,400 Long-Term Trainees from 120 Countries Source: CareerTrac

Return Home Rate of Fogarty Trainees [Chart: return home rates by cohort: 90%, 92%, 97%, 92%, 94%; overall 92%.] Figures use the canned CareerTrac report and include only long-term trainees (n=4,694), of whom 28% (n=1,293) were excluded because they did not report whether they returned. Trainees who ended training in 2015 are not included, as grantees had not yet entered their return status into CareerTrac. Note: 28% not reporting. Source: CareerTrac
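The denominator behind the overall 92% figure can be checked against the slide's own numbers; the returned count below is inferred from the reported rate, for illustration only:

```python
# Numbers from the slide; "returned" is inferred from the reported 92% rate.
long_term_trainees = 4694
not_reporting = 1293                    # ~28% had no return status entered

reporting = long_term_trainees - not_reporting  # 3401 trainees with known status
returned = round(0.92 * reporting)              # ~3129 reported returning home

print(f"{reporting} reporting; return rate = {returned / reporting:.0%}")  # ~92%
```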

How Case Studies Can Help
- Convey a more comprehensive understanding of career development or policy development
- Capture the interplay between research, research capacity, delivery of health care, and population health improvements
- Complement the conventional short-term metrics of success
With these systems in place, we can use them to create case studies. Their benefits include: 1) conveying a more comprehensive understanding, not only of the outcomes of these investments in terms of career development, but of broader effects on institutional capacity, knowledge production and translation, policy development, and ultimately improved health in-country; 2) capturing the interplay between research, research capacity, delivery of health care, and population health improvements, which provides a deeper understanding of the relationship between research capacity, a strengthened health system, and improved health outcomes.

Case Study Example This is an example of a case study used in the evaluation of our Brain Disorders program. Through the development of the case studies and interviews with both foreign and U.S. collaborators, we were able to understand the ripple effect our grant had on the larger research community. As shown, this grantee was able to create networks and train individuals in Colombia who were able to support the large-scale (and currently ongoing) Alzheimer's clinical trial being conducted at the local institution.

Fogarty Challenges: Capacity Building What is success for Fogarty? When has capacity been reached? The answer depends on what "capacity building" includes: institutions, research areas, independent scientists, other funders' investments. Ultimately we are trying to build a critical mass of researchers, but defining this is hard. We don't have a universal metric for what "success" looks like in terms of capacity, so knowing how and when we are done is complicated.

Thank you For questions or more information: Celia Wolfman 301.594.7857 wolfmancm@mail.nih.gov Links: Fogarty: www.fic.nih.gov World RePORT: www.worldreport.nih.gov CareerTrac: https://careertrac.niehs.nih.gov