Challenges in Evaluating Basic Science Investments: A Funder’s Perspective
Julia Klebanov
Gordon and Betty Moore Foundation
Fosters path-breaking scientific discovery, environmental conservation, patient care improvements, and preservation of the character of the San Francisco Bay Area
Founded November 2000
Endowment of $6.4 billion
Gordon and Betty Moore Foundation
“We want the foundation to tackle large, important issues at a scale where it can achieve significant and measurable impacts.” – Gordon Moore
Science Program
Seeks to advance basic science through developing new technologies, supporting imaginative scientists, and creating new collaborations at the frontiers of traditional scientific disciplines.
[Image courtesy of Princeton University]
We fund research designed to:
Advance our understanding of the world by asking new questions
Enable new science through advances in technology
Break down barriers and cultivate collaborations
Enhance society’s understanding of the joy of science
Science Program Areas
Marine Microbiology Initiative
Data-Driven Discovery Initiative
Emergent Phenomena in Quantum Systems Initiative
Thirty Meter Telescope
Imaging
Science Learning
Astronomy
Special Projects
[Image courtesy of Sossina Haile]
Measurement, Evaluation and Learning in Philanthropy
Measurement: an internal process of gathering information to monitor progress in the implementation of our work; occurs on an ongoing basis
Evaluation: periodic assessments of ongoing or completed work; conducted either internally or by an external third party
Learning: using data and insights from measurement and evaluation to inform strategy and decision-making
Measurement, Evaluation and Learning at Moore
Responsible for ensuring access to the best evaluation, data, knowledge management, measurement systems and practices that support evidence-based decision-making
Examples:
Working with Science program staff to develop measurement frameworks
Designing and managing external evaluations
Facilitating internal reviews
Field-building
Developing a Funding/Evaluation Strategy
Unit of funding (e.g. individuals, institutions, projects)
Risk: supporting risky research is often a niche for philanthropies
Integrating both financial and non-financial support
How do we monitor and evaluate our investments?
Grantee requirements
Annual reports: self-reported data
Meetings/site visits
We track most quantitative data on scientific output (e.g. publications, presentations, number of instruments developed); see the sketch below
Internal research/strategy reviews
External evaluations
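As an illustration, here is a minimal sketch of how self-reported output counts from grantee annual reports might be rolled up across a portfolio. It is not the foundation’s actual system; the field names and metric categories are assumptions.

```python
# Minimal sketch (illustrative only): aggregating self-reported output
# counts from grantee annual reports. Field names and metric categories
# are assumptions, not the foundation's actual reporting schema.
from collections import Counter

annual_reports = [
    {"grant_id": "G-001", "year": 2015, "publications": 4, "presentations": 6, "instruments_developed": 1},
    {"grant_id": "G-002", "year": 2015, "publications": 2, "presentations": 3, "instruments_developed": 0},
    {"grant_id": "G-001", "year": 2016, "publications": 5, "presentations": 2, "instruments_developed": 0},
]

def roll_up(reports, metrics=("publications", "presentations", "instruments_developed")):
    """Sum self-reported output metrics across all reports in a portfolio."""
    totals = Counter()
    for report in reports:
        for metric in metrics:
            totals[metric] += report.get(metric, 0)
    return dict(totals)

print(roll_up(annual_reports))
# {'publications': 11, 'presentations': 11, 'instruments_developed': 1}
```

Counts like these are easy to total, but, as the following slides note, they say little on their own about quality or progress toward initiative-level outcomes.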
Conceptual Challenges for Monitoring & Evaluation
Basic science does not follow a linear path
Difficulty of setting up a measurement framework prospectively for outcomes that cannot be precisely defined
Tension between setting aspirational outcomes and being realistic about what is achievable during the life of the portfolio
Conceptual Challenges for Monitoring & Evaluation
Many of our initiatives have strategies aimed at developing new ways of thinking or changing the culture of a research community; capturing the nuances of progress towards these types of outcomes is a challenge
Timescale issues:
Large initiatives are typically approved for 5-7 years
Evaluations are conducted roughly 4-5 years into the life of an initiative
Ultimate impact is not expected until many years later
Measurement Challenges
Most data are self-reported: how can we objectively measure the progress of grants?
Limited baseline data: how do we collect this for the “state of the field”?
Bibliometrics: failure of grantees to acknowledge funding, and it doesn’t always capture quality (see the sketch below)
Investigator counterfactual: would they have done it anyway?
Contribution vs. attribution
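To make the acknowledgment problem concrete, here is a minimal, illustrative sketch of screening publication metadata for a funder’s name. The records, funder aliases, and field names are assumptions, not any real bibliometric pipeline or data source.

```python
# Minimal sketch (illustrative only): flagging publications whose
# acknowledgments text does not mention the funder. Records and alias
# strings are made up; real metadata sources vary widely in format.
FUNDER_ALIASES = ("gordon and betty moore foundation", "moore foundation", "gbmf")

publications = [
    {"doi": "10.1000/example.1", "acknowledgments": "Supported by the Gordon and Betty Moore Foundation (GBMF1234)."},
    {"doi": "10.1000/example.2", "acknowledgments": "The authors thank their colleagues for helpful discussions."},
]

def acknowledges_funder(record, aliases=FUNDER_ALIASES):
    """Return True if any known funder alias appears in the acknowledgments text."""
    text = record.get("acknowledgments", "").lower()
    return any(alias in text for alias in aliases)

missing = [p["doi"] for p in publications if not acknowledges_funder(p)]
print(missing)  # ['10.1000/example.2'] -- papers to follow up on with grantees
```

Even where acknowledgments are present, counting them captures only output volume, not the quality issues noted above.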
Measurement Challenges
Informal collection of qualitative data (e.g. getting out in the field, talking with grantees and members of the scientific community) raises different issues:
It diminishes the rigor of systematic data collection
Responses may be biased by what grantees think you want to hear
How do we aggregate the progress reported by grantees and roll it up to better understand progress towards overarching initiative outcomes?
How does this all relate to open science?
Need to be able to measure research outputs as early and often as possible
Limits of bibliometric analyses
Lag time: what’s beginning to emerge?
Missed learning
Open access policy
How can we meet our information needs going forward?
Re-examining how we develop our outcomes
Developing more meaningful, measurable interim milestones
Incorporating expert scientific peer review panels
What can we do to improve these practices?
Working with other funders
Evaluating our own practices and sharing lessons
Convening science evaluators
Questions?
Julia.Klebanov@moore.org