Abt Associates | pg 1 Performance Management Systems and Evaluation: Towards a Mutually Reinforcing Relationship Jacob Alex Klerman (Abt Associates) APPAM/HSE Conference “Improving the Quality of Public Services” Moscow, June 2011

Abt Associates | pg 2 Performance Management Systems and Evaluation
Performance Management | Evaluation

Abt Associates | pg 3 Performance Management Systems and Evaluation
Performance Management
• The need is clear
  – "What gets measured gets done"
  – If you know what you want done, you need to manage against it
  – To manage against it, you need to measure it
Evaluation

Abt Associates | pg 4 Performance Management Systems and Evaluation
Performance Management
• The need is clear
  – "What gets measured gets done"
  – If you know what you want done, you need to manage against it
  – To manage against it, you need to measure it
Evaluation
• But what do you want done?
  – Are you sure?
• That's the role of rigorous impact evaluation
  – Dirty little secret: much of what we do, much of what seems "plausible", has minimal impact (or even hurts)

Abt Associates | pg 5 Outline
• Current Practice
• A Better Way
• Closing Thoughts

Abt Associates | pg 6 Rigorous Impact Evaluation Is Crucial
• Everyone wants better program outcomes
  – We might even be willing to spend more if we could prove better outcomes
• Proving "better outcomes" requires rigorous impact evaluation
  – Many apparently plausible programs (and program innovations) don't work
  – Naive evaluation methods give the wrong answer
• Rigorous impact evaluation is challenging
  – Requiring large samples
  – And, the smaller the projected incremental impact, the larger the required samples

Abt Associates | pg 7 Current Evaluation Practice Isn't Very Useful
• Asks the wrong question: Does the program "work"?
  – i.e., Should we shut the program down?
  – Big programs address major social problems
  – The programs aren't going away

Abt Associates | pg 8 Current Evaluation Practice Isn't Very Useful
• Asks the wrong question: Does the program "work"?
  – i.e., Should we shut the program down?
  – Big programs address major social problems
  – The programs aren't going away
• The right question is often: How can we make the program better?
  – Which program model works better?
  – Would some minor (and affordable) change in program design help?
  – For which subgroups does our program work? Target the program at them

Abt Associates | pg 9 The Realities of Sample Size and Cost
• Answering the up/down evaluation question requires (relatively) small samples
  – For a training program, perhaps 500-2,000 cases
• Answering practitioners' questions requires much larger samples
  – For a training program, perhaps 10,000+ cases
• At current evaluation costs ($1,000+ per case) we can't afford to answer practitioners' questions
  – Especially if the change in outcomes will be at best small
  – And that's a big problem, because CQI/kaizen suggests that major improvement often comes from lots of small improvements
To answer practitioners' questions, we're going to need to get the cost way down
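The sample sizes on this slide follow from a standard two-arm power calculation. The sketch below is illustrative only: the effect sizes (0.25 and 0.05 standard deviations), the 80% power, and the 5% significance level are assumptions, not figures from the presentation.

# Illustrative power calculation: an "up/down" question about a full program
# needs far fewer cases than a question about a small incremental improvement.
from statistics import NormalDist

def total_cases(effect_size_sd, alpha=0.05, power=0.80):
    """Total cases (treatment + control, equal split) needed to detect a mean
    difference of effect_size_sd standard deviations."""
    z = NormalDist().inv_cdf
    per_arm = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size_sd ** 2
    return 2 * per_arm

print(round(total_cases(0.25)))  # ~500 cases for an assumed full-program impact of 0.25 SD
print(round(total_cases(0.05)))  # ~12,600 cases for an assumed incremental impact of 0.05 SD
# At $1,000+ per surveyed case, the second study alone would cost over $12 million.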

Abt Associates | pg 10 Steps in a Current Evaluation
• Negotiate access to sites, including convincing them to deny service to some applicants
• Customize randomization for each site
• Detailed process analysis at each site
• Detailed survey follow-up
Is there another way? Sometimes, yes …

Abt Associates | pg 11 Outline
• Current Practice
• A Better Way
• Closing Thoughts

Abt Associates | pg 12 At the Back End: Leverage Ongoing Performance Management Systems
• We have just argued that collecting information on outcomes drives costs
• Performance measurement systems already collect information on outcomes
  – Presumably on the key outcomes
• So, when we can measure outcomes through the performance measurement system
  – Costs will be much, much lower
  – Allowing large samples
  – A key requirement for evaluating incremental changes
• Will only work when both treatment and control are "in the system" (e.g., incremental changes)
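As a purely illustrative sketch of this back-end idea, the fragment below estimates an impact directly from outcome records of the kind a performance management system already holds; the record layout, the "employed" outcome, and the site assignment are all assumptions, not part of the presentation.

# Minimal sketch: estimate a site-level impact from existing PM outcome records,
# with no new survey data collection.
from collections import defaultdict

def impact_from_pm_data(pm_records, treatment_sites):
    """Difference in mean outcome between treatment and control sites."""
    by_site = defaultdict(list)
    for rec in pm_records:                       # rec = {"site": ..., "employed": 0 or 1}
        by_site[rec["site"]].append(rec["employed"])
    site_means = {site: sum(v) / len(v) for site, v in by_site.items()}
    treat = [m for s, m in site_means.items() if s in treatment_sites]
    ctrl = [m for s, m in site_means.items() if s not in treatment_sites]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

records = [{"site": "A", "employed": 1}, {"site": "A", "employed": 0},
           {"site": "B", "employed": 0}, {"site": "B", "employed": 0}]
print(impact_from_pm_data(records, treatment_sites={"A"}))  # 0.5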

Abt Associates | pg 13 At the Front End: A Learning Organization
• Currently research is "top down"
  – Someone outside the system decides to evaluate X
  – Then, the evaluator tries to convince sites to adopt X, and to deny all services to a control group

Abt Associates | pg 14 At the Front End: A Learning Organization
• Currently research is "top down"
  – Someone outside the system decides to evaluate X
  – Then, the evaluator tries to convince sites to adopt X, and to deny all services to a control group
• The alternative is "bottom up"
  – Ask sites to suggest what to evaluate
  – Form a committee: site representatives, central program staff, substance experts, evaluation experts
  – Ask them to select from among the suggestions
  – Ask sites to volunteer to implement the selected suggestions
  – Randomize at the site level; the control condition is "current practice", not "no service"
Cutting time and costs
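A minimal sketch of the site-level randomization step in this bottom-up design, assuming a roster of volunteer sites; the site names, the even split, and the fixed seed are illustrative only.

# Assign whole volunteer sites to the selected suggestion or to current practice.
import random

def randomize_sites(volunteer_sites, seed=2011):
    """Split volunteer sites evenly between the new practice and current practice."""
    rng = random.Random(seed)            # fixed seed makes the assignment reproducible and auditable
    sites = list(volunteer_sites)
    rng.shuffle(sites)
    half = len(sites) // 2
    return {"new_practice": sites[:half], "current_practice": sites[half:]}

assignment = randomize_sites([f"site_{i:02d}" for i in range(1, 61)])
print(len(assignment["new_practice"]), len(assignment["current_practice"]))  # 30 30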

Abt Associates | pg 15 In Summary
Now | Better
Negotiate access to sites (expensive and time consuming) | They volunteer
Customize randomization for each site (expensive and time consuming) | Site-level randomization
Detailed process analysis (expensive) | Skip this
Collect detailed survey outcome data (very, very expensive) | Use Performance Management System data
And when your costs drop sharply, CQI is feasible; i.e., you can test little changes

Abt Associates | pg 16 Outline
• Current Practice
• A Better Way
• Closing Thoughts

Abt Associates | pg 17 A True Learning Organization
• Performance measurement is an ongoing task
• CQI/Continuous Quality Improvement; i.e.,
  – Proposing small changes to SOP/Standard Operating Procedures
  – Rigorously evaluating those small changes
  – Adopting those that can be shown to "help"
• … should also be an ongoing task
• The key insight of "kaizen" is that improved outcomes arise from the accumulation of lots of such small changes
Data collected as part of Performance Management Systems makes such CQI feasible

Abt Associates | pg 18 When Will This Work?
• Site-level randomization needs lots (50-200) of relatively similar sites
• The central organization controls resources
  – Much easier to get volunteers when volunteering is the only way to get more resources
We're looking for test cases. Any volunteers?
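Why so many sites? With site-level randomization, clustering of cases within sites inflates the required sample by the design effect. The sketch below is illustrative only; the per-site caseload (200) and intraclass correlation (0.01) are assumed values, not figures from the slide.

# Effective sample size under site-level randomization, using the standard
# design effect 1 + (m - 1) * icc for m cases per site.
def effective_sample_size(n_sites, cases_per_site, icc):
    """Individual-equivalent sample size once within-site clustering is accounted for."""
    design_effect = 1 + (cases_per_site - 1) * icc
    return n_sites * cases_per_site / design_effect

print(round(effective_sample_size(200, 200, 0.01)))  # ~13,400: clears the 10,000+ cases on pg 9
print(round(effective_sample_size(20, 200, 0.01)))   # ~1,300: too few sites, even with the same caseload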
