The Program Assessment Rating Tool (PART)


1 The Program Assessment Rating Tool (PART)
Mary Cassell, Office of Management and Budget, April 28, 2011

2 Overview
What is the PART?
How was it developed?
What are the components?
Quality controls
How was the PART used? (Budget; program improvements)
Lessons learned

3 “In God we trust… all others, bring data.” -W. Edwards Deming

4 Introduction
The PART was a component of the Bush Administration’s Management Agenda that focused on Budget and Performance Integration.
The PART promoted efforts to achieve concrete and measurable results.
The PART supported program improvements.

5 What is the Program Assessment Rating Tool (PART)?
A set of questions that evaluates program performance in four critical areas: Program Purpose and Design; Strategic Planning; Program Management; Program Results and Accountability.
A tool to assess performance using evidence.
Provides a consistent, transparent approach to evaluating programs across the Federal government.

6 Why PART?
Measure and diagnose program performance
Evaluate programs in a systematic, consistent, and transparent manner
Inform agency and OMB decisions on resource allocations
Focus on program improvements through management, legislative, regulatory, or budgetary actions
Establish accountability for results

7 How did the PART work?
Answers to questions generated scores, which were weighted and tallied into a total score.
Answers were based on evidence, evaluations, and data.
Ratings were based on total scores: Effective, Moderately Effective, Adequate, Ineffective.
Results Not Demonstrated was assigned to programs that did not have performance measures or data, regardless of overall score.
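To make the rating step concrete, here is a minimal Python sketch of the logic described above. The numeric cutoffs are illustrative assumptions (the slide does not give the actual PART score bands); the Results Not Demonstrated override follows the rule stated on this slide.

```python
def assign_rating(total_score: float, has_measures_and_data: bool) -> str:
    """Map a weighted total score (0-100) to a PART-style rating.

    The cutoffs below are illustrative assumptions, not the official
    PART bands. Programs lacking performance measures or data receive
    "Results Not Demonstrated" regardless of their score.
    """
    if not has_measures_and_data:
        return "Results Not Demonstrated"
    if total_score >= 85:
        return "Effective"
    if total_score >= 70:
        return "Moderately Effective"
    if total_score >= 50:
        return "Adequate"
    return "Ineffective"

print(assign_rating(88, True))   # Effective
print(assign_rating(88, False))  # Results Not Demonstrated
```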

8 PART Questions and Process
A set of analytical questions; explanations and evidence were required.
Standards of evidence held programs to a high bar.
Question weights could be tailored to reflect program specifics.
Interactions between questions.
Yes/No answers in diagnostic sections; four levels of answers in the results section.
Collaborative process with agencies; OMB had the pen.

9 How was the PART developed?
Designed by 12 OMB career staff, including one representative from each division.
Piloted with about 60 programs.
The pilot generated extensive input from agencies that resulted in several revisions, including changes in scoring and elimination of a question about whether the program served an appropriate federal role.
Conducted trial runs with research institutions.
Agency roll-out: OMB training, agency meetings, agency trainings.
Incorporated into 2002 budget decisions and materials.
The development, pilot, and revision process took about 6 months, including development of guidance and training.

10 PART Program Types
Direct Federal
Competitive Grant
Block/Formula Grant
Regulatory Based
Capital Assets and Service Acquisition
Credit
Research and Development

11 PART Questions
Section I: Program Purpose & Design (20%)
Is the program purpose clear?
Does the program address an existing problem or need?
Is the program unnecessarily duplicative?
Is the program free of major design flaws?
Is the program targeted effectively?
Section II: Strategic Planning (10%)
Does the program have strong long-term performance measures?
Do the long-term measures have ambitious targets?
Does the program have strong annual performance targets?
Does the program have baselines and ambitious targets?
Do all partners agree to the goals and targets?
Are independent evaluations conducted of the program?
Are budgets tied to performance goals?
Has the program taken steps to correct strategic planning deficiencies?

12 PART Questions
Section III: Program Management (20%)
Does the program collect timely performance information and use it to manage?
Are managers and partners held accountable for program performance?
Are funds obligated in a timely manner?
Does the program have procedures (IT, competitive sourcing, etc.) to improve efficiency?
Does the program collaborate with related programs?
Does the program use strong financial management practices?
Has the program taken meaningful steps to address management deficiencies?
Additional questions apply to specific types of programs.
Section IV: Program Results (50%)
Has the program made adequate progress in achieving its long-term goals?
Does the program achieve its annual performance goals?
Does the program demonstrate improved efficiencies?
Does the program compare favorably to similar programs, both public and private?
Do independent evaluations show positive results?
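As a rough illustration of how the four section scores combined into a total, here is a minimal Python sketch using the section weights shown on these two slides (20%, 10%, 20%, 50%); the per-section scores in the example are hypothetical.

```python
# Section weights as shown on the two PART Questions slides above.
SECTION_WEIGHTS = {
    "Program Purpose & Design": 0.20,
    "Strategic Planning": 0.10,
    "Program Management": 0.20,
    "Program Results": 0.50,
}

def total_part_score(section_scores: dict) -> float:
    """Combine per-section scores (each on a 0-100 scale) into a weighted total."""
    return sum(SECTION_WEIGHTS[name] * score
               for name, score in section_scores.items())

# Hypothetical program: strong design and management, weaker results.
example = {
    "Program Purpose & Design": 100,
    "Strategic Planning": 80,
    "Program Management": 90,
    "Program Results": 60,
}
print(total_part_score(example))  # 0.2*100 + 0.1*80 + 0.2*90 + 0.5*60 = 76.0
```

Because the results section carries half the weight, a program with a flawless design but weak results still lands well below the top of the scale.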

13 Performance Measures, Data, and Evaluations
Strong focus on performance measures. Performance measures should capture the most important aspects of a program’s mission and priorities.
Key issues to consider: 1) performance measures and targets; 2) a focus on outcomes whenever possible; 3) annual and long-term timeframes.
Efficiency measures required.
Rigorous evaluations are strongly encouraged.

14 Quality Controls
The PART was a tool used to guide a collective analysis, not a valid and reliable evaluation instrument. It therefore required other mechanisms to promote consistent application:
Guidance and standards of evidence
Training
Ongoing technical assistance
Consistency check
Appeals process
Public transparency

15 How was the PART used? A Focus on Improvement
Every program developed an improvement plan.
Plans focused on findings in the PART assessments.
Agencies implemented the plans and reported on progress.
Reassessments occurred once a program had made substantive changes.

16 The Use of the PART in the Budget Process
Informed budget decisions (funding, legislative, and management)
Increased prominence of performance in the Budget
Increased accountability and focus on data and results

17 Example: Migrant Education and the PART
Collaborative process between OMB and the program office.
The program office provided evidence to back up PART answers (such as monitoring instruments, State data, action plans, etc.).
OMB and ED met to discuss the evidence.
OMB and ED shared PART drafts.
ED developed follow-up actions.

18 Migrant Education PART
PART Findings:
Program is well-designed and has a good strategic planning structure
Program is well-managed
Issues relating to possible inaccuracies in the eligible student count are being addressed
States are showing progress in providing data and in improving student achievement
Results section:
Ensure all States report complete and accurate data
Continue to improve student achievement outcomes
Improve efficiencies, in particular in the migrant student records transfer system
Complete a program evaluation
Areas for Improvement and Action Steps for Migrant Education:
Complete national audit of child eligibility determinations
Implement and collect data on the Migrant Student Information Exchange (MSIX)
Use data, in particular on student achievement, to improve performance

19 Distribution of Ratings Government-wide
Ratings / Year: 2002 / 2003 / 2004 / 2005
Effective: 6% / 9% / 13% / 15%
Moderately Effective: 24% / 19% / 22% / 23%
Adequate: 14% / 25%
Ineffective: 5% / 3% / 1%
Results Not Demonstrated: 51% / 31% / 28%
The following are examples of commonly known and/or large programs by rating (based on FY 2006 estimates):
Effective / Ineffective: Community Health Centers (2002); Health Professions (2002); Homeless Assistance Grants, Competitive (2005); CDBG (2003); Customs and Border Protection, Border Security Inspections and Trade Facilitation at Ports of Entry program (2005)
Moderately Effective / Results Not Demonstrated: Corps Emergency Management (reassessment, 2004); The Emergency Food Assistance Program (2005); Medicare (2003); Flood Damage Reduction (2002)
Adequate: Disability Compensation (2002); IRS Tax Collection (2002); VA Medical Care (reassessment, 2003); Head Start (2002); AmeriCorps (reassessment, 2005); Space Shuttle (reassessment, 2005)
NOTE: There may be a spike in Ineffective and/or RND program ratings next year as some of the more difficult/complex programs are finally assessed.

20 Department of Education Cumulative Ratings

21

22 Lessons Learned: Pros
Focus on results, data, performance measurement, and evaluation
Program improvements
Common analysis
Transparency
Cross-program and cross-agency comparisons between similar programs
Identification of best practices
Informed budget decisions

23 Lessons Learned: Cons
Not consistent enough to allow trade-offs between unlike programs
Better for program improvement than accountability, unless coupled with strong evaluation
Became too burdensome
Not fully embraced by agencies or Congress

