Developmental Evaluation: A New Way to Capture Feedback on Your Evolving EE Programs
Sue Staniforth, BSc., MSc.

Webinar Objectives
- Briefly review the main types of evaluation
- Explore developmental evaluation: What is it? When is it useful? How is it practiced?
- Clarify the developmental evaluator's role as a long-term partner with program stakeholders
- Explore ways to incorporate developmental evaluation to assess and improve your programs

Why Do We Evaluate? (Type in your responses.)

Why Do We Evaluate?
- Program/project improvement
- Maximize the impact of limited resources
- Project accountability
- Understand and work effectively within context
- Improve group dynamics and processes
- Build support for programs/projects
- Deal with uncertainty and change

Evaluation is not only about critical analysis; it should make a positive contribution that helps programs work better and directs resources to better programs.

Philip Cox's wonderful webinar on October 18 unpacked some of the evaluation terminology around outcome measurement and presented the main methodologies and tool sets of evaluation:
- The logic model
- Risk analysis
- Monitoring and evaluation planning
He also touched on some of the less conventional evaluation models: participatory, developmental, user-focused, etc.

Think back to experiences you have had being evaluated.
Poll: Have you ever had a negative evaluation experience? Use your Yes or No buttons. If yes, what made it so?

Negatives:
- "Objectivity": not knowing or understanding the context of a program
- Poor evaluation tools: inaccurate questions and emphasis
- Powerlessness: top-down, having an evaluation done to you, not with you

Question 2: Have you had a good evaluation experience? Use your Yes or No buttons. If yes, what made it so?

Positive Evaluation Experiences:
- Knew the context in which the evaluation was taking place (culturally, regionally)
- Familiarity with the discipline
- Asked for input from all stakeholders: inclusive versus exclusive
- Asked good questions that get to the heart of the program or project
- Supportive environment, with success as the goal rather than punishment

Historically, evaluations looked at the goals and objectives of a program, developed a set of questions and indicators used as the sole measurement of success or failure, and delivered findings to an administrator, even though the direction and focus of the program may have changed along the way due to changes in the system, the stakeholders and/or the environment. For example:
- a teachers' strike
- a program worked better at the middle school level than at its original target of Grade 4, so it changed audiences

In evaluation, we are moving from one-off studies to streams:
- Monitoring and evaluation are starting to merge, and analysis and databases are continuous.
- Most situations have multiple players and multiple levels of impacts, actors, systems and actions.
- There is increased transparency: evaluations can no longer be bureaucratically contained.
- New methods are evolving to capture and assess innovations.

Main Types of Evaluation
For many years, evaluators and evaluation methodologies have tended to focus on three broad purposes:
1. Formative Evaluation
2. Summative Evaluation
3. Accountability Evaluation

1. Formative Evaluation is used to help improve a program or policy. It produces information that is fed back during the course of a program; its main purpose is to provide information to improve the program under study. E.g. a pilot program is developed at the Calgary Zoo and implemented with school groups or the public, and staff collect feedback from participants and observers on how it is working.

2. Summative Evaluation is used to judge the merit of a program or policy to determine whether it should be sustained, discontinued or scaled up. It is done after the program (or a phase of it) is finished, to determine the extent to which anticipated outcomes were produced, and is intended to provide information about the worth of the program: its effectiveness. So, should that Zoo program be continued next year? Why or why not?

Scriven simplified this distinction, as follows: “When the cook tastes the soup, that’s formative evaluation; when the guest tastes it, that’s summative evaluation.”

3. Accountability Evaluation is used to assess the extent to which an organization or group is 'implementing a detailed model with fidelity' to an already approved, often rigid, blueprint. This is often what we have to provide to our funders: following our proposal methodologies to meet their goals (e.g. "the program will reduce carbon emissions of Grade 12 high school students by X%...").

Now, just to get a sense of the experiences we have in the room:
Poll: Have you done a program evaluation? Use your Yes or No buttons.
Was it any one of, or all of, the following? Write your answers in the chat box.
- Formative
- Summative
- Accountability

These are outcome measurement evaluations: based on what the program's goals are, what is happening? There are plenty of situations where these types of evaluations are not appropriate and may even be counterproductive. For example:

When you are:
- creating an entirely new program or policy;
- adapting a proven program in a fast-moving environment, e.g. a funding model for government contracts to provide funds to many new, evolving climate change NGOs;
- importing a program or policy that proved effective in one context into a new one, from one province or region to another (British textbooks to the colonies!);

- dealing with complex issues where solutions are uncertain and/or stakeholders are not on the same page, e.g. the Fraser Salmon and Watersheds program to conserve a river ecosystem: fishers, conservation groups, aboriginal groups, communities, businesses and many levels of government, with very complex and inter-related systems and agendas.

Developmental Evaluation: A New Player on the Block
An emerging evaluative approach designed to help decision-makers check in on how the program is doing "in flight" and make corrections. It sets fewer expectations up front and is more about what is happening as the program rolls out.
Systems thinking and complexity theory: moving beyond linearity or direct cause and effect to try to capture elements of the many systems we humans operate in.

Using Different System Lenses to Understand a Particular System (Michael Q. Patton)
- Biologic system: emergence; coordination/synergy; structure, process, pattern; vitality
- Sociologic system: relationships; conversations; interdependence; loose-tight coupling; meaning/sense
- Mechanical/physical system: flow; temporal sequencing; spatial proximities; logistics; information
- Economic system: inputs/outputs; cost/waste/value/benefits; customers/suppliers
- Political system: power; governance; citizenship; equity
- Anthropologic system: values; culture/milieu
- Information system: access; speed; fidelity/utility; privacy/security; storage
- Psychological system: organizing force fields; ecological/behaviour settings

“Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development. Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and support developmental decision-making and course corrections along the emergent path.” (MQP, 2008)

DE differs from traditional evaluation in several key ways…
Traditional evaluation: renders definitive judgments of success or failure.
Developmental evaluation: provides feedback, generates learnings, supports direction or affirms changes in direction.

Traditional evaluation: measures success against predetermined goals.
Developmental evaluation: develops new measures and monitoring mechanisms as goals emerge and evolve.

Traditional evaluation: the evaluator is external, independent, 'objective'.
Developmental evaluation: the evaluator is part of a team, a facilitator and learning coach, bringing evaluative thinking to the table and supportive of the organization's goals: a "critical friend".

Large, complex, challenging innovations do not lend themselves to linear or easy prediction, so it is important to be able to track changes as they happen, feed the information back to the people doing the work, and adjust the program accordingly: in-flight adjustments.

Michael Q. Patton: Simple, Complicated and Complex
Simple (Following a Recipe):
- The recipe is essential
- Recipes are tested to assure replicability of later efforts
- No particular expertise; knowing how to cook increases success
- The recipe notes the quantity and nature of "parts" needed
- Recipes produce standard products
- Certainty of the same results every time
Complicated (A Rocket to the Moon):
- Formulae are critical and necessary
- Sending one rocket increases assurance that the next will be OK
- High level of expertise in many specialized fields, plus coordination
- Separate into parts and then coordinate
- Rockets are similar in critical ways
- High degree of certainty of outcome
Complex (Raising a Child):
- Formulae have only a limited application
- Raising one child gives no assurance of success with the next
- Expertise can help but is not sufficient; relationships are key
- Can't separate the parts from the whole
- Every child is unique
- Uncertainty of outcome remains

Complex developments need flexible and adaptable approaches. DE can be a very useful approach when you are working in Environmental Education.

DE helps to unearth the complexities of the many systems we work in, monitor changes, and provide a more continuous picture of what is happening to your program when it is out in the real world! Are you scoring some goals, or….. did a tidal wave hit?!!

When Do You Use Developmental Evaluation?
DE is not appropriate for all situations. Some of the things to ask include:
1. The evaluation should be part of the initial program design: "Evaluation isn't something to incorporate only after an innovation is underway. The very possibility articulated in the idea of making a major difference in the world ought to incorporate a commitment to not only bringing about significant social change, but also thinking deeply about, evaluating, and learning from social innovation as the idea and process develops." (2006: from "Getting to Maybe" by Frances Westley, Brenda Zimmerman and Michael Patton)

When the cook is in the market shopping for the best ingredients and developing the recipe, that’s part of the developmental evaluation! When the cook tastes the soup, that’s formative evaluation; when the guest tastes it, that’s summative evaluation.

2. Fit and Readiness
- Does the group want to test new approaches? Are they (you) a learning organization?
- Is the program flexible enough to be adapted as you go? There are financial and logistical questions to be answered here.
- What about accountability, i.e. are the funders open to changes?

3. Environment
What are the "ripples" that Philip Cox talks about in his splash and ripple analogy: the disturbances that get in the way of activities and outcomes? With EE being a non-prescribed subject in schools, its development, the varied ways it is implemented and by whom, the range of impacts and the many stakeholders (administrators, teachers, parents, students, NGOs, custodians, etc.) leave lots of room for variation.

4. Is the program socially complex, requiring collaboration among stakeholders from different organizations, systems, and/or sectors? E.g. the formal school system, different segments of the public, several levels of government, different cultures, NGOs, community groups?

5. Is the program new or evolving, requiring real-time learning and development?
- Do you need to adapt, change course, incorporate new learning from another program, or add new components such as teacher training or community involvement?
- Is it feasible to have an "embedded" evaluator?

HOW IS DEVELOPMENTAL EVALUATION PRACTICED? The short answer is: any way that works. DE is an adaptive, context-specific approach; as such, there is no prescribed methodology.

A few key entry points and practices that can be applied to your program:
1. Get the Background Story: ORIENTING YOURSELF
- What is the theory of change that is implicit in the program? This needs to be clarified.
- Look at the whys and hows of decisions and systems that are in place: how did you get to this place, and why?
- Review existing documentation, meet with stakeholders, ask questions, conduct mini-interviews, explore related research, take people out for coffee.

Evaluators become part of the institutional memory of an organization. A common myth is 'we knew where we were going'; documenting decisions, course changes, and why they happened is critical to understanding results.

2. BUILDING RELATIONSHIPS
Relationship building is critical to developmental evaluation because of the importance of access to information. Back to the question of how the group makes decisions: what is the problem, and then, in choosing a solution, what actions are considered and what direction is chosen? Note and document the forks in the road.

3. Collective Analysis
The core of evaluation is getting people to engage with the data. In developmental evaluation, meaning-making is a collective process. Shifting responsibility for the meaning-making process from the evaluator to the entire team can help to:
- Build capacity for evaluative thinking among other team members
- Create a sense of ownership
- Increase understanding of the findings
- Increase the likelihood that the findings will actually be used (Patton, 2008)

4. INTERVENING in productive ways:
- Asking "wicked" or good questions: questions that create openings, expose assumptions, push thinking and surface values
- Facilitating: active listening, surfacing assumptions, clarifying, synthesizing, ensuring all voices are heard
- Sourcing and providing information: bringing information and resources into the system
- Reminding groups of their higher-level purpose: refocusing on priorities and goals
- History keeping: keeping track of past failures and successes to build on what has gone before
- Matchmaking: connecting the group with people, resources, organizations and ideas

In Summary
Even if you don't do developmental evaluation in its formal sense, there are still many facets of its practice that can contribute to your evolving programs. Consider implementing some of the practices discussed above:
1. Get the Background Story
- Clarify your theory of change
- Look at the whys and hows of decision-making and systems that are in place
2. Relationships
- How do you make decisions? Build relationships; note and document the forks in the road and the processes, strengths, and weaknesses

3. Collective Analysis
- Ensure all program team members are part of any evaluative process and help them engage with the data
- Help build capacity for evaluative thinking
- Create a sense of ownership
- Help ensure the data collected is useful and used
4. Intervening in productive ways
- Ask "wicked" questions about the program
- Facilitate evaluative discussions amongst team members
- Source and provide information, ideas, people and resources
- Document what you do and why!