1
Dare to Evaluate
Roger A. Rennekamp, Ph.D.
Department Head and State 4-H Program Leader
Youth Development Education, Oregon State University
roger.rennekamp@oregonstate.edu
2
Reflections on a Quarter Century of Extension Evaluation Practice
Twenty-five years have passed since the Journal of Extension published its landmark issue dedicated to program evaluation. The issue served as a “call to action” following several critical investigations of Cooperative Extension, which concluded that Extension…
is “no longer relevant”
has “no clearly defined focus”
is “short on impacts” and “long on documenting participation and activity”
3
The Challenge in 1983
It can no longer be taken for granted that programs are good and appropriate. Extension is operating in a new environment – an environment more open to criticism and demands for justification of actions. All publicly funded agencies, not just Extension, are vulnerable in these times. In an era of accountability, Extension must be able to document who is being served and how. It also needs to document that programs are achieving positive results. (Andrews, 1983)
4
Three Areas of Progress
The use of logic modeling has become widespread.
Capacity to conduct evaluation has increased markedly.
Data for decision making are readily available.
5
The New “Call to Action”
Better understand logic modeling.
Build capacity for increased rigor in evaluation.
Rethink the purpose of evaluation in Extension.
6
A logic model is…
A. a framework for program planning that links inputs and activities to program outcomes
B. useful in formulating evaluation questions
C. a graphic representation of the theory that underlies a program
D. all of the above
7
Put the Logic into Logic Models
Thinking about program planning has evolved significantly: from Bennett (1975) to Boyle (1981) to Boone (1985) to Bennett and Rockwell (1995), and on to logic modeling (Taylor-Powell).
Logic modeling has been widely adopted as a model for program planning and a framework for evaluation.
Logic models are more than “templates for preparing plans of work” or “forms to be filled out.”
8
Inputs → Outputs (Activities, Participation) → Outcomes (Initial, Intermediate, Long-Term)
Inputs – Resources deployed to address the situation: staff, volunteers, time, money, materials, equipment, technology, partners.
Activities – Activities supported by the resources invested: workshops, meetings, field days, demonstrations, camps, trainings, web sites, home visits.
Participation – Individuals or groups who participate in the activities: number, characteristics, reactions.
Initial outcomes – Learning that results from participation: awareness, knowledge, opinions, skills, aspirations.
Intermediate outcomes – Actions that result from learning: practices, behaviors, policies, social action, choices.
Long-term outcomes – Conditions that change as a result of action: social, economic, environmental.
Contextual factors influence the entire chain.
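The chain above can also be read as a simple data structure. The following is a minimal, illustrative Python sketch; the LogicModel class, its field names, and the example program are hypothetical and not part of the original deck:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Illustrative container for the Inputs -> Outputs -> Outcomes chain."""
    inputs: list[str] = field(default_factory=list)         # resources deployed
    activities: list[str] = field(default_factory=list)     # outputs: what is done
    participation: list[str] = field(default_factory=list)  # outputs: who is reached
    initial: list[str] = field(default_factory=list)        # outcomes: learning
    intermediate: list[str] = field(default_factory=list)   # outcomes: action
    long_term: list[str] = field(default_factory=list)      # outcomes: conditions

# Hypothetical example: a food-safety education program
model = LogicModel(
    inputs=["staff", "volunteers", "money"],
    activities=["workshops", "home visits"],
    participation=["120 families"],
    initial=["knowledge of safe food handling"],
    intermediate=["safer food-handling practices"],
    long_term=["reduced foodborne illness"],
)
```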
9
Put the Logic into Logic Models
The logic should represent an underlying theory of how a program operates. Logic models are “pictures” of programs: implicit program theory becomes explicit. Linkages between inputs, outputs, and outcomes can be based on research, intuition, experience, and, at times, untested assumptions. As these linkages are confirmed, the theory becomes increasingly sound and mature.
10
The degree of rigor built into my evaluations is most frequently influenced by…
A. my level of knowledge and skill in program evaluation
B. the relative need for accuracy and confidence in the evaluation results
C. resource limitations
D. lack of technical assistance with evaluation
11
Build Capacity for Rigor
Rigor is about the technical qualities of an evaluation that make it convincing. How much rigor is necessary? A bad evaluation might be worse than no evaluation at all, but an overly sophisticated evaluation may waste precious resources. Decisions about rigor depend on the need for precision, the need for acceptance of results, and the need to generalize findings.
12
Build Capacity for Rigor
Is training the answer? It puts the burden on field staff. Is hiring program evaluators the answer? It puts the burden on evaluators. Newer approaches suggest that individual development and organizational development go hand in hand, using experiential methods in which the evaluator serves as an evaluation coach working alongside program staff.
13
The purpose for which I most frequently conduct evaluations is…
A. to generate impact data for stakeholders
B. to improve the program
C. to better understand how the program works and advance the field
D. to assess the need for the program
14
Rethink Evaluation Purpose
Is the goal of evaluation to prove or to improve? Sometimes we approach evaluation as having something to prove. Other times we approach it with the aim of discovering new information that will help improve the program. Perhaps we are a bit out of balance.
15
Rethink Evaluation Purpose
Joan Thomson (1983, p. 3), then editor of the Journal of Extension, wrote in her introductory notes to the evaluation issue that the “rationale for conducting Extension program evaluation in today’s complex environment…is often overshadowed by a suspicion of who, why, and for what is Extension being questioned.” Consequently, individual and organizational learning took a back seat to countering the criticism that had been leveled against Extension.
16
Rethink Evaluation Purpose
Evaluation questions can come from any place on the logic model. If we know that A → B → C, why keep measuring C? This has important implications for program quality standards and measures. Instead, ask: “What single piece of information, if known, would strengthen confidence in your program?” Strengthen the program, strengthen the theory, strengthen the field.
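One way to make that question concrete is to rank the model’s linkages by how well each is supported and evaluate the least-confirmed one first. A minimal sketch under that assumption; the linkage names and confidence scores below are hypothetical, not data from the talk:

```python
# Each linkage in the program theory (A -> B -> C) carries a rough
# confidence score in [0, 1] reflecting how well research, experience,
# or prior evaluation supports it.
linkages = {
    ("workshops", "knowledge gain"): 0.9,          # well documented
    ("knowledge gain", "practice change"): 0.5,    # untested assumption
    ("practice change", "improved conditions"): 0.8,
}

# The next evaluation question targets the weakest link: the single
# piece of information that would most strengthen confidence in the theory.
weakest = min(linkages, key=linkages.get)
print(f"Evaluate the linkage: {weakest[0]} -> {weakest[1]}")
# Output: Evaluate the linkage: knowledge gain -> practice change
```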
17
Conclusion
Deep understanding of a program’s theory of change is essential to sound programming. Increased understanding of theory results in more relevant evaluation questions. Consequently, Extension becomes increasingly able to provide valid and reliable data for decision making.
18
Conclusion
Learning organizations have a hunger for new information that makes them more efficient and effective. Through evaluation, members of the organization gain new information, insights, and perspectives on their programs that enable them to work in new ways. As they do, they rise to new levels of personal effectiveness and facilitate peak organizational performance.
19
References
Braverman, M.T., Engle, M., Arnold, M.E., & Rennekamp, R.A. (Eds.). (2008). Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120. Jossey-Bass.
Rennekamp, R.A., & Arnold, M.E. (2009). What progress, program evaluation? Reflections on a quarter-century of evaluation practice in Extension [Commentary]. Journal of Extension. In press.