Teaching and Learning to Enhance Evaluation Practices: Striving to Optimize Student Learning Outcomes by Blending Evaluation Theory and Application
Stephanie Sowl, Clarin Collins, Audrey Amrein-Beardsley
Purpose
- Importance of balancing theory and practice in teaching evaluation
- Necessity of doing a better job of ensuring graduates are prepared for evaluation work
Background
- PhD-level course: more theoretical; students independently researched evaluation design and methodological approaches
- Master's-level course: more practical; students collaboratively designed and executed an actual program evaluation, directly applying the curriculum
Methodology
- Survey aligned to the Essential Competencies for Program Evaluators Self-Assessment (Ghere, King, Stevahn, & Minnema, 2006) to investigate student perspectives
- Internal consistency alphas for all constructs ranged from 0.76 to 0.97
- t-tests and Cohen's d were used for analysis (see the analysis sketch below)
- Semi-structured interviews conducted after the evaluation was completed with the leaders of the program the students evaluated
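To illustrate the kind of analysis named above, the sketch below runs a t-test and computes Cohen's d (Cohen, 1988) on hypothetical pre/post competency self-assessment scores. The data values, variable names, and the choice of a paired-samples design are assumptions for illustration only, not the study's actual data or analysis code.

```python
# Illustrative only: hypothetical pre/post competency self-assessment ratings
# on a 1-5 scale. The study's real data and exact t-test variant are not shown here.
import numpy as np
from scipy import stats

pre  = np.array([2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.7, 3.2])   # hypothetical pre-course ratings
post = np.array([3.6, 3.9, 3.2, 4.1, 3.8, 3.7, 3.5, 4.0])   # hypothetical post-course ratings

# Paired-samples t-test (one plausible choice for pre/post self-assessments)
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d for paired data: mean difference divided by the SD of the differences
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```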
Findings
- Program leaders were satisfied with the outcomes of the evaluation, as the data collected showed that some of their suspicions about issues within the program had merit
- The evaluation highlighted areas of weakness within the program that leaders wish to investigate further with a second program evaluation
Significance
- Can more appropriately help students learn about program evaluation in theory and in practice
- Arms students not only with necessary evaluation tools but also with the ability to clearly communicate data and findings to multiple audiences
- Working with program leaders was time-consuming, and the timeline for students to complete the evaluation within a semester was aggressive, but it gave students an opportunity they would not otherwise have had
Any questions?
References
Chelimsky, E. (2013). Balancing evaluation theory and practice in the real world. American Journal of Evaluation, 34(1), 91-98. doi:10.1177/1098214012461559
Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. New Directions for Evaluation, 97, 7-35.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). A professional development unit for reflecting on program evaluator competencies. American Journal of Evaluation, 27(1), 108-123. doi:10.1177/1098214005284974