Looking for Results: Principles for Evaluating Student Success Initiatives
Presenter: Rick Voorhees
Goals for Today's Work Together
You'll be able to:
– Identify key questions for evaluating interventions
– Distinguish between different types of evaluation
– Make the link between evaluation and Continuous Quality Improvement
– Identify four components of evaluation
– Visualize how a logic model can be a powerful tool for understanding interventions
The Big Question
What types of learners change in what evident ways with which influences and resources?
Chances Are...
Everything else equal, what are the chances (probability) that:
– Males and females progress and graduate at the same rate?
– Racial and ethnic groups progress and graduate at the same rate?
– Financial aid recipients and non-recipients progress and graduate at the same rate?
– Students referred to developmental education and those who aren't progress and graduate at the same rate?
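These questions are empirical: disaggregate the cohort and ask how likely the observed gap would be if the true rates were equal. A minimal sketch in Python, assuming hypothetical graduation counts for two groups (the numbers and the two-group framing are illustrative, not from the presentation):

```python
# Compare graduation rates for two groups with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts, e.g., financial aid recipients vs. non-recipients.
graduates = [412, 380]   # number who graduated in each group
cohort    = [900, 700]   # entering cohort size of each group

rates = [g / n for g, n in zip(graduates, cohort)]
z_stat, p_value = proportions_ztest(graduates, cohort)

print(f"Group rates: {rates[0]:.1%} vs {rates[1]:.1%}")
print(f"p-value under the 'same rate' hypothesis: {p_value:.4f}")
```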
Formative Program Evaluation
Formative evaluation (sometimes referred to as internal) is a method for judging the worth of a program while the program activities are forming (in progress). This part of the evaluation focuses on the process. It permits faculty, staff, and students to monitor how well goals are being met. The main purpose is to catch deficiencies so that the program can readjust while it is in progress.
Source: Performance, Learning, Leadership, & Knowledge (n.d.). Retrieved March 25, 2013, from http://www.nwlink.com/~donclark/hrd/isd/types_of_evaluations.html
Summative Program Evaluation
Summative evaluation (sometimes referred to as external) is a method of judging the worth of a program at the end of the program activities (summation). The focus is on the outcome.
Note: All evaluations can be summative (i.e., have the potential to serve a summative function), but only some have the additional capability of serving formative functions (Scriven, 1967).
Source: Performance, Learning, Leadership, & Knowledge (n.d.). Retrieved March 25, 2013, from http://www.nwlink.com/~donclark/hrd/isd/types_of_evaluations.html
The Basics of Continuous Quality Improvement
The PDCA cycle: Plan → Do → Check → Act.
Graphic Source: www.anzca.edu.au/fpm/resources/educational-documents/guidelines-on-continuous-quality-improvement.html
Four Components of a Culture of Inquiry
Component One: "What's Wrong." Use disaggregated longitudinal cohort data to determine: 1) which student groups are less successful than others (i.e., identify gaps in student success), and 2) which high-enrollment courses have the lowest success rates.
Component Two: "Why." Collect, analyze, and use data from other sources (focus groups, surveys, literature reviews) to identify the underlying factors (barriers or challenges) impeding student success.
Component Three: "Intervention." Use data from Component Two to design new interventions, or revise current ones, to effectively address the underlying factors impeding student success. Review and consider changes to existing practices and policies that impact those factors.
Component Four: "Evaluation and Modification." Collect, analyze, and use evaluation data to answer: 1) To what extent did the intervention (including policy changes) effectively address the underlying factors? 2) To what extent did the interventions increase student success?
Source: K.P. Gonzalez. Using Data to Increase Student Success: A Focus on Diagnosis. Retrieved March 12, 2013, from http://www.achievingthedream.org/sites/default/files/resources/ATD_Focus_Diagnosis.pdf
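Component One turns on disaggregation. A minimal sketch, assuming a hypothetical pandas DataFrame of one entering cohort with `race_ethnicity`, `pell_recipient`, and `completed` columns (all names and values here are invented for illustration):

```python
import pandas as pd

# Hypothetical longitudinal cohort records; in practice these would be
# pulled from the institution's administrative data system.
cohort = pd.DataFrame({
    "race_ethnicity": ["White", "Black", "Hispanic", "White", "Black", "Hispanic"],
    "pell_recipient": [True, False, True, False, True, False],
    "completed":      [1, 0, 1, 1, 0, 0],   # credential completed within 3 years
})

# Disaggregate the completion rate by each student characteristic
# to surface gaps between groups (Component One).
for col in ["race_ethnicity", "pell_recipient"]:
    rates = cohort.groupby(col)["completed"].mean().sort_values()
    print(f"\nCompletion rate by {col}:")
    print(rates.map("{:.0%}".format))
```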
True Experimental Design: Pretest-Posttest Control Group Design
Random assignment places students into Group 1 (receives the intervention) and Group 2 (control). Post-test measurement of both groups reveals true differences.
Source: Campbell, D.T., & Stanley, J. (1963). Experimental and Quasi-Experimental Designs for Research. Wadsworth Publishing.
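As an illustration of the design (not part of the original slides), a sketch that randomly assigns a hypothetical roster to the two groups and compares simulated post-test scores; the scores here are stand-ins for real measurements:

```python
import random
from scipy import stats

random.seed(42)

# Hypothetical roster; random assignment is what makes the design truly experimental.
students = [f"student_{i}" for i in range(200)]
random.shuffle(students)
group_1, group_2 = students[:100], students[100:]  # intervention vs. control

# Simulated post-test scores standing in for the actual measurement.
post_test_1 = [random.gauss(75, 10) for _ in group_1]  # intervention group
post_test_2 = [random.gauss(70, 10) for _ in group_2]  # control group

# Because assignment was random, a significant difference in post-test
# means can be attributed to the intervention itself.
t_stat, p_value = stats.ttest_ind(post_test_1, post_test_2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```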
Quasi-Experimental Design: Posttest-Only Control Group Design
Without random assignment, Group 1 receives the intervention and Group 2 serves as the comparison; post-test measurements of the two groups are compared. Group 2 could be a "historical cohort."
Source: Campbell, D.T., & Stanley, J. (1963). Experimental and Quasi-Experimental Designs for Research. Wadsworth Publishing.
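A sketch of the historical-cohort variant, assuming hypothetical persistence counts for a current cohort (which received the intervention) and last year's cohort (which did not); all counts are invented:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: [persisted, did not persist] for each cohort.
current    = [310, 90]   # this year's cohort, received the intervention
historical = [255, 125]  # last year's cohort, no intervention

chi2, p_value, dof, _ = chi2_contingency([current, historical])
print(f"Current cohort:    {current[0] / sum(current):.1%} persisted")
print(f"Historical cohort: {historical[0] / sum(historical):.1%} persisted")
# Without random assignment, even a small p-value cannot rule out
# pre-existing differences between the two cohorts.
print(f"chi-square p = {p_value:.4f}")
```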
[Chart: Percentage of Students Persisting by Enrollment in a Student Success Class]
Source: Voorhees & Lee. (n.d.). Basics of Longitudinal Cohort Analysis. Retrieved April 15, 2012, from http://achievingthedream.org/sites/default/files/resources/ATD_Longitudinal-Cohort-Analysis.pdf
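A chart like this can be produced directly from cohort records. A minimal sketch, assuming a hypothetical DataFrame with `took_success_class` and `persisted` flags (both column names and the data are illustrative):

```python
import pandas as pd

# Hypothetical cohort records flagging success-class enrollment and persistence.
records = pd.DataFrame({
    "took_success_class": [True, True, False, False, True, False, True, False],
    "persisted":          [1,    1,    0,     1,     1,    0,     1,    0],
})

# Persistence rate by enrollment in the student success class.
rates = records.groupby("took_success_class")["persisted"].mean()
ax = (rates * 100).plot.bar(
    ylabel="Percent persisting", rot=0,
    title="Persistence by Enrollment in a Student Success Class",
)
ax.figure.savefig("persistence_by_success_class.png")
```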
SSBTN Template: Levels of Analysis
– Macro-level cohort
– Micro-level cohort
– Intervention level
Kirkpatrick's Four-Level Evaluation Model
Step 1: Reaction – How well did the learners like the learning process?
Step 2: Learning – What did they learn? (the extent to which the learners gain knowledge and skills)
Step 3: Behavior – What changes in performance resulted from the learning process? (capability to perform the newly learned skills while on the job)
Step 4: Results – What are the tangible results of the learning process in reduced cost, improved quality, increased production, efficiency, etc.?
Kirkpatrick, D. (1998). Evaluating Training Programs: The Four Levels (2nd ed.). San Francisco: Berrett-Koehler Publishers, Inc.
Cause or Correlation?
Attribution of cause and effect is difficult when working with a highly complex institution with multiple programs and initiatives, as well as with students from a wide range of backgrounds and current environmental influences.
Sources of Evaluative Data
– Administrative data systems
– Focus groups
– Faculty journaling
– Student journaling
– External surveys
– Institutionally developed surveys
– Interactions with college services
– Matching external databases
Academic Terms, Administrative Data Systems, and What We Know About Students
[Timeline of the academic term, from census date to end of term: most of what we do know about students happens at the census date; most of what we don't know about them happens in the stretch that follows; that middle stretch is where we could do so very much better.]
Logic Modeling: Informing Planning and Evaluation
Resources and Inputs → Activities → Outputs → Outcomes → Impact
Planned work (resources/inputs and activities) leads to intended results (outputs, outcomes, and impact), all resting on a set of assumptions.
Logic Model Elements
Assumptions – The underlying assumptions that influence the program's design, implementation, or goals.
Resources – Human, financial, and organizational resources needed to achieve the program's objectives.
Activities – Things the program does with the resources to meet its objectives.
Outputs – Direct products of the program's activities: evidence that the program was actually implemented.
Outcomes – Changes in participants' knowledge, behavior, skills, status, and level of functioning as a result of the program.
Impact – Systemic, long-term change as a result of the program (over as long as 7–10 years).
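To make the elements concrete, a sketch of a logic model captured as a small data structure; the program, entries, and field names are hypothetical, invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One program's logic model, following the elements above."""
    assumptions: list[str] = field(default_factory=list)
    resources: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)   # evidence of implementation
    outcomes: list[str] = field(default_factory=list)  # changes in participants
    impact: list[str] = field(default_factory=list)    # systemic, long-term change

# Hypothetical model for a first-year student success course.
success_course = LogicModel(
    assumptions=["Early connection to campus improves persistence"],
    resources=["2 FTE advisors", "course development funds"],
    activities=["Teach a 1-credit success course to all first-year students"],
    outputs=["24 sections delivered", "85% of the cohort enrolled"],
    outcomes=["Higher fall-to-spring persistence among completers"],
    impact=["Sustained gains in graduation rates across cohorts"],
)
print(success_course.outcomes)
```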
Why Use a Logic Model?
1. Program Design and Planning: serves as a planning tool to develop program strategy and enhances the ability to clearly explain and illustrate program concepts and approach to all college stakeholders.
2. Program Implementation: forms the core of a focused management plan that helps identify and collect the data needed to monitor and improve programming.
3. Program Evaluation and Strategic Reporting: presents program information and progress toward goals in ways that inform, advocate for a particular program approach, and teach program stakeholders.
Logic Models Aren't Automatically Linear
INPUTS: program investments – what we invest
OUTPUTS: activities (what we do) and participation (who we reach)
OUTCOMES: short-, medium-, and long-term – what results
Logic Model Worksheet
[Worksheet template: assumptions and inputs are your beginnings; activities are your work; outputs, outcomes, and impact are your results.]