Evaluating Life Skills Development Through 4-H Challenge Course Outcomes
VanderWey, S., & Cooper, R. North American Association for Environmental Education National Conference, Spokane, WA, October 12, 2018
Introductions
Scott VanderWey, Associate Professor and Director of 4-H Adventure Education, Washington State University
Robby Cooper, Clinical Assistant Professor, Washington State University
A Land Grant Research University
Learning Objectives
- Gain awareness of challenge course evaluation findings.
- Gain understanding of effective program evaluation implementation and research methods.
- Develop ideas for your own program evaluation with the assistance of workshop facilitators.
Reaching Urban Youth
Camp Long is a 67-acre park in West Seattle. It was already the site of the oldest artificial rock climbing area in North America, and the 4-H Challenge Course added to the rich history of adventure education within Seattle Parks.
Camp Long Challenge Course
Think-Pair-Share
Consider: What are you doing, or what would you like to be doing, to evaluate your programs?
Directions: Think to yourself. Discuss with a partner. Share with your group.
The beginning… “We know it works.” How can we show it?
Actually, no we don’t, and no matter how good a program is, it can be better. What we can do is empirically demonstrate that it works, and specifically what it does, to other people: funders, supervisors, peers, potential clients. We design the programs, so we decide whether we have time to evaluate. Is it important to know if our programs are doing what we intend them to do? Is it important to demonstrate that to stakeholders, bosses, and potential clients? Then it is worth the time.
What impacts would you expect?
Do you think “it works”? What impacts would you expect?
The Development Process
Not just creating a survey… the process runs from Program Mission through Goals, Outcomes, Measures, Resources, Procedures, Training, Implementation, Analysis, and Revision.
The work drew on an article that sought to create measures of desired outcomes of Positive Youth Development programs. Researchers found both face and concurrent validity through interviews with a pilot sample and by measuring the correlation of survey scales with associated variables.
Program Mission
What is your Program Mission? What do you and your people value?
Goals
Process: Am I running my program well?
Outcome: Is my program making an impact?
Adventure Education Outcomes: Targeting Life Skills Model
Program Outcomes (* = self-efficacy)
Measures
- Communication (Klein et al., 2006; Adolescent Health)
- Decision-making (Klein et al., 2006; Adolescent Health)
- Teamwork (Annett et al., 2000; Ergonomics)
- Self-efficacy (Sherer et al., 1982; Psychological Reports)
Procedures & Resources
What can we realistically and reasonably accomplish that will provide a valid and reliable measurement?
- Time
- Resources
- Qualifications (administration, analysis)
- Setting
- Ethics
Measures Concerns: Reliability and Validity
Procedures
Notice to Parents is sent out with program registration materials. The facilitator administers surveys using a packet containing:
- Facilitator Instructions
- Recruitment Script
- Processing Page
- Surveys
- Envelope
Procedures
Pretest-Posttest Design: a 12-16 item pretest and a 21-25 item posttest (adding satisfaction and demographic items). Completed surveys are sealed in an envelope and mailed to the WSU Pullman campus, where the data are entered and analyzed by faculty and graduate assistants; a sketch of how such paired data might be organized appears below.
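To make the pretest-posttest pairing concrete, here is a minimal sketch of how the entered data might be organized before analysis, assuming pandas. The column names and values are hypothetical illustrations, not the actual WSU survey codebook.

```python
import pandas as pd

# Hypothetical entered data: one row per participant per survey wave.
# Column names are illustrative, not the actual WSU survey codebook.
pretest = pd.DataFrame({
    "participant_id": [101, 102, 103],
    "communication": [3.25, 2.75, 3.50],  # scale mean at pretest
    "teamwork": [3.00, 3.25, 4.00],
})
posttest = pd.DataFrame({
    "participant_id": [101, 102, 103],
    "communication": [3.75, 3.00, 3.50],  # scale mean at posttest
    "teamwork": [3.50, 3.25, 4.25],
    "satisfaction": [5, 4, 5],            # posttest-only items
})

# Match each participant's pretest to their posttest before comparing means;
# an inner join drops anyone who completed only one of the two surveys.
paired = pretest.merge(posttest, on="participant_id",
                       suffixes=("_pre", "_post"), how="inner")
print(paired)
```

Pairing with an inner join like this is one reason the per-scale Ns in the results table later in the deck can differ: each scale's N reflects participants with usable responses on both surveys.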
Training & Implementation
- Clear Procedures for Implementation Fidelity
- Realistic Expectations
- Frequent Check-ins and Revision
Analysis & Revision
- Procedural Fidelity
- Training
- Audience
- Realistic Expectations
- Survey
Analysis: Reliability of Measures

Scale             Reliability (Cronbach's Alpha)*
                  Version 1    Version 2
Communication     .30          .69
Decision-making   .48          .79
Teamwork          .34          .68
Self-efficacy     .33          .61

*Averaged pretest/posttest alpha. Alpha > .70 is considered sufficient.
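For anyone wanting to run this kind of reliability check on their own scales, below is a minimal sketch of Cronbach's alpha computed from its standard formula, assuming numpy. The responses are made-up Likert data, not the Camp Long surveys.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an array of shape (respondents, items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up responses: 6 participants answering a 4-item Likert scale (1-5).
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # compare against the .70 benchmark
```

Running each scale's items through a check like this after a pilot is how low alphas, like the Version 1 values above, get caught early enough to revise item wording.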
Results: Camp Long, Seattle, WA
Sample: 696 participants
Satisfaction: 85% of participants said they enjoyed their experience at Camp Long.
Results

Scale             N     Mean Score Change   t      Sig. (p)   Effect Size (d)
Communication     576   .181                7.78   .000*      .26**
Decision-making   693   .089                2.92              .14
Self-efficacy     386   .194                4.52              .32**
Teamwork          696   .103                4.32              .17

*Statistical significance: p < .05   **Small effect size: d > .2

Reading the table: each row gives the scale variable; the number of participants included in the pretest-posttest comparison for that scale; the group's mean score change from pretest to posttest; the paired t-test statistic (a comparison of mean scores within a related group); and the p-value, the measure of significance, where p < .05 indicates a significant change in mean score from pretest to posttest.

Summary: Communication showed a highly significant change from pretest to posttest. Decision-making was not significant, but might be for repeat groups of participants, which we will examine as we survey more groups, and specifically more repeat groups. Teamwork showed a significant change from pretest to posttest. Self-efficacy was not significant (the sample size is too small).
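Statistics like those in the table can be reproduced with a paired t-test and a paired-design effect size. A minimal sketch assuming scipy follows; the pre/post vectors are invented stand-ins, not the Camp Long data.

```python
import numpy as np
from scipy import stats

# Invented pretest/posttest scale means for matched participants;
# each index is one participant measured twice.
pre  = np.array([3.2, 3.5, 2.8, 4.0, 3.6, 3.1, 3.9, 2.7])
post = np.array([3.6, 3.7, 3.1, 4.2, 3.5, 3.4, 4.1, 3.0])

# Paired (related-samples) t-test: compares mean scores within the same group.
t_stat, p_value = stats.ttest_rel(post, pre)

# One common paired-design Cohen's d (d_z): mean change / SD of the changes.
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"mean score change = {diff.mean():.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05 -> significant change
print(f"d = {cohens_d:.2f}")                   # d > .2 -> small effect
```

Note that d_z is only one convention for paired effect sizes; whichever is used should be reported alongside the significance test, as the table above does.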
Scott VanderWey, vanderwey@wsu.edu
Robby Cooper
Kevin Wright