
Dr. Carol Albrecht Research Team EXAMPLE of EVALUATION RESEARCH.

2 Dr. Carol Albrecht Research Team EXAMPLE of EVALUATION RESEARCH

3 Eighteen public school districts in the state of Texas were involved in Service Learning projects. The Texas Service Learning Center hired Dr. Carol Albrecht and her students to evaluate this program. These PowerPoint slides outline the steps they took to complete the evaluation. We met with the Center to identify their objectives: they wanted to know how the program impacted public school children, teachers, community partners, and parents of students.


5 Elementary Students, High School Students, Teachers, Parents, Community Partners

6 One: Identify the objectives, conceptualize variables, and develop indicators. Two: Select the sample. Three: Construct the indicators. Four: Select the research design. Five: Collect and analyze the data. Six: Write the report and present findings.

7 Click here to see the surveys and codebooks.


9 An old Chinese proverb states: I HEAR, I FORGET. I SEE, I REMEMBER. I DO, I UNDERSTAND. By "doing" this project we learned and really understood some important components of valid and reliable evaluation research.

10 We learned to: carefully consider the impact of timing; select the "right" indicators; deal with social desirability; and use multi-methods and multiple units of analysis.

11 First: Carefully Considering the Impact of Timing. (Still ahead: Selecting the "right" Indicators; Dealing with Social Desirability; Using Multi-Methods and Multiple Units of Analysis.)

12 Timing of the Pre-test. Many programs are ongoing, and this can have a major impact on the pre-test. In our study, many of the students had already participated in a Service Learning activity at some point in their school years, so we didn't have a true "pre" test: the "pre" test scores were contaminated by prior participation.

13 Timing of the Post-Test: the "May" Effect. Outside environmental/social factors need to be considered. In our study, we discovered that both teachers and students were more negative about almost EVERYTHING related to school at the end of the year. This may be explained by two factors: first, they were just TIRED of school and looking forward to vacation; second, they had just taken the standardized TAKS tests, which were stressful for both teachers and students.

14 Second: Selecting the "right" Indicators. (Covered: Carefully Considering the Impact of Timing. Still ahead: Dealing with Social Desirability; Using Multi-Methods and Multiple Units of Analysis.)

15 Selecting the Right Indicators: the Head Start Program. In the 1960s the Head Start program was launched. The objective was to increase the IQ scores of underrepresented populations, including children living in poverty. Early research showed that standardized IQ scores increased for several years and then decreased, until there was no difference between the experimental and control groups. While some felt this was evidence for discontinuing the program, parents came forward arguing that the researchers weren't using the right measurements.


17 Selecting the Right Indicators: the Head Start Program. A group of researchers called the Perry Preschool Consortium, with the input of teachers and parents, identified (1) social, (2) educational, and (3) socioeconomic indicators that differentiated preschool participants from a control group up to 19 years after participation in the program. The differences were compelling.

18 Accurate Interpretation of Indicators. Furthermore, this group argued that the decreasing IQ scores actually provided evidence that environmental factors CAN influence IQ, both positively and negatively. Thus, being in an "enriched" environment (i.e., the Head Start program) can increase IQ, but then being transferred to an impoverished environment (i.e., public schools in poor neighborhoods) can decrease IQ.

19 Schooling Success: High School Graduation or Equivalent; College or Vocational Training. Functional Competence: Ever Classified as Mentally Retarded; Time Spent in Special Education. Social Responsibility: Ever Detained or Arrested; Teen Pregnancies; Employed; Receiving Welfare.


21 Selecting the Right Indicators. Using focus groups and intensive interviews, we looked to (1) teachers, (2) parents, and (3) student participants, as well as past research, to help us identify valid and accurate indicators. Analysis of this qualitative data indicated that (1) some students did experience the desired impact, and (2) we needed to use "control" variables to accurately assess the impact of Service Learning.

22 Selecting the Right Control Variables. The following control variables were all highly significantly related to EVERY outcome measurement. Ownership of Project: students who were involved in planning the project and felt they "had a voice" were significantly more likely to experience positive outcomes. Amount of Participation: a significant number of students spent four hours or less involved in the project; this level of involvement did not result in positive outcomes, but students who spent more time did experience positive outcomes. Teacher's Attitudes and Experience: the teacher's attitude toward the project was an important factor; if teachers were excited about the program, their students tended to have more positive outcomes.

23 Student Success outcomes: 1. Leadership Skills; 2. Problem Solving Skills; 3. Academic Aspirations; 4. Liking for School. Control variables: whether or not students felt they made decisions about the project; the amount of time students indicated they participated in the project; teachers' self-reported attitudes about the project. Results from focus groups with students and intensive interviews with teachers indicated that these were valid indicators, and that the quality and quantity of participation were related to outcomes.

24 Mean Score on Outcome Measurements by Amount of Time Student Planned or Participated in Project

Students Planned Four Hours or More:
                                      Yes        No
  Attitudes Toward Attending College  12.81***   12.50***
  Attitudes Toward School             19.84***   18.45***
  Problem Solving Skills              17.01***   15.58***
  Leadership Skills                   17.15***   15.85***

Students Participated Four Hours or More:
                                      Yes        No
  Attitudes Toward Attending College  12.89***   12.18***
  Attitudes Toward School             19.54***   18.38***
  Problem Solving Skills              16.91***   15.09***
  Leadership Skills                   17.04***   15.46***

*p<.05  **p<.01  ***p<.0001
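The group comparisons on this slide boil down to two-sample mean tests. A minimal sketch of that kind of comparison, using Welch's t statistic over hypothetical leadership-skill scores (not the study's raw data, and the function name is our own):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical leadership-skill scores, NOT the study's data
planned = [18, 17, 19, 16, 18, 17]      # planned four hours or more
not_planned = [15, 16, 14, 16, 15, 17]  # did not

t = welch_t(planned, not_planned)  # positive t: "Yes" group scores higher
```

The p-values on the slide would then come from comparing t against the t distribution with the Welch-Satterthwaite degrees of freedom.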

25 Mean Scores on Outcome Measurements by Sense of Ownership (Whether or Not Students Made Decisions)

Students Made Decisions:
                                      Yes        No
  Attitudes Toward College            13.04***   12.36***
  Attitudes Toward School             19.59***   18.84***
  Problem Solving Skills              17.26***   15.58***
  Leadership Skills                   17.51***   15.81***

*p<.05  **p<.01  ***p<.0001

Click here to see the PowerPoint Presentation.

26 CHART 1. High School Students’ Perception of How Good They are at Speaking in Front of Groups by Whether or Not They Made Decisions about Service Learning Projects (p <0.0001)

27 CHART 2. High School Students’ Perception of How Good They are at Finding Resources by Whether or Not They Made Decisions about Service Learning Projects (p <0.0001)

28 Third: Dealing with Social Desirability. (Covered: Carefully Considering the Impact of Timing; Selecting the "right" Indicators. Still ahead: Using Multi-Methods and Multiple Units of Analysis.)

29 Beware of Social Desirability. In evaluation research, participants often evaluate a program positively EVEN when the program is "poor" and ineffective. It may not be seen as socially acceptable to do otherwise.

30 Social Desirability. Why did we become concerned? What are the "danger" signs? How did we attempt to alleviate it? How did we modify the construction of our surveys, our research design, and our analysis of the data to deal with this problem?

31 Social Desirability: Danger Signs. Type of research: past literature indicates that respondents tend to be very positive when asked about their participation in a program, even when it is a poor program. They don't want to believe they wasted their time, and they often feel an obligation to express appreciation for those who implemented the program.

32 Social Desirability: Danger Signs. Self-selection into the program: students and teachers were not required to participate in the program. Therefore the program was more likely to attract participants who already had positive attitudes toward these "types" of activities.

33 Social Desirability: Danger Signs. Consistently high scores on every aspect of the program, with no variation. "Response set" can occur: respondents give you the same response (usually positive) without seriously considering the question. The "ceiling" effect is a similar problem: consistently highly positive scores on the pre-test leave little room for improvement in scores.
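The "response set" danger sign described on this slide can be screened for mechanically: a respondent who gives the identical answer to every item is a straight-liner. A minimal sketch with hypothetical Likert answers and a helper name of our own choosing:

```python
def flags_response_set(answers, min_unique=2):
    """Flag a respondent who gives essentially the same answer to every
    item (a "response set"), one of the danger signs of social desirability."""
    return len(set(answers)) < min_unique

# 1-5 Likert answers from three hypothetical respondents
respondents = {
    "r1": [5, 5, 5, 5, 5, 5],  # straight-liner: same answer throughout
    "r2": [4, 5, 3, 4, 2, 5],  # varied answers
    "r3": [3, 3, 3, 3, 3, 3],  # straight-liner
}
suspect = [rid for rid, a in respondents.items() if flags_response_set(a)]
# suspect == ["r1", "r3"]
```

In practice a flag like this marks cases for closer review rather than automatic exclusion.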

34 Dealing with Social Desirability when Constructing your Survey/Questionnaire: Check List. Make participation voluntary and make answers anonymous or confidential. Vary negative/positive statements in your index. Avoid misleading/biased questions. Make statements or questions very specific.
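Varying negative and positive statements in an index, as this checklist advises, implies reverse-coding the negatively worded items before summing. A minimal scoring sketch with hypothetical items and a hypothetical helper:

```python
def score_index(answers, reverse_items, scale_max=5):
    """Sum a Likert index, reverse-coding negatively worded items so a
    high total always means a favorable evaluation."""
    total = 0
    for i, a in enumerate(answers):
        # reverse-code: on a 1-5 scale, 1 becomes 5, 2 becomes 4, etc.
        total += (scale_max + 1 - a) if i in reverse_items else a
    return total

# Four hypothetical items on a 1-5 scale; items 1 and 3 are negatively
# worded (e.g., "This project wasted class time"), so agreeing with them
# should LOWER the favorable score.
answers = [5, 1, 4, 2]  # endorses positives, rejects negatives
score = score_index(answers, reverse_items={1, 3})
# score == 5 + (6-1) + 4 + (6-2) == 18
```

A straight-liner who answers 5 to everything scores only 12 on this mixed index, which is one way varied wording exposes response sets.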

35 Dealing with Social Desirability when Constructing your Survey/Questionnaire: Check List (continued). Put "sensitive" questions at the end. Ask how they would change the program "under ideal circumstances." Avoid yes/no answers; ask for "degrees" of positive or negative. Ask for their input in improving the program rather than simply evaluating it. For instance: NOT "Is this a successful program?" but rather "What factors increase or decrease the success of this program?"

36 Dealing with Social Desirability in Your Research Design: Check List. If possible, don't evaluate your own program; an "outsider" would tend to be more objective, and participants would be more likely to provide unbiased answers. Have a variety of participants evaluate the program so you can look for consistencies/inconsistencies in answers: students, teachers, parents of participants, community partners.

37 Dealing with Social Desirability in Your Research Design: Check List (continued). Use multi-methods so you can compare results across methods to see if you get similar results, and look for additional insights. These could include: focus groups, participant observation, surveys, intensive interviews, content analysis. Content analysis is especially important for researchers who identify tangible products (e.g., bushels of grain) as their outcomes.

38 Dealing with Social Desirability when Analyzing the Data: Check List. Compare your program with other programs. Compare across different levels of participation within your sample to see if there are variations. Compare across different types of participation within your sample (i.e., in our study, we compared across types of Service Learning projects).

39 Dealing with Social Desirability when Analyzing the Data: Check List (continued). Compare across different "types" of participants (males vs. females, parents vs. children, rural vs. urban dwellers). Compare scores across questions, especially questions that measure the same outcomes. Compare answers across time (fall vs. summer participants). The most important thing to remember here is to NOT just ask if the program was successful, but rather HOW and WHEN it is most successful.
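The cross-group comparisons this checklist recommends amount to computing mean outcome scores per subgroup and looking for variation. A minimal sketch over hypothetical respondent records (the records, field names, and group_means helper are all our own, not the study's):

```python
from collections import defaultdict
from statistics import mean

def group_means(records, group_key, score_key):
    """Mean outcome score per subgroup, for checklist-style comparisons
    across participant types or data-collection periods."""
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r[score_key])
    return {g: mean(v) for g, v in groups.items()}

# Hypothetical respondent records, NOT the study's data
records = [
    {"type": "rural", "season": "fall",   "score": 17},
    {"type": "rural", "season": "summer", "score": 15},
    {"type": "urban", "season": "fall",   "score": 19},
    {"type": "urban", "season": "summer", "score": 16},
]
by_type = group_means(records, "type", "score")      # rural vs. urban means
by_season = group_means(records, "season", "score")  # fall vs. summer means
```

If one subgroup's means sit uniformly at the top of the scale while others vary, that pattern is itself a social-desirability warning sign.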

40 Fourth: Using Multi-Methods and Diverse Groups. (Covered: Carefully Considering the Impact of Timing; Selecting the "right" Indicators; Dealing with Social Desirability.)

41 Different research designs can provide both additional insights and further support for your results: focus groups, surveys, content analysis, participant observation, case studies, field trials, laboratory experiments, intensive interviews.

42 Evaluation of Service Learning Project: intensive interviews with teachers; focus groups with students; mail-out surveys with community partners; face-to-face surveys with parents; on-line surveys of Service Learning coordinators; surveys with students.

43 Evaluation of the Service Learning Program. Examples of data showing that the program is producing the desired outcomes, collected from teachers using telephone surveys and focus groups.

44 Descriptive Statistics for Elementary and Middle/High School Service Learning Teachers: Extent to Which Teachers Agree with the Following Statements about the Impact of Service Learning in Their Classroom (Agree/Strongly Agree)

                                           Elementary         Middle/High School
                                           Number   Percent   Number   Percent
  Positive Addition to Classroom Learning    72      90.00      73      96.05
  Beneficial for ALL Students                75      93.75      74      97.37
  Motivates Students to be Involved          66      87.50      66      88.89
  Helps Students Learn Curriculum            47      58.75      56      73.68
  Should be Required for All Students        40      50.00      50      65.79

45 Descriptive Statistics: Identification of GREATEST BENEFITS by Middle/High School and Elementary Service Learning Teachers (Percent)

                               Elementary   Middle/High School
  Benefits for Students
    Service to Others            50.00        32.43
    Understanding of World       14.71        25.68
    Personal Growth              30.89        32.43
    Help Learn Curriculum         4.41         9.46
  Benefits for Teachers
    Student Growth               82.26        73.91
    Service to Others            16.13        15.94
    Involvement with Students     1.61         7.25
    Break in TAKS                  --          2.90

46 As one teacher stated, "Service Learning is the most powerful and impactful thing I ever did in the classroom as a teacher. It hooked me, and I am a believer in the power." Another teacher claimed, "I think this program has transcended anything that anyone expected when they began the program. It has extended beyond what they thought it could achieve."

47 One teacher argued, "I could have never ever taught the lessons they learned about human nature." While another claimed, "It teaches kids the skills that are not book skills…skills like how to think, how to plan, how to organize, how to manage - stuff you can read about in a book, but until you do it, you don't know you have the ability to do it."

48 One teacher stated, "school is not as…engaging as when they learn through these projects…they are learning all of these things by action - their great public speaking skills, their writing skills, their marketing…" Another teacher explained, "in the writing TAKS, we had to write with a prompt so it kind of helped with the writing and the reading TAKS too."

49 Evaluation of the Service Learning Program. Examples of data showing that the program is producing the desired outcomes, collected from parents and community partners using telephone surveys and focus groups.

50 Descriptive Statistics for Parents and Community Partners: An Evaluation of the Service Learning Program

                                      Parents           Community Partners
                                      Mean*   Range     Mean*   Range
  Positive Addition to Classroom      5.00     0         4.94    1
  Beneficial for All Students         4.83     1         4.83    1
  Motivates Students to be Involved   4.83     1         4.89    1
  Helps Agency Achieve Goals           --      --        4.39    3
  Is Valued by Agency                  --      --        4.67    3
  Is Valued by Community               --      --        4.56    3

*on 5-pt. scale (SD to SA)  **sample size is small

51 One community partner described their relationship with the school: "We actually came to the schools…and we were looking for assistance. It's a great marriage. We are still married." And when describing the benefits for students: "…we've watched students mature into more socially aware students - much more mature. It's amazing. It's just amazing."

52 Timing of data collection is important. Selecting reliable/valid indicators is critically important; spend some time doing this. IF you are doing evaluation research, plan ways to reduce the impact of social desirability on your results. Use multi-methods when feasible to provide additional insights and greater support for your results. Try to gather information from all the different groups that may be impacted by the program (i.e., parents, students, etc.).

53 Dr. Carol Albrecht, Assessment Specialist, USU Ext. carol.albrecht@usu.edu, (979) 777-2421.

