1
Narrowing the Gap between Evaluation and Developmental Science: Implications for Evaluation Capacity Building
Tiffany Berry, PhD, Research Associate Professor and Associate Director, Claremont Evaluation Center, Claremont Graduate University
June 29, 2015
It's my honor to be with you here tonight and I appreciate the warm welcome, hospitality, and the beauty of South Africa. Tonight, I'm going to discuss how to strategically narrow the rather wide gap we have between evaluation as a discipline and developmental science, and how we can harness a bidirectional transaction between the two disciplines (focusing on what developmental science could offer the evaluation community as well as how evaluation can improve developmental science). Then, I'm going to end on the implications these issues have for building evaluation capacity.
2
Disciplinary Training
Disciplinary training: child and adolescent development; evaluation. Context of work: research university; full-time educational evaluator. My interest in this work stems from constantly being at the intersection of multiple worlds.
4
Developmental Research Process
Evaluation process: youth programs → evaluation → improve programs and youth outcomes. Developmental research process: research community → research → improve theory and knowledge.
5
How can we strategically narrow the gap between developmental science and evaluation practice? How can we build capacity within both fields?
6
Developmental Science → Evaluation Practice: "developmental sensitivity". Evaluation Practice → Developmental Science: tools, mechanisms, value, development "in context".
7
Where does developmental sensitivity conceptually fit within the discipline of evaluation?
8
#1: AEA Guiding Principles
"Evaluators have the responsibility to understand and respect differences among participants, such as differences in their culture, religion, gender, disability, age, sexual orientation and ethnicity, and to account for potential implications of these differences when planning, conducting, analyzing, and reporting evaluations" (AEA, 2004, emphasis added). In its Guiding Principles for Evaluators, the American Evaluation Association (AEA) officially acknowledges that different age populations require special understanding and consideration. Children are a unique group: they change quickly, their development is marked by peaks and valleys, and their growth is both qualitative and quantitative. What remains largely undetermined, however, is how evaluators should "account for potential implications of these differences" when working with underage program participants.
10
#2: Cultural Competence
The culturally competent evaluator is one who "draws upon a wide range of evaluation theories and methods to design and carry out an evaluation that is optimally matched to the context" (AEA Cultural Competence Statement, 2011). Youth as a context to attend to…
11
Childhood as a context to understand
12
#3: Need. Many programs serve children and youth, including early childhood development (ECD) programs.
13
#3: Need. Insufficient and poor-quality evaluations with youth populations (Bialosiewicz & Berry, in preparation).
We reviewed peer-reviewed articles from psychology/psychiatry, health, education, interdisciplinary studies, social work, zoology, and law. Evaluations were local in the US (41%), statewide or multistate in the US (6%), or international (53%). Articles were coded on 31 variables:
- Description of the article (authors and affiliation, substantive field, year)
- Description of the program (purpose, components, target population, etc.)
- Description of the evaluation (design, focus/purpose, participants, independent and dependent variables, moderators, implementation, measures, timing of assessments, etc.)
The coding categories related to the evaluation were intended to extract information that could be compared against best practices for developmentally sensitive assessment of child and youth populations. A checklist for developmentally sensitive youth program evaluation was developed first through an extensive literature review of developmental psychology and applied developmental psychology research, then expanded and refined through feedback and input from the evaluation community at a roundtable session at the 2011 conference of the American Evaluation Association. This presentation examines some of the key findings in light of the guidelines in that checklist:
- 57% did not assess implementation
- 47% did not take natural maturation into account
- 45% did not take a whole-child approach (examining more than one limited outcome variable)
- 70% did not report the psychometric properties of the scales they used with young children
- 46% did not disaggregate by key moderators (based on individual characteristics)
- 88% did not disaggregate by level of participation
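The two disaggregation gaps at the end of this list are straightforward to close once data are in hand. Below is a minimal, hypothetical sketch in Python/pandas; the file name and the column names (days_attended, reading_score, gender) are assumptions for illustration, not from the study.

```python
# Hypothetical sketch: disaggregating an outcome by level of participation
# and by a key moderator, two practices most reviewed evaluations skipped.
import pandas as pd

df = pd.read_csv("program_data.csv")  # one row per participating child (hypothetical)

# Bin attendance into dosage levels before comparing outcomes.
df["dosage"] = pd.cut(
    df["days_attended"],
    bins=[0, 30, 60, 1000],
    labels=["low", "medium", "high"],
)

# Mean outcome (and group size) by dosage, then by dosage x gender.
print(df.groupby("dosage", observed=True)["reading_score"].agg(["mean", "count"]))
print(df.groupby(["dosage", "gender"], observed=True)["reading_score"].mean().unstack())
```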
14
#3: Need. Early intervention holds the most promise for putting youth on positive developmental trajectories.
15
How do we define developmental sensitivity?
16
Principles of Development
Principles of development: interactions between the individual and the environment; age; sensitive periods; milestones. Domains: cognitive, physical, social-emotional, moral, behavioral.
Age examples: in infancy, attachment matters, with consequences for social development in later childhood; in adolescence, a sense of autonomy matters, with consequences for self-esteem. Language acquisition: to master a second language (or language in general), exposure early in life is essential. Perceptual development. Attachment.
Positive youth development (PYD) encompasses a broader model of development, including six domains of competencies to be developed (Catalano, Berglund, Ryan, Lonczak, & Hawkins, 2004). It takes a more holistic approach (focusing on positive growth as well as the reduction of problems), hence the inclusion of emotional, moral, and behavioral development. Within our evaluation we attempted to assess students' development across multiple developmental domains. With the exception of grades, test scores, and academic attitudes, all the other constructs were assessed through our student survey. Discuss how we measured the whole child via these constructs on the student survey.
17
How do we apply developmental sensitivity to educational evaluation practice?
18
CDC Evaluation Framework
CDC Evaluation Framework. Phase I: Engage Stakeholders. Phase II: Describe the Program. Phase III: Focus the Evaluation Design. Phase IV: Gather and Analyze Evidence. Phase V: Justify Conclusions. Phase VI: Ensure Use and Share Lessons Learned.
19
Phase I: Engage Stakeholders
Do stakeholders have realistic expectations for development? Knowledge of child development? Do they understand the importance of developing quality attachment relationships in early childhood? If they are running an adolescent program, do they understand the importance of developing identity and autonomy?
20
Phase II: Describe the Program
Is the program developmentally appropriate? Does it capitalize on multiple developmental domains? Is it aligned with sensitive periods?
Articulate a logic model; clarify program goals and objectives in relation to actual implementation; analyze program context. Developmentally appropriate: is a program targeting adolescents asking them to color to improve their cognitive ability? Are services integrated across developmental domains?
21
Phase III: Focus the Evaluation Design
Does the design account for maturation? Stability and variability across domains? Mediators and moderators? Ecological context?
Clarify the purpose of the evaluation and how results are to be used; describe methods; identify evaluation questions; articulate evaluation procedures. Consider how to account for maturation, stability and variability across domains, risk and resiliency, and mediators and moderators (interactions between the individual and context). A minimal sketch of one way to account for maturation follows.
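As one illustration of accounting for natural maturation, here is a minimal, hypothetical sketch comparing participants' pre-to-post change against a comparison group (a difference-in-differences style model). The file and column names are assumptions; in a real evaluation the design and covariates would be chosen with stakeholders.

```python
# Hypothetical sketch: separating program effects from natural maturation
# using a comparison group. Assumes long-format data: one row per child per
# wave, with a 0/1 "program" indicator, a 0/1 "wave" indicator (pre/post),
# and age in months.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("youth_outcomes.csv")  # hypothetical file and columns

model = smf.ols("score ~ program * wave + age_months", data=df).fit()

# The program:wave coefficient estimates the program effect over and above
# the pre-to-post change the comparison group shows from maturation alone.
print(model.summary())
```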
22
Phase IV: Gather and Analyze Evidence
Developmental precursors? Standardized assessments across domains? Developmentally sensitive measures? Mixed methods?
Decide what to measure, which assessments to use, and who will respond to assessments. Consider the role of developmental precursors (e.g., self-regulation as a precursor to academic achievement). Standardized assessments exist for cognitive/academic domains but less so for socio-emotional domains, so multiple informants matter. Use developmentally sensitive measures (e.g., age-appropriate response options); observations and qualitative research are important. Pilot test instruments (a sketch of a basic psychometric check appears below).
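Because many youth evaluations never report the psychometric properties of their scales, here is a minimal sketch of one basic check, internal consistency (Cronbach's alpha), computed by hand. The file name and item column names are hypothetical.

```python
# Hypothetical sketch: reporting basic psychometric evidence (internal
# consistency) for a survey scale administered to children.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame with one column per scale item."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

survey = pd.read_csv("student_survey.csv")                    # hypothetical file
engagement_items = survey[["eng1", "eng2", "eng3", "eng4"]]   # hypothetical items
print(f"Engagement scale alpha: {cronbach_alpha(engagement_items):.2f}")
```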
23
Phase V: Justify Conclusions
Do conclusions reflect an understanding of youth in context?
Analyze data and draw conclusions about the program; link conclusions to evidence of effectiveness. Embrace and measure variability in program participants: why they joined, levels of risk, performance across contexts. Use statistical modeling to identify pathways and moderators (see the sketch below).
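One simple way to probe a developmental pathway is a product-of-coefficients mediation check. The sketch below assumes, purely for illustration, that the program is hypothesized to improve self-regulation, which in turn improves achievement; the file and column names are hypothetical, and a real analysis would bootstrap the indirect effect.

```python
# Hypothetical sketch: probing a developmental pathway (mediation).
# Assumes "program" is a 0/1 participation indicator and self_regulation is
# the proposed mechanism linking the program to achievement.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluation_data.csv")  # hypothetical file and columns

# Path a: program participation -> proposed mechanism (self-regulation).
path_a = smf.ols("self_regulation ~ program", data=df).fit()

# Path b (and direct effect c'): mechanism -> outcome, controlling for program.
path_b = smf.ols("achievement ~ self_regulation + program", data=df).fit()

# Product-of-coefficients estimate of the indirect (mediated) effect.
indirect = path_a.params["program"] * path_b.params["self_regulation"]
print(f"Indirect effect estimate: {indirect:.3f}")
# A real analysis would bootstrap this quantity for a confidence interval.
```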
24
Phase VI: Ensure Use and Share Lessons Learned
Make stakeholders aware of the findings; ensure findings are considered in decisions that affect the program. Communicate findings back to the program AND to scholarly outlets to promote understanding of "youth in context".
25
Developmental Sensitivity
Developmental sensitivity applies across the full CDC Evaluation Framework: Phase I: Engage Stakeholders; Phase II: Describe the Program; Phase III: Focus the Evaluation Design; Phase IV: Gather and Analyze Evidence; Phase V: Justify Conclusions; Phase VI: Ensure Use and Share Lessons Learned.
26
Developmental Science → Evaluation Practice: "developmental sensitivity". Evaluation Practice → Developmental Science: tools, mechanisms, value, development "in context".
27
How can we conceptualize evaluation practice in a way that will inform developmental science?
28
Evaluation Practitioners
Diagram elements: Afterschool Programs; Context; Mechanisms; Collaboration; Dissemination; Eval Theory; Program Theory; Social Science Theory; Service Providers; Research Community; Evaluation Practitioners; Improve Programs; Improve Theory; consistent attendance (what predicts that?).
We start with a problem. For example, society needs youth with, as Rich Lerner would say, the 6 C's (competence, caring, connection, confidence, contribution, character). Youth programs are developed to try to build these skills, and the research community is engaged to understand how to measure positive youth development, how to conceptually define it, and what its consequences and antecedents are.
If we harnessed information from both sources and used evaluation as a bridge, we would engage in evaluation processes built on three features: understanding context, mechanisms, and meaningful collaboration. By mechanisms, we mean not only the core processes that bring about intervention effects, but also developmental mechanisms (what mechanisms are responsible for a change in child development?). For context, when applied to youth development programs, we often think of Bronfenbrenner and his ecological systems of development: how is a child's ecology (home, school, community, etc.) interacting with program services to produce intended effects? Or, how do these developmental or program mechanisms work in particular contexts? The interaction between the context and the mechanisms that bring about change is the key. For example, I've done a lot of work in the afterschool context, and we've identified several different activity contexts that seem to relate to differential outcomes for youth (and the outcomes differ for different ages of youth).
Too often in the basic research literature, we don't focus on practitioners or try to learn from them. Tapping into practitioners' knowledge through collaboration is key, something that Huey Chen, for example, has written about extensively as an undervalued part of the evaluation enterprise. Practitioners' institutional knowledge will serve us well as we investigate how different mechanisms play out in different contexts.
Each of these features of the evaluation is informed by different theories: evaluation theory (what framework might best be applied to a particular mode of inquiry), program theory (how stakeholders think the program works), and social science theory (what critical frameworks, research, and theories are particularly relevant to understanding the evaluand and the program theory). The results of this evaluation inquiry can be applied to improve programs as well as to inform developmental theory for the broader research community. But another dissemination audience that is often missed is evaluation practitioners, all of you! How can we create better systems so that the broader evaluation community is learning from and engaging in relevant program work?
29
Developmental Science
Evaluation Practice → Developmental Science
30
Implications for Building Evaluation Capacity
Programs: take an organizational learning approach; let evaluators learn from you.
Evaluators: tools and measurement; social science theory; age as a unique context.
Institutions: promote interdisciplinary training; develop and reward scholar-practitioners.
Evaluation training programs: increase content knowledge in substantive domains related to the program's context; increase knowledge of child development; increase program staff's knowledge through evaluation as education.
31
Evaluation is a bridge between theory and practice.
32
Tiffany.berry@cgu.edu; 909.607.1540