Taking Student Success to Scale. High Impact Practices in the States
Assessment as Culture Management & Professional Development
Jerry Daday, Executive Director
Center for Innovative Teaching & Learning, Western Kentucky University
Criteria for High-Quality HIPs
- Expectations set at appropriately high levels
- Intentional: clear Essential Learning Outcomes (ELOs); structured experience
- Significant investment of time and effort
- Preparation, orientation, and training
- Interaction with faculty and peers
- Experience with diversity
- Frequent and constructive feedback
- Periodic and structured opportunities for reflection
- Relevance through real-world applications (i.e., hands-on experience)
- Public demonstrations of competence
Kuh & O'Donnell (2013)
Outcomes
- Transactional outcomes: retention, persistence, GPA
- Essential Learning Outcomes: AAC&U LEAP Essential Learning Outcomes; VALUE rubrics used for assessment
- Deep learning
- Applied learning
- Community/relationship building
HIPs Inventory
Last spring, we conducted an inventory of HIPs used in 53 of our majors at WKU:
- 88% of these majors offer a writing-intensive course
- 77% offer a capstone experience
- 66% offer collaborative/project-based assignments
- 64% offer a common intellectual experience
- 54% offer global learning
- 43% offer undergraduate research & creative activities
HIPs Inventory
On average, departments reported that students receive 5 HIPs within each major:
- 1 major offers 10 HIPs
- 2 majors offer 9 HIPs
- 3 majors offer 8 HIPs
Validity
Are we measuring what we think we are measuring? Before we can accurately and validly measure a concept (like a HIP), we need to engage in:
- Conceptualization: the process of defining a concept
- Operationalization: the process of specifying measures/indicators of a concept
Value of Taxonomies
- Explicitly define HIPs
- Ensure fidelity of the HIP by expressing its purpose/intent, Essential Learning Outcomes (ELOs), and the specific attributes that define the HIP
- Guide professional development of faculty & staff: a PD tool for improvement, and validation of an activity's impact for tenure & promotion and annual review
- Serve as a precursor to assessment efforts: parameters must be defined before meaningful assessment of student learning can take place (see the sketch below)
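Because a taxonomy must spell out parameters before assessment can be meaningful, it can help to see one entry written down as structured data. Here is a minimal Python sketch; every field name and value is invented for illustration and is not drawn from any actual taxonomy:

```python
# A hypothetical taxonomy entry as a plain data structure. All fields and
# values below are invented for illustration, not an official taxonomy.
service_learning_taxonomy = {
    "hip": "Service Learning",
    "purpose": "Connect course content to a community-identified need",
    "essential_learning_outcomes": ["civic engagement", "applied learning"],
    "defining_attributes": [
        "minimum contact hours with a community partner",  # threshold set locally
        "structured written reflection at set points in the term",
        "public demonstration of competence",
    ],
}

# Once the attributes are explicit, assessment can ask a concrete question:
# does a given course section satisfy each defining attribute?
for attribute in service_learning_taxonomy["defining_attributes"]:
    print(f"Assess: {attribute}")
```

Writing the attributes down this way is what turns "we offer service learning" into something a reviewer, a PD facilitator, or an assessment office can check against.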
Value of Taxonomies
Taxonomies may be used to guide the development of:
- Individual courses
- Programs within a department or university: First-Year Experience, Living-Learning Communities, Undergraduate Research, Service Learning, Internships & Study Abroad
- University-wide initiatives
Example: CSU Service Learning (Slide 1)
Example: CSU Service Learning (Slide 2)
Intermission
HIP: What Works and How Do We Know?
Jennifer Merriman, PhD
Executive Director, Research
August 25, 2017
Why should we care about program evaluation?
- If it is not working, why keep doing it?
- $$$$
Overview
- HIP Theory of Action
- Heterogeneity of Programs
- Revised Theory of Action
- Claims
- Formative and Summative Evaluation
- Fidelity of Implementation
- Developing an evaluation to help determine if your program is HIGH IMPACT
Theory of Action
HIP → Improved Student Outcomes
The causal logic of a program's impact on outcomes.
Heterogeneity of Practices
- What do we mean by "HIP"? How is it operationalized?
- Documentation of the specific practices yields more accurate evaluation of impact.
Variation in Which Practices Are Offered
[Table comparing Institutions A, B, and C on which of the ten HIPs each offers: First-Year Seminars and Experiences; Common Intellectual Experiences; Learning Communities; Writing-Intensive Courses; Collaborative Assignments and Projects; Undergraduate Research; Diversity/Global Learning; Service Learning; Internships; Capstone Courses and Projects]
Variation in Practice Operationalization
[Table comparing how Institutions A, B, and C operationalize Collaborative Assignments and Projects: format (study groups, team-based assignments, cooperative projects), meeting frequency (twice a year, once a month, weekly), and leadership (professor-led, student-led)]
Theory of Action
HIP → Improved Student Outcomes
- Which practices? Implemented in what ways?
- For which students? In what context?
- Which outcomes?
- Why and how will these practices influence student outcomes?
- How does implementation affect outcomes?
- Are outcomes seen for all students? In all contexts?
Revised Theory of Action
Specificity and context matter.
Context: an urban institution serving many first-generation students.
- Practices, with specific implementation details for each: Collaborative Assignments and Projects (team-based assignments, meeting weekly throughout the semester, student-led), Internships, Undergraduate Research
- Students: all students, only policy students, only STEM students
- Student outcomes: engagement, GPA, retention, other?
Black Box vs. Specific Practices
Which practices actually drive outcomes? A black box approach might mask effectiveness, or heterogeneous effects might yield null effects on average.
[Diagram: the specific practices from the revised theory of action (Collaborative Assignments and Projects: team-based assignments, meeting weekly throughout the semester, student-led; Internships; Undergraduate Research) mapped to student groups (all students, only policy students, only STEM students), with a GPA effect appearing for some practice/group combinations and no effect for others]
The toy simulation below shows how pooling across groups can mask a subgroup effect.
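A minimal Python simulation of the masking problem described above. The practice is assumed to raise GPA only for one student group; the group labels, effect size, and every other number are invented for illustration:

```python
# Toy simulation: a practice raises GPA for "policy" students only (assumed),
# so the pooled "black box" estimate sits between the two subgroup effects.
import random

random.seed(0)

def simulated_gpa(group: str, participated: bool) -> float:
    """Draw a GPA; only 'policy' students benefit from the practice (assumed)."""
    gpa = random.gauss(3.0, 0.3)
    if participated and group == "policy":
        gpa += 0.25  # assumed subgroup-specific effect
    return min(max(gpa, 0.0), 4.0)

groups = ["policy" if i % 2 == 0 else "stem" for i in range(10_000)]
treated = [simulated_gpa(g, True) for g in groups]
control = [simulated_gpa(g, False) for g in groups]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Pooled effect:  {mean(treated) - mean(control):+.3f}")  # about +0.125
for g in ("policy", "stem"):
    t = [x for x, gg in zip(treated, groups) if gg == g]
    c = [x for x, gg in zip(control, groups) if gg == g]
    print(f"{g:>6} effect: {mean(t) - mean(c):+.3f}")  # about +0.25 and 0.00
```

The pooled estimate is half the true effect for the group that benefits, which is exactly how a black-box evaluation can make an effective practice look marginal.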
Claims
Figure out the kinds of claims you want to make BEFORE you begin developing your evaluation. The desired claims will drive evaluation design and data requirements.
- Too vague: "Students who participate in HIPs are more successful."
- Specific: "Traditionally underserved minority students who successfully complete an undergraduate research project in a STEM field at 4-year public IHEs are 8 times more likely to go on to graduate school."
The sketch below unpacks the arithmetic behind a claim of this form.
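Claims of the form "N times more likely" are often reported as odds ratios. Here is a hedged Python sketch of that arithmetic using a 2x2 table; all counts are invented and are not the data behind the slide's example:

```python
# Invented 2x2 table: undergraduate research (UGR) completion vs. grad school.
went_to_grad_school = {"completed_ugr": 80, "did_not": 120}
no_grad_school      = {"completed_ugr": 20, "did_not": 240}

# Odds of grad school with and without the practice.
odds_with_ugr    = went_to_grad_school["completed_ugr"] / no_grad_school["completed_ugr"]  # 4.0
odds_without_ugr = went_to_grad_school["did_not"] / no_grad_school["did_not"]              # 0.5

print(f"Odds ratio: {odds_with_ugr / odds_without_ugr:.1f}")  # 8.0
```

Note that "more likely" can also mean a risk ratio rather than an odds ratio; a well-formed claim states which measure it uses, the comparison group, and the data source.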
Formative vs. Summative Evaluation

Formative
- Definition: evaluates a program during development in order to make early improvements; helps to refine or improve the program
- Uses: when starting a new program; to assist in the early phases of program development
- Example questions: How well is the program being delivered? What strategies can we use to improve this program?

Summative
- Definition: provides information on program effectiveness; conducted after the completion of the program design
- Uses: to help decide whether to continue or end a program; to help determine whether a program should be expanded to other locations
- Example questions: Should this program continue to be funded? Should we expand these services to all other after-school programs in the community?
Fidelity of Implementation
The extent to which the delivery of an intervention adheres to the program model as intended by the developers of the intervention. If null program effects are found, it may be that the program is not effective, OR that the program was not implemented with fidelity.
Five dimensions:
1) Adherence: the extent to which program components are delivered as prescribed by the model. Adherence indicators can include program content, methods, and activities.
2) Exposure: the amount of program delivered (i.e., dosage) in relation to the amount prescribed by the program model. Exposure can include the number of sessions or contacts, attendance, and the frequency and duration of sessions.
3) Quality of delivery: the manner in which a program is delivered. Aspects can include provider preparedness, use of relevant examples, enthusiasm, interaction style, respectfulness, confidence, and ability to respond to questions and communicate clearly.
4) Participant responsiveness: the manner in which participants react to or engage in a program. Aspects can include participants' level of interest in the program; perceptions about the relevance and usefulness of a program; and their level of engagement, enthusiasm, and willingness to engage in discussion or activities.
5) Program differentiation: the degree to which the critical components of a program are distinguishable from each other and from other programs.
A sketch of how these ratings might be tracked follows below.
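One way these dimensions become measurable is to rate each on a common scale per course section. A minimal Python sketch, assuming a 1-5 rating scale and an unweighted mean; both are assumptions, and a real fidelity rubric might weight dimensions such as adherence and exposure differently:

```python
# Hypothetical fidelity record for one course section across the five
# dimensions above. The 1-5 scale and unweighted mean are assumptions.
from dataclasses import dataclass, field

DIMENSIONS = (
    "adherence",
    "exposure",
    "quality_of_delivery",
    "participant_responsiveness",
    "program_differentiation",
)

@dataclass
class FidelityRating:
    section: str                                 # e.g., one seminar section
    scores: dict = field(default_factory=dict)   # dimension -> 1-5 rating

    def overall(self) -> float:
        """Unweighted mean across the five dimensions."""
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

rating = FidelityRating(
    section="FYS-101-003",  # hypothetical section ID
    scores={
        "adherence": 4,
        "exposure": 3,
        "quality_of_delivery": 5,
        "participant_responsiveness": 4,
        "program_differentiation": 2,
    },
)
print(f"{rating.section} overall fidelity: {rating.overall():.1f}")  # 3.6
```

Ratings like these are what let an evaluator distinguish "the program did not work" from "the program was not delivered as designed."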
Intermission
Action Steps: What can I do next?
1) Describe your HIP programs carefully: Which programs do we offer? Who participates in the programs? How is each program operationalized? What are the expected outcomes for each program, and why do we think we will see those outcomes? This sets up the Theory of Action.
2) Decide what claims you want to make from the evaluation. Different stakeholders have different claims they want to make internally and externally; vet the claims with all pertinent groups.
3) Determine whether the program is still in development, continuously changing, or fully developed. This will determine whether to undertake formative or summative evaluation.
4) Assess data availability: How will we standardize our metrics? Are all data available for the evaluation? When will the data be available? When do we need to report findings?
5) Identify the counterfactual: when you talk about program outcomes, you need to think carefully about the comparison group. Improved student success compared to what? (A minimal sketch follows below.)
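To make step 5 concrete, here is a minimal Python sketch of a raw participant vs. non-participant comparison on one-year retention. All counts are invented, and the raw difference ignores selection into the program, which is exactly why the counterfactual question matters:

```python
# Invented counts: who was retained to a second year, by HIP participation.
participants     = {"n": 240, "retained": 204}
non_participants = {"n": 760, "retained": 570}

def retention_rate(group: dict) -> float:
    return group["retained"] / group["n"]

print(f"Participants:     {retention_rate(participants):.1%}")      # 85.0%
print(f"Non-participants: {retention_rate(non_participants):.1%}")  # 75.0%
diff = retention_rate(participants) - retention_rate(non_participants)
print(f"Raw difference:   {diff:+.1%}")                             # +10.0%
```

A real evaluation would need a credible comparison group (for example, via matching on prior achievement and demographics) before attributing any of that difference to the program.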
Key Questions
- Taxonomies can be used for faculty professional development to ensure HIPs are implemented with fidelity.
- Taxonomies can also be used to set up program evaluation. For example: Is there a difference in outcomes based on these degrees of implementation? Does a "low-intensity" HIP have the same impact as a "high-intensity" HIP?
- What should we track as outcomes? What do you wish to measure (transactional? essential learning? deep or applied learning?)
- What about the 8 criteria of a HIP (Kuh & O'Donnell, 2013)? Should these be what we measure as attributes across all HIPs, or should attributes vary across HIPs?
Taking Student Success to Scale. High Impact Practices in the States.
National Model and Laboratory for Student Success
http://ts3.nashonline.org/high-impact-practices/