STATISTICAL AND METHODOLOGICAL CONSIDERATIONS FOR EXAMINING PROGRAM EFFECTIVENESS Carli Straight, PhD and Giovanni Sosa, PhD Chaffey College RP Group Conference.

STATISTICAL AND METHODOLOGICAL CONSIDERATIONS FOR EXAMINING PROGRAM EFFECTIVENESS Carli Straight, PhD and Giovanni Sosa, PhD Chaffey College RP Group Conference Presentation April 1, 2013

Pitfalls of Significance Testing

[Slide table, partially lost in extraction: number correct at pretest and posttest for nine assessment items, N = 30. None of the nine comparisons, nor the average number correct, was statistically significant. Recoverable rows: Item 4, 7 to 10; Item 6, 5 to 13; Item 8, 6 to 16; Item 9, 3 to 15.]

Pitfalls of Significance Testing

[Slide table, partially lost in extraction: NSSE benchmark scores for the sample university (N > 1,000) vs. its comparison group (N > 10,000). All five comparisons were statistically significant: Level of Academic Challenge, Active and Collaborative Learning, Student-Faculty Interaction, Enriching Educational Experiences, and Supportive Campus Environment. Adapted from NSSE (2008).]

Pitfalls of Significance Testing

[Slide chart, lost in extraction: average grades (GPA scale) for groups of N = 187, 408, 200, and 795; the comparisons shown were significant at p < .01, p < .05, and p < .01.]

Significance Testing: Conclusions
 P-values are a joint function of sample size and effect size
 Greatly influenced by sample size
 Do not speak to the magnitude of a difference
 Not well understood, even by 'experts'
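The first conclusion can be made concrete with a short sketch (Python; the effect size, sample sizes, and the normal approximation are illustrative assumptions, not the presenters' computations). Holding a small effect (d = .20) fixed, the p-value of a two-group mean comparison falls below .05 purely because the per-group n grows:

```python
import math

def p_value_two_sample(d, n_per_group):
    """Approximate two-sided p-value for a two-sample mean comparison,
    given a standardized effect size d and n observations per group
    (normal approximation to the t distribution)."""
    z = abs(d) * math.sqrt(n_per_group / 2)
    # Two-sided tail probability of the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The same small effect (d = .20) moves from "not significant" to
# "significant" as the sample grows:
for n in (20, 100, 500):
    print(n, round(p_value_two_sample(0.20, n), 4))
```

With 20 students per group the difference is nowhere near significant; with 500 per group the identical effect is highly significant, which is exactly why a p-value alone says nothing about magnitude.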

Practical Significance: Effect Size
 Effect sizes come in various forms
 Standardized (d, r)
   Cohen's conventions: d = .20 small; .50 moderate; .80 large
   r = .10 small; .30 moderate; .50 large
 Discipline-specific benchmarks
   Aspirin example (Rosenthal & DiMatteo, 2002)
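For the standardized d, a minimal sketch of the computation (Python; the two score lists are hypothetical, not data from the presentation):

```python
import math
from statistics import mean, stdev  # stdev is the sample SD (n - 1 denominator)

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled_sd = math.sqrt(((na - 1) * sa ** 2 + (nb - 1) * sb ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical course grades (GPA scale) for two small cohorts:
program = [3.2, 3.5, 2.9, 3.8, 3.1, 3.6]
comparison = [2.8, 3.0, 2.7, 3.2, 2.9, 3.1]
print(round(cohens_d(program, comparison), 2))
```

By Cohen's conventions the result here (d ≈ 1.46) would count as a large effect, regardless of whether the tiny samples reach statistical significance.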

Effect Size Examples

Assessment item results, pretest vs. posttest (N = 30; pre/posttest counts for some items were lost in extraction):

Item           Pretest   Posttest   Significant?   Effect Size (d)
Item 1         (lost)    (lost)     No             .37
Item 2         (lost)    (lost)     No             .27
Item 3         (lost)    (lost)     No             .75
Item 4         7         10         No             .22
Item 5         (lost)    (lost)     No             .55
Item 6         5         13         No             .60
Item 7         (lost)    (lost)     No             .34
Item 8         6         16         No             .71
Item 9         3         15         No             .93
Avg. Correct   (lost)    (lost)     No             .61

Effect Size Examples

NSSE benchmarks, sample university (N > 1,000) vs. comparison group (N > 10,000):

Benchmark                           Significant?   Effect Size (d)
Level of Academic Challenge         Yes            .72
Active and Collaborative Learning   Yes            .44
Student-Faculty Interaction         Yes            .08
Enriching Educational Experiences   Yes            .23
Supportive Campus Environment       Yes            .30

Adapted from NSSE (2008)

Effect Size Examples

[Slide chart, partially lost in extraction: average grades (GPA scale) for groups of N = 187, 408, 200, and 795; the comparisons shown had effect sizes of d = .86, d = .35, and d = 1.19.]

Wilson’s Effect Size Calculator

Odds Ratios
 Reflect the relative odds of an outcome of interest given exposure to a variable of interest
 OR = (A/B) / (C/D), where A and B are the successful and not-successful counts in one group and C and D the corresponding counts in the other

[Slide table, partially lost in extraction: success counts by medium vs. low self-efficacy (SE); the computation survived only as OR = (lost) / 2.788 = 5.40]

Odds Ratios
 Interpreting odds ratios: OR = 1.50 small; 2.50 moderate; 4.25 large
 OR = 1 => intervention does not affect the odds of the outcome
 OR > 1 => intervention associated with higher odds of the outcome
 OR < 1 => intervention associated with lower odds of the outcome
 Converting odds ratios to ds and vice versa: d = ln(OR) x sqrt(3) / pi
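A minimal sketch of the odds-ratio arithmetic and the standard logit-based OR-to-d conversion (Chinn, 2000). The 2x2 counts below are hypothetical stand-ins, since the slide's own table did not survive extraction; only its OR of 5.40 did:

```python
import math

def odds_ratio(a, b, c, d):
    """OR = (A/B) / (C/D): the odds of success in one group divided by
    the odds of success in the other."""
    return (a / b) / (c / d)

def or_to_d(odds):
    """Convert an odds ratio to Cohen's d: d = ln(OR) * sqrt(3) / pi."""
    return math.log(odds) * math.sqrt(3) / math.pi

def d_to_or(d):
    """Inverse conversion: OR = exp(d * pi / sqrt(3))."""
    return math.exp(d * math.pi / math.sqrt(3))

# Hypothetical counts: 60 of 80 medium-SE students succeeded,
# vs. 30 of 75 low-SE students.
print(odds_ratio(60, 20, 30, 45))   # (60/20) / (30/45) = 4.5
print(round(or_to_d(5.40), 2))      # the slide's OR of 5.40 on the d metric
```

On this scale the slide's OR of 5.40 corresponds to roughly d = .93, i.e. a large effect under either set of conventions.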

Working with Beta Weights

Predictors of Course Performance among Fast-Track Students Completing both the Pre- and Post-Test Self-Efficacy (SE) Measure (N = 623)

Predictor                  B (SE)         Beta
Self-Efficacy (Post)**     .09 (.01)      .42
Age Range**                .13 (.03)      .18
Af. American vs. Others*   -.31 (.15)     -.08
Hispanic vs. Others        -.14 (.09)     -.07
First-Gen Status           .06 (.08)      .03
Asian vs. Others           .10 (.16)      .03
Gender                     -.01 (.08)     -.002
Work Hours                 <.01 (<.01)    .005

R² = .22; * p < .05; ** p < .01

Working with Beta Weights

Predictors of Course Performance among Fast-Track Students Completing both the Pre- and Post-Test Self-Efficacy (SE) Measure (N = 623)

[Slide table, partially lost in extraction: the regression model above extended with zero-order r, semi-partial r, and effect size |d| columns for each predictor; the values in the added columns did not survive. R² = .22; * p < .05; ** p < .01]
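When only correlations survive (zero-order or semi-partial r), they can be moved onto the d metric with the standard conversion d = 2r / sqrt(1 - r²). A minimal sketch (Python; treating the table's standardized beta as a correlation is a rough illustration, not the presenters' procedure):

```python
import math

def r_to_d(r):
    """Convert a correlation coefficient to Cohen's d: d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    """Inverse conversion: r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)

# The table's standardized beta of .42 for post-test self-efficacy,
# read loosely as a correlation, maps to roughly d = .93:
print(round(r_to_d(0.42), 2))
```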

Basic Steps to Designing a Study that Measures Program Effectiveness

Example: How Do Students Perform in Fast-Track Courses?
 Select a reference point: compared to whom/what?
 Define what is meant by performance: course completion rate? Course success rate? Retention rate? Other?
 Select an appropriate statistical analysis
 Conduct analyses and write up results

Select Comparable Cohorts

Determine what/whom performance outcomes will be measured against
 Goal is to select two cohorts that are the same in as many ways as possible, aside from participation in the relevant program
 Within-Group – observe outcomes of the same students in program and out of program (no need for controls)
 Between-Group – observe outcomes of different students, some of whom participated in the program and some of whom did not (control for pre-existing group differences)

Select Comparable Cohorts  Within group comparisons  Same students, compare performance in Fast-Track and non-Fast-Track courses during same time period  “Do students who earn GORs in both Fast-Track and non-Fast-Track courses perform better, worse, or the same in the two formats?”  Between group comparisons  Different students, one cohort earned a GOR in at least one Fast-Track course and one cohort earned no GORs in a Fast-Track course across the same time period  “Do students who earn GORs in Fast-Track courses perform better, worse, or the same as students who do not earn GORs in Fast-Track courses?”  Select variables to control so that “all else is equal”

Within-Group Comparisons
1) Determine time period of interest
 Ensure that there are enough data to make comparisons and that programmatic changes were not implemented during the selected period
Chaffey fast-track example:
 Fast-track courses were first implemented in spring 2010 but increased significantly starting fall 2011
 To obtain a strong sample size and ensure that some of the kinks were worked out, data were analyzed from fall 2011 forward
 Using MIS referential files, select for the fall 2011 and spring 2012 terms

Within-Group Comparisons
2) Code your data file so that student behavior in and out of the program can be measured
Chaffey fast-track example:
 Obtain a list of all fast-track sections from the course scheduler or another party on campus
 Use the obtained list to flag all fast-track sections in the MIS file
 Search start and end dates and delete short-term sections from the file (use xf02 "SESSION-DATE-BEGINNING" and xf03 "SESSION-DATE-ENDING")

Within-Group Comparisons  Delete all cases in which a student did not earn a GOR in fall 2011 or spring 2012  Create coding system for fast-track and full-term sections (e.g., compute two new variables, fast-track = 1 if section is fast-track and full-term = 1 if section is full-term)  Aggregate number of fast-track sections and number of full-term sections by student id and term (this will give you two new variables in your dataset that reflect a count of GORs each student earned in fast-track and full-term courses for each semester)

Within-Group Comparisons
3) Select for students whose behavior reflects program participation and program non-participation across the selected time period
Chaffey fast-track example:
 Select cases in which the sum of fast-track GORs >= 1 and the sum of full-term GORs >= 1 (i.e., the student has taken at least one fast-track and one full-term course)
 Save selected cases to a new file

Within-Group Comparisons
4) Compare performance outcomes of the same students in program and out of program

[Slide chart, lost in extraction: course success rates for the same N = 4,153 students in fast-track vs. full-term courses, alongside the all-college success rate.]

Between-Group Comparisons
1) Determine time period of interest
 Ensure that there are enough data to make comparisons and that programmatic changes were not implemented during the selected period
Chaffey fast-track example:
 Fast-track courses were first implemented in spring 2010 but increased significantly starting fall 2011
 To obtain a strong sample size and ensure that some of the kinks were worked out, data were analyzed from fall 2011 forward
 Using MIS referential files, select for the fall 2011 and spring 2012 terms

Between-Group Comparisons
2) Code the data file so that two distinct cohorts are identified: one that participated in the program and one that did not
Chaffey fast-track example:
 Obtain a list of all fast-track sections from the course scheduler or another party on campus
 Use the obtained list to flag all fast-track sections in the MIS file
 Aggregate the number of fast-track sections by student ID and term (this yields a new variable reflecting a count of GORs each student earned in fast-track courses each semester)

Between-Group Comparisons  Remove all records in which a GOR was not assigned  Create cohort variable with two mutually exclusive groups  Cohort 1 consists of anyone who earned a GOR in a fast-track course during the specified term (i.e., fast-track variable >= 1)  Cohort 2 consists of anyone who earned a GOR in a course or courses other than fast-track during the specified term (i.e., fast-track variable = 0)

Between-Group Comparisons
3) Compare the cohort groups on a variety of pre-existing variables to measure differences outside of program participation (these will guide you in setting up controls for the next step)
Chaffey fast-track example:
 Gender, ethnicity, age, DPS status, enrollment status, academically disadvantaged status, first-generation status, term units attempted, term units earned, cumulative units attempted, cumulative units earned, cumulative GPA, self-efficacy, assessment scores

Example of Categorical Variable Comparisons

[Slide table, lost in extraction: counts and percentages of fast-track vs. non-fast-track students by gender (female, male, unknown) and first-generation status (yes, no), with an effect size |d| for each comparison.]

Example of Continuous Variable Comparisons

[Slide table, partially lost in extraction: M, SD, and effect size |d| for fast-track students (n = 2,699) vs. non-fast-track students (n = 16,732) on term units attempted, term units earned, cumulative units attempted, cumulative units earned, cumulative GPA*, and self-efficacy**.
* Fast-track n = 2,689, non-fast-track n = 16,643. ** Fast-track n = 1,565, non-fast-track n = 9,408.]

Between-Group Comparisons
4) Note where non-programmatic differences exist between cohort 1 and cohort 2, if observed
Chaffey fast-track example:
 Selecting for differences of d = .25 or higher, fast-track and non-fast-track students differed in three areas: first-generation college status, term units attempted, and term units earned

Between-Group Comparisons
5) Conduct analyses to compare cohort 1 and cohort 2 performance outcomes, controlling for observed pre-existing differences between groups
Chaffey fast-track example:
 Calculate a partial correlation to measure the relationship between cohort group and course success while "controlling" for the effects of first-generation status and units attempted (not units completed, because it is too highly correlated with units attempted)
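The partial correlation can be sketched from first principles (Python; the cohort, success, and units values below are hypothetical illustrations, not Chaffey data):

```python
import math

def pearson_r(x, y):
    """Zero-order Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_r(x, y, z):
    """First-order partial correlation between x and y, controlling for z."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical records: cohort flag (1 = fast-track), course success rate,
# and term units attempted (the control variable).
cohort  = [1, 1, 1, 1, 0, 0, 0, 0]
success = [0.9, 0.8, 0.7, 0.9, 0.6, 0.7, 0.5, 0.6]
units   = [12, 9, 12, 15, 6, 9, 6, 6]

print(round(pearson_r(cohort, success), 2))          # zero-order relationship
print(round(partial_r(cohort, success, units), 2))   # after controlling for units
```

In this toy data, controlling for units attempted shrinks the cohort-success correlation (here from about .83 to .39), which is exactly the sense in which the partial r strips out pre-existing group differences.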

Between-Group Comparisons

Correlates of Course Success among Students Earning a GOR in Fall 2011 (N = 19,431)

[Slide table, partially lost in extraction: zero-order r, partial r, and effect size |d| for cohort group, term units attempted*, and first-generation status*; the coefficient values did not survive. * p < .01]

Cohort Comparison Conclusions
 Students who earned at least one GOR each in fast-track and full-term courses in fall 2011 demonstrated statistically significantly higher course success rates in fast-track courses than in full-term courses. These findings, however, were not judged practically significant: the samples were large and the effect size values small.
 Students who earned at least one GOR in a fast-track course in fall 2011 demonstrated course success rates that were neither statistically significantly nor practically different from those of students who did not earn any GORs in fast-track courses in fall 2011.