What determines student satisfaction with university subjects? A choice-based approach
Twan Huybers, Jordan Louviere and Towhidul Islam
Seminar, Institute for Choice, UniSA, North Sydney, 23 June 2014

Overview
1. Introduction (student perceptions of teaching)
2. Study details (study design, data)
3. Findings (ratings instrument, choice experiment)
4. Conclusion

1. Introduction
- Higher education practice: widespread use of student perceptions of subjects/teaching
- Scholarly research, (contentious) issues: formative vs. summative; effects of grades, class size, etc.; teaching "effectiveness"/"quality"; value of student opinions
- Student satisfaction:
  - Student as a customer?
  - Satisfaction vs. effectiveness
  - Overall summary item in evaluation instruments (incl. CEQ)

Contribution of the study is methodological:
- Use of DCE vs. ratings method (response styles)
- Use of DCE in student evaluation: NOT an alternative to classroom evaluation exercises (although BWS Case 1 could be)
- Instead, DCE as a complementary approach

2. Study details
Evaluation items used in the study:
- Wording of 10 items derived from descriptions in Australian university student evaluation instruments
- Subject and teaching of the subject

Subject and teaching items used in the study

Evaluation items used in the study:
- Wording of 10 items derived from descriptions in 14 student evaluation instruments
- Covers subject and teaching of the subject
- Possible confounds in descriptions: teaching and learning; methods and activities
- Reflects evaluation practice
- Same for ratings and DCE

Two survey parts:
- Evaluation instrument (rating scales) ("instrument"); and
- Evaluation experiment (choices in a DCE) ("experiment")
We controlled for order of appearance of the instrument and the experiment, and for respondent focus: 4 versions of the survey in the study design

Study design

PureProfile panel: 320 respondents randomly assigned to the 4 study versions, December 2010
Participant screening:
- student at an Australian-based university during the previous semester
- completed at least two university subjects (classes) during that semester (to allow comparison between at least two subjects in the instrument)

Instrument:
- Names of all subjects in previous semester
- Most satisfactory ("best") and least satisfactory ("worst") subject nominated
- Each attribute for the "best" and "worst" subjects rated on a five-point scale: -2 to +2 (SD, D, neither D nor A, A, SA)

Experiment:
- Pairs of hypothetical subjects described by rating scale categories as attribute levels (range -2 to +2)
- Ratings assumed to be own ratings
- Each participant evaluated 20 pairs:
  - 8 pairs: OMEP from 4^10 (8 blocks from the 64 runs)
  - 12 pairs: OMEP from 2^10 (all 12 runs)
- 4-level OMEP levels: -2, -1, +1 and +2
- 2-level OMEP levels: -2, +2
- Subject A had constant, "neutral" ratings descriptions
- Subject B ratings as per the above experimental design
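The 12-run, 2-level orthogonal main-effects plan can be illustrated concretely: a Plackett-Burman construction yields exactly such a plan for up to 11 two-level factors in 12 runs. The sketch below is a stand-in (the slides do not show the actual design matrix); it builds the design, checks orthogonality of the main effects, and maps the two-level codes to the extreme rating levels -2 and +2:

```python
import numpy as np

# Plackett-Burman generator row for N = 12 runs
g = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

# 11 cyclic shifts of the generator plus a row of all -1 give the 12 runs
rows = [np.roll(g, i) for i in range(11)]
rows.append(-np.ones(11, dtype=int))
X = np.array(rows)                       # 12 x 11, entries in {-1, +1}

# Main effects are mutually orthogonal: X'X = 12 * I
assert np.array_equal(X.T @ X, 12 * np.eye(11, dtype=int))

# Use 10 of the 11 columns for the 10 evaluation items, and map the
# two-level codes {-1, +1} to the extreme rating levels {-2, +2}
design = 2 * X[:, :10]
print(design.shape)                      # (12, 10)
```

Each of the 12 rows would describe one hypothetical Subject B profile, paired against the constant "neutral" Subject A.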

University student characteristics: sample and Australia

3. Findings
- ANOVA: equal means for the three study versions, so pooled
- Binary logistic regression: Best (1) and Worst (0) subjects as DV, item ratings as IVs
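The best/worst regression can be sketched on synthetic data (all numbers below are invented; the study's ratings are not reproduced in the slides). With Best = 1 / Worst = 0 as the dependent variable and the ten item ratings as regressors, a plain Newton-Raphson logistic fit is enough:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: 200 best/worst observations, 10 item ratings in -2..+2
X = rng.integers(-2, 3, size=(200, 10)).astype(float)
beta_true = np.array([0.8, 0.6, 0.0, 0.4, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0])
p = 1 / (1 + np.exp(-(X @ beta_true)))
y = (rng.random(200) < p).astype(float)   # 1 = "best" subject, 0 = "worst"

# Newton-Raphson on the logistic log-likelihood; the small ridge term
# keeps the Hessian invertible if some fitted probabilities saturate
beta = np.zeros(10)
for _ in range(20):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1 - mu))[:, None]) + 1e-4 * np.eye(10)
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))   # estimates should track beta_true's pattern
```

The items whose coefficients differ reliably from zero are the ones that "discriminate" between best and worst subjects in the slide's sense.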

Instrument, best vs. worst subject:
- Four items discriminate
- One item with counter-intuitive sign
- High correlation between ratings (for Best, Worst and Best minus Worst)

Experiment:
- Responses from 12 individuals deleted (always chose A or always chose B)
- Mean choice proportion for each choice option in each pair, for each of the three study versions (for common set of 12 pairs): high correlation with sample proportions (≈ 0.94) → study versions pooled

Conditional binary logit estimation:
- First: 4-level linear vs. 4-level non-linear (effects coded); LR test: no statistical difference, so 2-level and 4-level designs pooled
- Conditional logit for all 20 pairs of 228 respondents
- Model fit and prediction accuracy (in-sample, out-of-sample): comparing, for each choice option in each pair, the mean choice proportion with the predicted choice probability
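Because Subject A was held at neutral ratings, the conditional binary logit for each pair collapses to a binary logit on Subject B's profile: P(choose B) = 1/(1 + exp(-x_B'β)). The sketch below uses simulated data (coefficient values and sample layout are assumptions, not the study's); it fits that model and then runs the same fit check the slide describes, comparing each pair's mean choice proportion with its predicted choice probability:

```python
import numpy as np

rng = np.random.default_rng(1)

n_resp, n_pairs, n_items = 228, 12, 10
profiles = rng.choice([-2.0, 2.0], size=(n_pairs, n_items))   # Subject B profiles
beta_true = np.array([0.3, 0.25, 0.05, 0.2, 0.05, 0.15, 0.0, 0.1, 0.0, 0.05])

# Each respondent sees all 12 pairs and chooses B with logistic probability
p_true = 1 / (1 + np.exp(-(profiles @ beta_true)))
choices = (rng.random((n_resp, n_pairs)) < p_true).astype(float)

# Stack into one binary logit on the (B minus neutral-A) attribute differences
Xs = np.tile(profiles, (n_resp, 1))
ys = choices.ravel()

beta = np.zeros(n_items)
for _ in range(20):
    mu = 1 / (1 + np.exp(-(Xs @ beta)))
    grad = Xs.T @ (ys - mu)
    hess = Xs.T @ (Xs * (mu * (1 - mu))[:, None]) + 1e-4 * np.eye(n_items)
    beta += np.linalg.solve(hess, grad)

# Fit check: observed choice proportion vs. predicted probability, per pair
observed = choices.mean(axis=0)
predicted = 1 / (1 + np.exp(-(profiles @ beta)))
print(np.corrcoef(observed, predicted)[0, 1])
```

A high proportion-vs-probability correlation is the in-sample analogue of the prediction-accuracy check on the slide.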

All item parameter estimates discriminate with respect to satisfaction
- Most important to student satisfaction: 'the subject was challenging and interesting', closely followed by 'the teacher communicated and explained clearly in face-to-face, online, written and other formats'
- Some results similar to Denson et al. (2010) (final 'overall satisfaction' item in SET instrument as DV explained by subject ratings), in particular: the "challenging and interesting nature of a subject" (most important) and the "opportunities for active student participation" item (least important)

Instrument vs. Experiment (approximation):
- R² of parameter estimates = 0.18
- Overall: experiment better distinguishes the relative contribution of items, i.e. better "diagnostic power"
- Note: higher number of observations in experiment
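That R² is just the squared correlation between the two vectors of item parameter estimates. With hypothetical numbers (the study's actual estimates are not reproduced here):

```python
import numpy as np

# Hypothetical item parameter estimates for the 10 items; not the study's values
instrument = np.array([0.45, 0.10, 0.32, -0.05, 0.21, 0.18, 0.09, 0.27, 0.14, 0.02])
experiment = np.array([0.62, 0.05, 0.18, 0.11, 0.40, 0.07, 0.02, 0.15, 0.33, 0.09])

r = np.corrcoef(instrument, experiment)[0, 1]
print(round(r ** 2, 2))   # a low value signals weak agreement between the methods
```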

Scale-Adjusted Latent Class Models (SALCM):
- Identifying preference heterogeneity (co-variates) and variance heterogeneity simultaneously
- BIC used for model selection
- SALCM for 12 common pairs (2-level): one preference class; two scale classes: male students more variable in their choices than females
- SALCM for master set of 64 pairs (4-level): similar results

SALCM, 12 common pairs, choice proportions vs. choice probabilities

SALCM, master pairs, choice proportions vs. choice probabilities

Individual-level model using WLS:
- Empirical distribution of individual-level item parameter estimates
- Using 12 pairs from common design
- Small sample size for estimation
- Quite a few negative parameter estimates
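One way to read the individual-level step: fit a separate regression for each respondent's 12 choices on the pair profiles, then inspect the empirical distribution of the resulting coefficients. The sketch below uses simulated choices and plain least squares on a linear-probability approximation (the authors' exact weighting scheme is not given in the slides), and reproduces the qualitative point that many individual-level estimates flip sign when only 12 observations are available per person:

```python
import numpy as np

rng = np.random.default_rng(2)

n_resp, n_pairs, n_items = 228, 12, 10
profiles = rng.choice([-2.0, 2.0], size=(n_pairs, n_items))   # common 12-pair design
beta_true = np.array([0.5, 0.4, 0.1, 0.3, 0.1, 0.2, 0.0, 0.1, 0.0, 0.1])
p_true = 1 / (1 + np.exp(-(profiles @ beta_true)))

est = np.empty((n_resp, n_items))
for i in range(n_resp):
    y = (rng.random(n_pairs) < p_true).astype(float)          # respondent i's 12 choices
    est[i], *_ = np.linalg.lstsq(profiles, y - 0.5, rcond=None)

print(est.mean(axis=0).round(2))     # distribution centre tracks beta_true's pattern
print(round((est < 0).mean(), 2))    # sizeable share of negative estimates
```

With only 12 noisy binary choices per person, individual estimates are highly variable even when the population-level pattern is clear, which is consistent with the slide's remark about negative parameter estimates.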

WLS individual estimates; descriptive statistics (n=228)

Close correspondence between the results of Cond. Logit, SALCM-64 pairs (4-level), WLS and (slightly less so) SALCM-12 pairs (2-level)

4. Conclusion
- Ratings instrument and choice experiment used to establish individual contributions of subject aspects to satisfaction
- The experiment provided greater discriminatory power
- 'Challenging and interesting' and 'Teacher communication' major drivers of satisfaction
- 'Feedback' and 'Student participation' among the least important ones

Methodological contribution to higher education literature:
- Novel application of DCE to student evaluation
- Combine quantitative results with qualitative feedback
Limitations/further research:
- Relatively small sample size
- Potential confounding in items
- Application at university program level
