1
What determines student satisfaction with university subjects? A choice-based approach
Twan Huybers, Jordan Louviere and Towhidul Islam
Seminar, Institute for Choice, UniSA, North Sydney, 23 June 2014
2
Overview
1. Introduction (student perceptions of teaching)
2. Study details (study design, data)
3. Findings (ratings instrument, choice experiment)
4. Conclusion
3
1. Introduction
- Higher education practice: widespread use of student perceptions of subjects/teaching
- Scholarly research, (contentious) issues: formative vs. summative use; effects of grades, class size, etc.; teaching "effectiveness"/"quality"; value of student opinions
- Student satisfaction: the student as a customer? satisfaction vs. effectiveness
- Overall summary item in evaluation instruments (incl. the CEQ)
4
The contribution of this study is methodological:
- Use of a DCE vs. a ratings method (response styles)
- Use of a DCE in student evaluation: NOT an alternative to classroom evaluation exercises (although BWS Case 1 could be)
- Instead, the DCE serves as a complementary approach
5
2. Study details
- Evaluation items used in the study: wording of the 10 items derived from descriptions in Australian university student evaluation instruments
- Items cover the subject and the teaching of the subject
6
Subject and teaching items used in the study
7
Evaluation items used in the study:
- Wording of the 10 items derived from descriptions in 14 student evaluation instruments
- Covers the subject and the teaching of the subject
- Possible confounds in the descriptions: teaching and learning; methods and activities
- Reflects evaluation practice
- Same items used for the ratings and the DCE
8
Two survey parts:
- Evaluation instrument (rating scales) ("instrument"); and
- Evaluation experiment (choices in a DCE) ("experiment")
We controlled for the order of appearance of the instrument and the experiment, and for respondent focus: 4 versions of the survey in the study design
9
Study design
10
PureProfile panel: 320 respondents randomly assigned to the 4 study versions, December 2010
Participant screening:
- student at an Australian-based university during the previous semester
- completed at least two university subjects (classes) during that semester (to allow comparison between at least two subjects in the instrument)
11
Instrument:
- Names of all subjects taken in the previous semester
- Most satisfactory ("best") and least satisfactory ("worst") subject nominated
- Each attribute for the "best" and "worst" subjects rated on a five-point scale from -2 to +2 (Strongly Disagree, Disagree, Neither Disagree nor Agree, Agree, Strongly Agree)
12
Experiment:
- Pairs of hypothetical subjects described by rating scale categories as attribute levels (range -2 to +2)
- Ratings assumed to be the respondent's own ratings
- Each participant evaluated 20 pairs:
  - 8 pairs: OMEP from 4^10 (8 blocks from the 64 runs)
  - 12 pairs: OMEP from 2^10 (all 12 runs)
- 4-level OMEP uses levels -2, -1, +1 and +2; 2-level OMEP uses -2 and +2
- Subject A had constant, "neutral" ratings descriptions
- Subject B ratings varied as per the experimental design (a sketch of constructing the 12-run, 2-level plan follows below)
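For illustration, a 12-run, 2-level orthogonal main-effects plan can be built as a Plackett-Burman design. The slides do not say which catalogue design the authors used, so this is only a sketch, and the mapping of the ±1 codes to the extreme rating levels -2/+2 is an assumption.

```python
import numpy as np

# Standard 12-run Plackett-Burman generator row (supports 11 two-level columns).
GEN = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

def plackett_burman_12() -> np.ndarray:
    """Build the 12-run PB design: 11 cyclic shifts of GEN plus a row of -1s."""
    rows = [np.roll(GEN, k) for k in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.vstack(rows)

# Use the first 10 columns for the 10 evaluation items and map the +/-1 codes
# to the extreme rating levels -2 and +2 (assumed mapping for this sketch).
design = plackett_burman_12()[:, :10]
subject_b_levels = np.where(design == 1, 2, -2)
print(subject_b_levels.shape)  # (12, 10): 12 choice pairs x 10 items
```

Each row then describes Subject B in one choice pair, while Subject A stays at the constant neutral description.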
14
University student characteristics: sample and Australia
15
3. Findings
- ANOVA: equal means across the three study versions, so data pooled
- Binary logistic regression: Best (1) vs. Worst (0) subjects as the DV, item ratings as the IVs (a sketch follows below)
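A minimal sketch of the best/worst logistic regression, assuming a tidy data file with hypothetical column names (item1..item10 for the -2..+2 ratings, best for the 1/0 outcome):

```python
import pandas as pd
import statsmodels.api as sm

# One row per (respondent, subject): 'best' = 1 for the "best" subject,
# 0 for the "worst"; item1..item10 hold the ratings. Names are hypothetical.
df = pd.read_csv("instrument_ratings.csv")

items = [f"item{i}" for i in range(1, 11)]
X = sm.add_constant(df[items])
model = sm.Logit(df["best"], X).fit()
print(model.summary())  # item coefficients show which ratings discriminate
```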
17
Instrument, best vs. worst subject:
- Four items discriminate
- One item has a counter-intuitive sign
- High correlation between ratings (for Best, Worst and Best minus Worst)
18
Experiment:
- Responses from 12 individuals deleted (always chose A or always chose B)
- Mean choice proportion for each choice option in each pair, for each of the three study versions (for the common set of 12 pairs): high correlation with the pooled sample proportions (≈ 0.94), so study versions pooled (a sketch of this check follows below)
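The pooling check can be reproduced in outline by computing per-pair choice proportions within each version and correlating them with the pooled proportions; the file and column names below are hypothetical:

```python
import pandas as pd

# One row per (respondent, version, pair_id) with chose_b in {0, 1} for the
# 12 choice pairs common to all study versions.
choices = pd.read_csv("experiment_choices.csv")

# Mean choice proportion for option B in each pair, within each version.
by_version = choices.pivot_table(index="pair_id", columns="version",
                                 values="chose_b", aggfunc="mean")
pooled = choices.groupby("pair_id")["chose_b"].mean()

# Correlate each version's proportions with the pooled sample proportions.
print(by_version.corrwith(pooled))  # values near 0.94 would support pooling
```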
19
Conditional binary logit estimation:
- First: 4-level linear vs. 4-level non-linear (effects-coded) specification; LR test: no statistical difference, so the 2-level and 4-level designs pooled
- Conditional logit fitted to all 20 pairs from 228 respondents
- Model fit and prediction accuracy (in-sample, out-of-sample): comparing, for each choice option in each pair, the mean choice proportion with the predicted choice probability
(a sketch of the estimation and LR test follows below)
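Because Subject A is held at a constant neutral description, the conditional logit for a binary pair reduces to a logit on the attribute differences. A sketch under that reading, with hypothetical input files, including a likelihood-ratio test of the linear against the effects-coded specification:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# X_a, X_b: (n_obs, 10) item levels for subjects A and B in each pair;
# y = 1 if subject B was chosen. 4560 = 228 respondents x 20 pairs.
X_a = np.zeros((4560, 10))             # A held at the neutral rating 0
X_b = np.load("subject_b_levels.npy")  # hypothetical saved design levels
y = np.load("choices.npy")

# Linear-in-levels specification: logit on attribute differences, no constant.
linear = sm.Logit(y, X_b - X_a).fit()

# Effects-coded (non-linear) version: each 4-level item replaced by effects
# codes, differenced the same way (hypothetical precomputed array).
X_fx = np.load("effects_coded_diffs.npy")
nonlinear = sm.Logit(y, X_fx).fit()

# Likelihood-ratio test of linear vs. effects-coded specification.
lr = 2 * (nonlinear.llf - linear.llf)
df_diff = nonlinear.df_model - linear.df_model
print("LR test p-value:", stats.chi2.sf(lr, df_diff))
```

Fit can then be checked by comparing, per pair, the mean observed choice proportion with the mean predicted probability from `linear.predict()`.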
24
- All item parameter estimates discriminate with respect to satisfaction
- Most important to student satisfaction: 'the subject was challenging and interesting', closely followed by 'the teacher communicated and explained clearly in face-to-face, online, written and other formats'
- Some results similar to Denson et al. (2010) (final 'overall satisfaction' item in a SET instrument as the DV, explained by subject ratings), in particular: the "challenging and interesting nature of a subject" (most important) and the "opportunities for active student participation" item (least important)
25
Instrument vs. Experiment (approximation): R^2 between the two sets of parameter estimates = 0.18
- Overall: the experiment better distinguishes the relative contribution of the items, i.e. better "diagnostic power"
- Note: higher number of observations in the experiment
(a sketch of the R^2 comparison follows below)
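The comparison reads as the squared Pearson correlation between the two vectors of item estimates; a minimal sketch with placeholder values:

```python
import numpy as np

# Instrument: logistic-regression item estimates; experiment: conditional-
# logit estimates. The vectors below are placeholders for the 10 estimates.
rng = np.random.default_rng(0)
beta_instrument = rng.normal(size=10)  # placeholder values
beta_experiment = rng.normal(size=10)  # placeholder values

r = np.corrcoef(beta_instrument, beta_experiment)[0, 1]
print("R^2 between estimate sets:", r ** 2)  # reported as ~0.18 in the talk
```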
26
Scale-Adjusted Latent Class Models (SALCM):
- Identify preference heterogeneity (with co-variates) and variance heterogeneity simultaneously
- BIC used for model selection
- SALCM for the 12 common pairs (2-level): one preference class; two scale classes, with male students more variable in their choices than females
- SALCM for the master set of 64 pairs (4-level): similar results
(a sketch of a two-scale-class likelihood follows below)
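A minimal sketch of a SALCM likelihood with one preference class and two scale classes whose membership depends on gender, fitted by direct maximization. This is an illustrative simplification, not the authors' estimation code, and all input arrays are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# dx: (n_resp, n_pairs, 10) attribute differences (B minus A); y: choices of
# B in {0, 1}; male: (n_resp,) indicator. All inputs are hypothetical.
dx = np.load("dx.npy")
y = np.load("y.npy")
male = np.load("male.npy")

n_resp, n_pairs, n_items = dx.shape

def neg_loglik(theta):
    beta = theta[:n_items]        # shared preference weights (one class)
    log_lam2 = theta[n_items]     # scale of class 2 (class 1 fixed at 1)
    g0, g1 = theta[n_items + 1:]  # membership logit: intercept + gender
    lam = np.array([1.0, np.exp(log_lam2)])
    pi2 = expit(g0 + g1 * male)   # P(scale class 2) per respondent
    pis = np.stack([1 - pi2, pi2], axis=1)  # (n_resp, 2)
    util = dx @ beta                        # (n_resp, n_pairs)
    mix = np.zeros(n_resp)
    for s in range(2):
        p = np.clip(expit(lam[s] * util), 1e-12, 1 - 1e-12)
        lls = np.sum(np.where(y == 1, np.log(p), np.log(1 - p)), axis=1)
        mix += pis[:, s] * np.exp(lls)      # class-weighted likelihoods
    return -np.sum(np.log(mix))

theta0 = np.zeros(n_items + 3)
fit = minimize(neg_loglik, theta0, method="BFGS")
print(fit.x[:n_items])  # preference estimates; compare BIC across models
```

Model selection would proceed by refitting with different class counts and comparing BIC values, as the slide describes.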
28
SALCM, 12 common pairs, choice proportions vs. choice probabilities
30
SALCM, master pairs, choice proportions vs. choice probabilities
31
Individual-level model using WLS:
- Empirical distribution of individual-level item parameter estimates
- Using the 12 pairs from the common design
- Small number of observations per respondent for estimation
- Quite a few negative parameter estimates
(a sketch follows below)
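One plausible reading of the individual-level WLS step is a per-respondent weighted linear probability model on the 12 common pairs; the one-step p(1-p) reweighting below is an assumption, as the slides do not spell out the weighting scheme:

```python
import numpy as np
import statsmodels.api as sm

# dx: (n_resp, 12, 10) attribute differences for the common 2-level pairs;
# y: (n_resp, 12) choices of subject B. Inputs are hypothetical.
dx = np.load("dx_common.npy")
y = np.load("y_common.npy")

betas = []
for i in range(dx.shape[0]):
    ols = sm.OLS(y[i], dx[i]).fit()            # linear probability start
    p = np.clip(ols.fittedvalues, 0.05, 0.95)  # keep weights well-defined
    wls = sm.WLS(y[i], dx[i], weights=1 / (p * (1 - p))).fit()
    betas.append(wls.params)

betas = np.array(betas)          # empirical distribution across respondents
print((betas < 0).mean(axis=0))  # share of negative estimates per item
```

With only 12 observations against 10 parameters per respondent, noisy and negative estimates are expected, which matches the slide's observation.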
32
WLS individual estimates; descriptive statistics (n=228)
33
Close correspondence between the results of the conditional logit, the SALCM on the 64 master pairs (4-level), the WLS estimates and, slightly less so, the SALCM on the 12 common pairs (2-level)
34
4. Conclusion
- Ratings instrument and choice experiment used to establish the individual contributions of subject aspects to satisfaction
- The experiment provided greater discriminatory power
- 'Challenging and interesting' and 'Teacher communication' are major drivers of satisfaction
- 'Feedback' and 'Student participation' among the least important
35
Methodological contribution to the higher education literature:
- Novel application of the DCE to student evaluation
- Combine quantitative results with qualitative feedback
Limitations/further research:
- Relatively small sample size
- Potential confounding in the items
- Application at the university program level