Amir Ghaisi, Grigorios Fountas, Panos Anastasopoulos, Fred Mannering


Statistical Assessment of Peer Opinions in Higher Education Rankings: The Case of U.S. Engineering Graduate Programs
Journal of Applied Research in Higher Education
Amir Ghaisi, Grigorios Fountas, Panos Anastasopoulos, Fred Mannering

Importance of University Rankings
- Students seek admission to highly ranked universities
- Universities use rankings to attract the best students and faculty, instill alumni pride, improve fundraising, etc.
- University funding from governmental and private entities can be influenced by rankings
- Because of the above, universities develop policies to support and improve their rankings

USNews rankings
- Easily the most widely followed university rankings in the U.S.
- Rankings are based on quantitative data (research expenditures, number of faculty, test scores of students, etc.), peer and recruiter opinions, and predetermined weightings
- In USNews rankings of engineering graduate programs, peer opinions account for 25% of the total ranking score

Peer assessments
- Individuals from universities (typically engineering deans) are asked to rate the quality of specific programs on a scale from 1 (marginal) to 5 (distinguished)
- Peer scores are a function of experiences with faculty and graduates of the schools being evaluated, the school's overall reputation, etc.
- Peer impressions may also be influenced by exposure to past rankings, and by the factual information used to support those past rankings

Objective of this Research
- Provide insight into the determinants of peer rankings in higher education by developing a statistical model of the average peer assessment scores of U.S. colleges of engineering, using the 2018 engineering graduate program evaluations provided in USNews
- Findings will help guide university policies that may be used to improve peer impressions

Methodological Approach
- Engineering graduate programs are peer-rated on a scale of 1 (marginal) to 5 (distinguished), and USNews provides the resulting average peer assessment scores in tenths (such as 2.6, 2.7, etc.)
- Thus the peer score is reported as an average that is a continuous variable bounded by 1 and 5
- This suggests that a lower- and upper-censored Tobit regression would be appropriate, but statistical estimations indicated an uncensored regression was sufficient, since few observations are near the censoring boundaries
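The censoring check described above can be sketched in a few lines; the score vector here is hypothetical, invented purely for illustration:

```python
import numpy as np

# Hypothetical average peer scores on the 1-to-5 scale, reported in tenths
scores = np.array([1.9, 2.1, 2.6, 2.7, 3.0, 3.4, 3.8, 4.1, 4.4, 4.6])

# A censored (Tobit) model matters only if scores pile up at the bounds;
# here no observation sits at 1.0 or 5.0, so an uncensored regression suffices
share_at_lower = np.mean(scores <= 1.0)
share_at_upper = np.mean(scores >= 5.0)
print(share_at_lower, share_at_upper)   # 0.0 0.0
```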

Methodological Approach (cont.)
- Using a standard uncensored regression:

  y_n = β X_n + ε_n

  where y_n is the dependent variable (the average graduate program peer assessment score for university n), β is a vector of estimable parameters, X_n is a vector of explanatory variables for university n, and ε_n is a normally and independently distributed error term with zero mean and constant variance σ².
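A minimal sketch of fitting such an uncensored regression on synthetic data (all numbers here are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 139, 3                       # 139 universities, 3 illustrative explanatory variables
X = rng.normal(size=(n, k))
beta_true = np.array([0.5, -0.2, 0.3])
y = 2.5 + X @ beta_true + rng.normal(scale=0.1, size=n)   # y_n = beta X_n + eps_n

# Ordinary least squares (with an intercept column prepended)
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)

print(np.round(beta_hat, 2))        # recovers roughly [2.5, 0.5, -0.2, 0.3]
```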

Unobserved Heterogeneity
- To account for heterogeneity (unobserved factors that may vary across universities), we allow for the possibility that each university may have its own parameter for one or more of the explanatory variables
- Estimable parameters are thus written as:

  β_n = β + φ_n

  where β is the mean of the parameter estimate and φ_n is a randomly distributed term (for example, a normally distributed term with mean zero and variance σ²)

Unobserved Heterogeneity (cont.)
- We also allow for the possibility that the mean and variance of the parameter are functions of explanatory variables (Mannering, 2018):

  β_n = β + Θ Z_n + σ exp(ω Z_n) φ_n

  where Z_n is a vector of explanatory variables capturing heterogeneity in the parameter's mean and variance, and Θ and ω are corresponding vectors of estimable parameters
- Models are estimated using simulated maximum likelihood, with 1,000 Halton draws for numerical integration
- Observation-specific mean parameters are estimated using the simulated Bayesian approach proposed by Greene (2004)
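The simulation step can be sketched as below for a single random parameter. The data and "true" values are invented for illustration, and scipy's quasi-Monte Carlo module supplies the Halton draws:

```python
import numpy as np
from scipy.stats import norm, qmc

# Illustrative data: one explanatory variable with a random parameter
rng = np.random.default_rng(1)
n = 139
x = rng.normal(size=n)
beta_mean, beta_sd, eps_sd = 0.5, 0.2, 0.1          # assumed true values
beta_n = beta_mean + beta_sd * rng.normal(size=n)   # university-specific parameters
y = beta_n * x + eps_sd * rng.normal(size=n)

# 1,000 scrambled Halton draws in (0,1), mapped to standard normals for phi_n
halton = qmc.Halton(d=1, scramble=True, seed=42)
phi = norm.ppf(halton.random(1000).ravel())

def simulated_loglik(beta, sd_beta, sd_eps):
    # Average the conditional likelihood over the Halton draws
    # (quasi-Monte Carlo approximation of the integral over phi_n)
    b = beta + sd_beta * phi                                        # (1000,)
    dens = norm.pdf(y[:, None], loc=np.outer(x, b), scale=sd_eps)   # (n, 1000)
    return np.log(dens.mean(axis=1)).sum()

# The simulated log-likelihood peaks near the true mean parameter
ll_true = simulated_loglik(0.5, 0.2, 0.1)
ll_off = simulated_loglik(0.0, 0.2, 0.1)
print(ll_true > ll_off)
```

In a full estimation, `simulated_loglik` would be handed to a numerical optimizer over (β, σ_β, σ_ε); this sketch only shows the Halton-based integration.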

Data
- Peer assessment scores are gathered from USNews' 2018 rating of U.S. engineering graduate programs
- USNews-related data provided with their 2018 graduate program rankings, including:
  - Engineering graduate student enrollment
  - Total number of engineering faculty
  - Number of faculty in the National Academy of Engineering
  - Average math Graduate Record Exam score of students
  - Percent of engineering applications accepted
  - Number of doctoral students graduated in the past year
  - Research expenditures per faculty member

Data (cont.)
- Additional data:
  - University membership in the American Association of Universities (AAU)
  - Whether the university is public or private
  - National merit scholars admitted
  - Incoming students' average SAT scores
  - Number of post-doctoral appointees
  - Citation information for engineering faculty and university faculty overall, from Google Scholar: total citations, number of documents, h-index, number of papers cited 10 times or more, etc.

Estimation Results
- 139 universities
- 13 statistically significant variables (10 produce parameters that are fixed across observations and 3 that vary across observations)
- The random parameters linear regression produced an R-squared value of 0.993 and an adjusted R-squared value of 0.992 (which accounts for the number of estimated parameters in the model)
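The adjusted R-squared can be checked from the reported R-squared; taking p = 13 slope parameters (the 13 significant variables; an assumption, since random parameters add further estimated terms) reproduces the reported 0.992:

```python
# Adjusted R^2 penalizes R^2 for the number of estimated parameters:
#   R^2_adj = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
n, p = 139, 13           # 139 universities; p = 13 is an assumption
r2 = 0.993
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(r2_adj, 3))  # → 0.992
```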

Observed vs. Model-Predicted

College-related attributes
Statistically significant variables and their effect on mean peer assessment scores:

Graduate Record Examination (GRE) scores of incoming graduate students
- Increases the average peer evaluation
- An 8-point increase in the math GRE scores of graduate students would result in roughly a 0.1-point increase in a college of engineering's average peer assessment

Graduate Admissions
- Universities that admit more than 31% of their graduate applicants tend to have average peer assessment scores that are 0.104 lower than universities that admit 31% or less

Number of doctoral students graduated
- The parameter estimate for this variable suggests that graduating 34 more doctoral students would result in a 0.1 increase in the average peer assessment score

Number of faculty
- The total number of faculty was also found to positively influence average peer assessment scores, with a larger faculty size more likely to result in a higher score
- For public universities, the model's parameter estimate suggests that nearly 60 faculty would need to be added for a 0.1 increase in the average peer assessment score
- For private universities, the parameter estimate suggests that about 35 faculty would need to be added for a 0.1 increase
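The per-unit slopes implied by the "units needed for a 0.1-point gain" statements on the last few slides can be backed out directly:

```python
# Implied regression slopes: 0.1-point peer-score gain divided by units required
implied_slopes = {
    "math GRE (per point)":             0.1 / 8,    # ~0.0125
    "doctoral graduates (per student)": 0.1 / 34,   # ~0.0029
    "faculty, public (per member)":     0.1 / 60,   # ~0.0017
    "faculty, private (per member)":    0.1 / 35,   # ~0.0029
}
for name, slope in implied_slopes.items():
    print(f"{name}: {slope:.4f}")
```

The comparison also makes the public/private contrast concrete: an additional faculty member moves the expected peer score nearly twice as much at a private university as at a public one.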

Faculty-size threshold
- The effect of having fewer than 200 faculty varies across universities
- Having fewer than 200 faculty has a negative effect for roughly 68% of the universities and a positive effect for 32%
- This dichotomy likely reflects the fact that the faculty effect is not just about size, but about quality as well

Graduate student enrollment per engineering faculty member
- Was found to have a negative effect on average peer assessment scores
- This variable produced a statistically significant random parameter
- While negative for all universities, the fact that the values vary across universities suggests that unobserved factors relating to student and faculty quality affect the influence this ratio has on average peer assessment

National Academy Members
- The percent of NAE members positively influences peer scores, but the effect is highly variable across universities
- The effect ranges from a 0.1 increase in the average peer assessment score for a 1.5% increase in NAE membership, to virtually no effect on the average peer assessment score
- The NAE effect is likely highly variable because NAE faculty are at different stages of their careers and because academic credentials vary among NAE members

Faculty h-index
- The Google Scholar h-index of the 10th most cited engineering faculty member has a positive effect on average peer assessment scores
- By definition, an author's h-index of h indicates that the author has published h papers, each of which has been cited at least h times
- While many colleges of engineering in the U.S. have been slow to adopt citation information, such information has a statistically significant effect on average peer assessment scores
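The h-index definition above translates directly into code (the citation list is a made-up example):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:          # the i-th most cited paper still has >= i citations
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # → 4 (four papers with at least 4 citations each)
```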

Overall university-related attributes
Statistically significant variables and their effect on mean peer assessment scores:

American Association of Universities (AAU) membership
- AAU membership improves a university's peer assessment score by 0.138, holding all else constant
- AAU membership requires that the university achieve a wide variety and high level of research and scholarly output

Highly-cited university faculty
- Universities with 5 or more faculty (across the university as a whole) having more than 50,000 Google Scholar citations enjoy higher peer rankings (average peer assessment scores 0.159 higher) than universities without as many such faculty
- This reflects both highly-cited faculty and the university's position in highly citable fields

Number of post-doctoral researchers
- The average number of post-doctoral researchers employed annually at the university as a whole (averaged over the past 15 years) positively influences the average peer assessment score of the college of engineering
- The number of post-doctoral researchers is a factor in AAU membership and other measures of university reputation

SAT scores of incoming students
- The past 10-year average of math plus verbal (1600 maximum) Scholastic Aptitude Test (SAT) scores of students entering the university positively influences the average peer assessment scores of engineering programs
- A 150-point increase in the average SAT score of all university students results in a 0.1 increase in the average peer assessment score of the university's college of engineering

Summary and Conclusions
- This paper explored the factors influencing the USNews average peer assessment scores of U.S. engineering programs by estimating a random parameters linear regression using data extracted from a number of sources
- Estimation results show that both college and university factors influence peer assessment scores, and that the influence of some of these factors varies across universities

Summary and Conclusions (cont.)
- Research funding itself was found to be statistically insignificant (although it is needed to generate many of the variables found to be significant)
- This is a wake-up call to the many universities that blindly pursue research dollars without carefully considering the scholarly productivity those dollars can potentially produce

Summary and Conclusions (cont.)
- U.S. engineering colleges have tended to use promotional brochures, website enhancements, and other means to influence how peers assess them
- But our statistical analysis suggests that peer assessments are rooted much deeper: in the measurable accomplishments of the college's faculty, the quality of its students, and the quantifiable achievements of the university as a whole