1 Efficiency as a Measure of Knowledge Production of Research Universities
Amy W. Apon*, Linh B. Ngo*, Michael E. Payne*, Paul W. Wilson+
School of Computing* and Department of Economics+, Clemson University

2 Content
Motivation
Methodology
Data Description
Case Studies
Conclusion

3 Motivation
Recent economic and social events have motivated universities and federal agencies to seek additional measures that provide insight into the return on investment of research.

4 Motivation
Traditional measures of productivity:
Expenditures, counts of publications, citations, student enrollment, retention, graduation, etc.
These may not be adequate for strategic decision making.
Traditional measures of institutions' research productivity:
Are primarily parametric
Often ignore the scale of operation

5 Research Question
What makes this institution more efficient in producing research?
What makes this group of institutions more efficient in producing research?
How do we show statistically that one group of institutions is more efficient than another?
To answer the first two questions, we must be able to provide evidence for the third.

6 Efficiency as a Measure
Using efficiency as a measure of knowledge production of universities:
Extends traditional metrics
Utilizes non-parametric statistical methods
Non-parametric estimation of the relative efficiency of production units
No endogeneity: we are not estimating a conditional mean function, because we are not working in a regression framework
Scale of operations is taken into consideration
Rigorous hypothesis testing
Serves as a straightforward measure of productivity
Lacks a classical theoretical foundation for hypothesis testing due to its non-parametric nature

7 Background
We define P as the set of feasible combinations of p inputs and q outputs, also called the production set.
For a given input, there exists a maximum feasible level of output (the concept of efficiency).
The efficiency score is an estimate relative to the true efficiency frontier.
Range: [0, 1]
[Figure: input-output plot of the production set, with the feasible set below the frontier and the infeasible set above it]
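This frontier idea is typically made concrete with a nonparametric estimator such as DEA. The sketch below is an illustrative, simplified estimator, not necessarily the exact one used in this work; the function name and toy data are hypothetical. It computes output-oriented, variable-returns-to-scale DEA scores by linear programming: a unit scores 1 when it lies on the estimated frontier and less than 1 when it operates below it.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """Output-oriented, variable-returns-to-scale DEA efficiency scores.

    X: (n, p) array of inputs, Y: (n, q) array of outputs.
    Returns scores in (0, 1]; 1 means the unit is on the estimated frontier.
    """
    n, p = X.shape
    q = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi.
        c = np.zeros(n + 1)
        c[0] = -1.0
        # Input constraints: sum_j lambda_j * x_j <= x_o (phi not involved).
        A_in = np.hstack([np.zeros((p, 1)), X.T])
        # Output constraints: phi * y_o - sum_j lambda_j * y_j <= 0.
        A_out = np.hstack([Y[o][:, None], -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([X[o], np.zeros(q)])
        # Convexity constraint (variable returns to scale): sum_j lambda_j = 1.
        A_eq = np.hstack([[0.0], np.ones(n)])[None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores[o] = 1.0 / res.x[0]  # Farrell output efficiency in (0, 1]
    return scores

# Toy data (hypothetical): three units, one input, one output.
# Units 1 and 2 lie on the frontier; unit 3 uses the input of unit 2
# but produces only the output of unit 1, so it is inefficient.
X = np.array([[1.0], [2.0], [2.0]])
Y = np.array([[1.0], [2.0], [1.0]])
print(dea_output_efficiency(X, Y))
```

Dropping the convexity constraint (the `A_eq` row) yields the constant-returns-to-scale version of the estimator, which is what the returns-to-scale test on a later slide compares against.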

8 Hypothesis Testing Procedure

9 Test for Convexity
Null hypothesis: the production set is convex
Alternative: the production set is not convex
[Figure: input-output plots contrasting a convex production set with a non-convex one, each showing the feasible and infeasible sets]

10 Test for Constant Returns to Scale
Null hypothesis: the production set exhibits constant returns to scale
Alternative: the production set exhibits variable returns to scale
[Figure: input-output plots contrasting a constant-returns-to-scale frontier with a variable-returns-to-scale frontier, each showing the feasible and infeasible sets]

11 Group Distribution Comparison
Test for Equivalent Means (EM):
Null hypothesis: μ1 = μ2
Alternative: μ1 > μ2
Test for First-Order Stochastic Dominance (SD) between the two efficiency distributions:
Null hypothesis: distribution 1 does not dominate distribution 2
Alternative: distribution 1 dominates distribution 2
[Presenter note: the order of groups 1 and 2 is important, as these tests are one-sided.]
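For intuition, the two one-sided comparisons can be mimicked with off-the-shelf routines. This is a simplified sketch on synthetic efficiency scores, not necessarily the procedure used in this work (tests on estimated efficiencies generally need corrections for estimation noise): a one-sided Welch t-test stands in for the EM test, and a one-sided two-sample Kolmogorov-Smirnov test stands in for the SD test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical efficiency scores in [0, 1] for two groups of institutions;
# group1 is drawn to be the more efficient group.
group1 = rng.beta(5, 3, size=120)
group2 = rng.beta(3, 5, size=50)

# EM sketch: one-sided Welch t-test of H0: mu1 = mu2 vs H1: mu1 > mu2.
em = stats.ttest_ind(group1, group2, equal_var=False, alternative="greater")

# SD sketch: one-sided two-sample Kolmogorov-Smirnov test.
# alternative="less" asks whether the empirical CDF of group1 lies below
# that of group2, i.e. whether group1 is stochastically larger (dominates).
sd = stats.ks_2samp(group1, group2, alternative="less")

print(f"EM p-value: {em.pvalue:.3g}")
print(f"SD p-value: {sd.pvalue:.3g}")
```

Because both tests are one-sided, swapping the roles of group1 and group2 asks a different question, which is why the slides report both orderings.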

12 Case Studies
University Level and Departmental Level
Grouping Categories:
EPSCoR vs. NonEPSCoR
Public vs. Private
Very High Research vs. High Research
"Has HPC" vs. "Does not have HPC"
[Presenter notes: we do not use the conclusions of the underlying studies, only the collected data; one might want to weight publication counts by quality, which is an area of future work.]

13 Hypotheses
Institutions from states with more federal funding (NonEPSCoR) will be more efficient than institutions from states with less federal funding (EPSCoR).
Private institutions will be more efficient than public institutions.
Institutions with very high research activity will be more efficient than institutions with high research activity.

14 University: Data Description
Aggregated data from:
NCSES Academic Institution Profiles
NSF WebCASPAR
Web of Science
Input: Faculty Count, Federal Expenditures
Output: PhD Graduates, Publication Counts

15 University
Test of Convexity: p = ; fail to reject the null hypothesis of convexity.
Test of Constant Returns to Scale: p = ; fail to reject the null hypothesis of constant returns to scale.
[Presenter note: a p-value below 0.1 is needed to reject the null hypothesis.]

16 University: EPSCoR vs NonEPSCoR
Count: EPSCoR 45, NonEPSCoR 118; Mean Efficiency: EPSCoR 0.325, NonEPSCoR 0.385
p-values for EM and SD tests:
Group 1 / Group 2 | EM | SD
EPSCoR / NonEPSCoR | 4.3×10^-26 | 0.993
NonEPSCoR / EPSCoR | 0.999 | --
[Presenter notes: the first set of EM/SD tests indicates that the distribution of efficiency scores for EPSCoR institutions does not dominate that for NonEPSCoR institutions, and the second set finds no evidence that the distribution for NonEPSCoR institutions is strictly greater than that for EPSCoR institutions; together these imply that NonEPSCoR institutions are at least as efficient as EPSCoR institutions. A p-value below 0.1 is needed to reject the null hypothesis.]

17 University: Public vs. Private
Count: Public 110, Private 53; Mean Efficiency: Public 0.396, Private 0.311
p-values for EM and SD tests:
Group 1 / Group 2 | EM | SD
Public / Private | 3.1×10^-86 | 0.011
Private / Public | 0.999 | --
[Presenter notes: the first set of EM/SD tests indicates that the distribution of efficiency scores for public institutions dominates that for private institutions, and the second set supports this by finding no evidence that the distribution for private institutions is greater than that for public institutions. This is strong evidence that public institutions are more efficient than private institutions. A p-value below 0.1 is needed to reject the null hypothesis.]

18 University: VHR vs. HR
Count: VHR 80, HR 83; Mean Efficiency: VHR 0.398, HR 0.338
p-values for EM and SD tests:
Group 1 / Group 2 | EM | SD
VHR / HR | 9.1×10^-90 | 0.021
HR / VHR | 0.999 | --
[Presenter notes: this is strong evidence that institutions with very high research activity are more efficient than institutions with only high research activity. A p-value below 0.1 is needed to reject the null hypothesis.]

19 Department: Data Description
National Research Council: Data-Based Assessment of Research-Doctorate Programs in the U.S.
Input: Faculty Count, Average GRE Scores
Output: PhD Graduates, Publication Counts
8 academic fields have sufficient data: Biology, Chemistry, Computer Science, Electrical and Computer Engineering, English, History, Mathematics, Physics

20 Department
Department p-values:
Department | Test for Convexity | Test for Constant Returns to Scale
Biology | 0.032 | --
Chemistry | 0.466 | 0.060
Computer Science | 0.368 | 0.999
Electrical and Computer Engineering | 0.078 | --
English | 0.003 | --
History | 8.4×10^-5 | --
Mathematics | 0.626 | 0.894
Physics | 0.214 |

21 Department: EPSCoR vs NonEPSCoR
Count/Mean Efficiency per group, and p-values for the EM/SD tests in both group orders:
Field | EPSCoR n/mean | NonEPSCoR n/mean | EM (1: EPSCoR) | SD (1: EPSCoR) | EM (1: NonEPSCoR) | SD (1: NonEPSCoR)
Biology | 35/0.81 | 86/0.88 | 2.9×10^-26 | 0.997 | 0.999 | --
Chemistry | 54/0.39 | 126/0.51 | 3.3×10^-31 | 0.858 | |
Computer Science | 30/0.3 | 97/0.49 | 3.1×10^-17 | | |
Electrical and Computer Engineering | 34/0.66 | 102/0.87 | 1.7×10^-6 | | |
English | 27/0.91 | 92/0.89 | 9.5×10^-272 | 0.648 | |
History | 30/0.92 | 107/0.92 | 0.0000 | 0.802 | |
Mathematics | 32/0.48 | 95/0.59 | 7.9×10^-6 | 0.953 | |
Physics | 41/0.44 | 120/0.59 | 4.4×10^-24 | | |
[Presenter note: a p-value below 0.1 is needed to reject the null hypothesis.]

22 Department: Public vs. Private
Count/Mean Efficiency per group, and p-values for the EM/SD tests in both group orders:
Field | Public n/mean | Private n/mean | EM (1: Public) | SD (1: Public) | EM (1: Private) | SD (1: Private)
Biology | 82/0.85 | 39/0.89 | 0.999 | -- | 2.8×10^-28 | 0.230
Chemistry | 130/0.45 | 50/0.53 | 5.3×10^-48 | 0.096 | |
Computer Science | 92/0.42 | 35/0.5 | 4.5×10^-217 | 0.984 | |
Electrical and Computer Engineering | 97/0.79 | 39/0.86 | 1.7×10^-96 | 0.127 | |
English | 81/0.89 | 38/0.92 | 0.9999 | 9.6×10^-233 | 0.3626 | |
History | 87/0.92 | 50/0.91 | 4.4×10^-228 | 0.9318 | |
Mathematics | 90/0.55 | 37/0.59 | 0.1734 | 0.8265 | |
Physics | 11/0.54 | 50/0.59 | 0.9138 | 0.0861 | 0.1917 | |
[Presenter note: a p-value below 0.1 is needed to reject the null hypothesis.]

23 Department: VHR vs. HR
Count/Mean Efficiency per group, and p-values for the EM/SD tests in both group orders:
Field | VHR n/mean | HR n/mean | EM (1: VHR) | SD (1: VHR) | EM (1: HR) | SD (1: HR)
Biology | 67/0.89 | 40/0.79 | 0.999 | -- | 2.5×10^-24 | |
Chemistry | 115/0.56 | 57/0.35 | 0.010 | 6.1×10^-10 | 0.989 | |
Computer Science | 95/0.5 | 29/0.28 | 2.5×10^-12 | | |
Electrical and Computer Engineering | 94/0.83 | 37/0.77 | 1.2×10^-24 | 0.950 | |
English | 85/0.89 | 32/0.91 | 5.5×10^-76 | 0.246 | |
History | 101/0.92 | 33/0.91 | 0.0000 | 0.935 | |
Mathematics | 94/0.61 | 32/0.42 | 0.0001 | 1.3×10^-5 | |
Physics | 117/0.63 | 42/0.35 | 0.968 | | |
[Presenter notes: this is strong evidence that institutions with very high research activity are more efficient than institutions with only high research activity. A p-value below 0.1 is needed to reject the null hypothesis.]

24 Implication
Efficiency estimation, together with hypothesis testing, provides insights for strategic decision making, particularly at the departmental level.
A lower efficiency estimate does not mean a program is not doing well.
Issues: lack of data, and the integration and curation of data.

25 Questions

