GLOBAL UNIVERSITY RANKINGS AND THEIR IMPACT
EUA Rankings Review
Lesley Wilson, EUA Secretary General
SEFI Conference, 28 September 2011, Lisbon


I. Purpose and principles of the review
- Addresses the most popular global university rankings, as well as some other international rankings.
- Provides universities with an analysis of the methodologies, rather than judging or ranking the rankings themselves.
- Uses only publicly available and freely accessible information on each ranking, rather than surveys or interviews with the ranking providers.
- Efforts were made to discover what is actually measured, how the scores for indicators are calculated, how the final scores are calculated, and what the results actually mean.

II. Selection of rankings included in the Review
Academic rankings with the main purpose of producing university league tables:
- Academic Ranking of World Universities (Shanghai)
- Times Higher Education World University Ranking – with Quacquarelli Symonds (until 2009), with Thomson Reuters (since 2010)
- Best Universities Ranking – US News & World Report with Quacquarelli Symonds
- Global Universities Ranking – Reitor (Рейтор, Russia)

Selection of rankings (contd.)
Rankings concentrating on research only:
- Leiden Ranking (Leiden University)
- Ranking of Research Papers for World Universities (HEEACT, Taiwan)
- Assessment of University-Based Research (AUBR) – EU
Multirankings – using a number of indicators without the intention of producing league tables:
- CHE/die Zeit University Ranking (CHE, Germany)
- CHE Excellence Ranking
- U-Map classification (CHEPS)
- European Multidimensional University Ranking System (U-Multirank) – EU-funded project

Selection of rankings (contd.)
Web rankings:
- Webometrics Ranking of World Universities (Webometrics, Spain)
Benchmarking based on learning outcomes:
- Assessment of Higher Education Learning Outcomes project (AHELO, OECD)

III. Characteristics
1. Global rankings cover no more than 3-5% of the world's universities.

2. The scores almost disappear by the 400th university. What does this mean for the scores of the remaining roughly 16,600 universities?

3. Main considerations emerging from the analysis
- The indicators used cover elite universities only.
- Indicator scores are not the indicator values themselves but the results of mathematical operations on them.
- Composite scores always contain subjective elements, reflecting qualitative choices made by each ranking provider (see the sketch below).
- Choosing between simple counts and relative values is not neutral: it rewards either size or concentration.
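To make the point about composite scores concrete, here is a minimal Python sketch with hypothetical indicators, values and weights (not any provider's actual formula): raw indicator values are typically rescaled against the best performer and then combined using weights chosen by the ranking provider, so the choice of weights, not the raw data alone, determines the final ordering.

```python
# Hypothetical sketch of how a composite ranking score can be built: raw
# indicator values are rescaled against the best performer (score 100) and
# then combined with weights chosen by the ranking provider. Both the
# rescaling and the weights are the provider's subjective choices.

raw = {
    "University A": {"publications": 12000, "citations_per_paper": 8.0, "reputation": 60},
    "University B": {"publications": 4000, "citations_per_paper": 14.0, "reputation": 75},
}
weights = {"publications": 0.4, "citations_per_paper": 0.4, "reputation": 0.2}

def composite_scores(raw, weights):
    # Rescale each indicator so that the best performer gets 100, then take
    # the weighted sum of the rescaled indicator scores.
    best = {k: max(values[k] for values in raw.values()) for k in weights}
    return {
        name: round(sum(weights[k] * 100 * values[k] / best[k] for k in weights), 1)
        for name, values in raw.items()
    }

print(composite_scores(raw, weights))
# {'University A': 78.9, 'University B': 73.3} -- shifting weight from
# publication volume to citations per paper reverses the order.
```

With these invented numbers University A comes out ahead; moving weight from publication volume to citations per paper puts University B first, which is exactly the kind of non-neutral choice the analysis points to.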

4. Indicators used to measure the research mission
Number of publications:
- ARWU: 2 indicators with a total weight of 40% (publications in Nature and Science; publications indexed in SCI and SSCI)
- HEEACT: 4 indicators with an overall weight of 50%
- THE-QS: none; THE-TR: 1 indicator with a weight of 4.5%
Number of citations (or citations per staff member)
Other research indicators:
- research reputation surveys
- research income
- intensity of PhD production

5. Rankings and the teaching mission of universities
- ARWU: quality of education = alumni who have been awarded a Nobel Prize or Fields Medal
- THE-QS: staff/student ratio; employer survey (answers worldwide: 3,281 in 2009, 2,339 in 2008)
- CHE: a number of indicators based on student opinion, plus reputation surveys of professors
- THE-TR: reputational survey on teaching, student/staff ratio, income per academic (in part), PhDs awarded per academic, PhDs vs. bachelor degrees awarded
- U-Map and U-Multirank: a number of indicators characterising the learning environment rather than its quality

IV. Main biases and flaws identified
1. Bias towards the natural sciences and medicine versus the social sciences
- Bibliometric indicators primarily cover journal publications.
- Natural and life scientists publish mainly in journals, engineering researchers mainly in conference proceedings, and social scientists and humanities scholars mainly in books.

Example: the 21 broad subject areas defined by ISI
1. Agricultural Sciences; 2. Biology & Biochemistry; 3. Chemistry; 4. Clinical Medicine; 5. Computer Science; 6. Ecology/Environment; 7. Economics & Business; 8. Engineering; 9. Geosciences; 10. Immunology; 11. Materials Science; 12. Mathematics; 13. Microbiology; 14. Molecular Biology & Genetics; 15. Neuroscience; 16. Pharmacology; 17. Physics; 18. Plant & Animal Science; 19. Psychology/Psychiatry; 20. Social Sciences, General; 21. Space Sciences

2. Different publication and citation cultures in different fields
[Table from the presentation by Cheng at the IREG 2010 conference in Berlin]

3. Field normalisation – solutions and issues
- Field-normalised citations per publication (the Leiden 'crown indicator')
- A newer attempt (2010): the mean normalised citation score (MNCS), which averages per-publication ratios instead of dividing totals (see the sketch below)
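As a rough illustration of the difference between the two aggregation choices (the citation counts and field baselines below are invented, not Leiden data): the older crown indicator divides total citations by total field-expected citations, while the MNCS normalises each publication first and then averages.

```python
# Sketch (with invented numbers) of the two aggregation choices behind the
# Leiden field-normalised indicators. Each paper's citation count is compared
# with the world average for its field and publication year.

papers = [
    # (citations received, world average citations for the paper's field and year)
    (30, 10.0),  # well-cited paper in a high-citation field
    (1, 2.0),    # paper in a low-citation field (e.g. mathematics)
    (0, 2.0),
]

# Older "crown indicator" (CPP/FCSm): ratio of the sums.
crown = sum(c for c, _ in papers) / sum(e for _, e in papers)

# Mean normalised citation score (MNCS, 2010): mean of the per-paper ratios.
mncs = sum(c / e for c, e in papers) / len(papers)

print(f"crown indicator = {crown:.2f}, MNCS = {mncs:.2f}")  # 2.21 vs 1.17
```

With these numbers the two indicators differ noticeably: the ratio of sums is dominated by papers from high-citation fields, which was part of the motivation for the revised indicator.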

4. The impact factor – to be used with care
- Especially in the social sciences and humanities, expert rankings do not correlate well with impact factors (EU Working Group on the assessment of university-based research, 2010).
- "The impact factor should not be used without careful attention to the many phenomena that influence citation rates, as for example the average number of references cited in the average article. The impact factor should be used with informed peer review" (Garfield, 1994).
- "By quantifying research activity and impact solely in terms of peer-publication and citations, rankings narrowly define 'impact' as something which occurs only between academic 'peers'" (Hazelkorn, 2011).
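For reference, the usual two-year impact factor behind these caveats can be written out in a few lines (the figures below are invented for illustration):

```python
# Sketch of the classic two-year journal impact factor: citations received in
# year Y to items the journal published in years Y-1 and Y-2, divided by the
# number of citable items it published in those two years. Figures are invented.

citations_received_2011 = {2009: 450, 2010: 350}  # citations in 2011 to each cohort
citable_items = {2009: 200, 2010: 180}            # articles and reviews published

jif_2011 = sum(citations_received_2011.values()) / sum(citable_items.values())
print(f"2011 impact factor = {jif_2011:.2f}")  # 800 / 380 = 2.11
```

Because this single ratio depends on the citation habits of each field, Garfield's advice above applies: it should be read alongside informed peer review, not instead of it.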

5. 'Peer review' biases and flaws
- Why are reputation surveys called 'peer reviews'?
- 'Peers' are influenced by the previous reputation of the institution, i.e. a university that ranks highly in one ranking is likely to obtain a high reputation score.
- Limiting the number of universities that respondents may nominate (THE rankings) makes the approach elitist – and probably dependent on previous reputation.
- Using pre-selected lists rather than allowing 'peers' a free choice means that large numbers of institutions are left out.
- Is a 5% response rate sufficient?

V. Recent developments
- Assessment of University-Based Research (AUBR): analysis of research indicators and their suitability, working out a methodology for research assessment.
- U-Map uses indicators that characterise the focus and intensity of various aspects of HEIs' activity.
- U-Multirank will be a multidimensional ranking covering all aspects of an HEI's work – education, research, knowledge exchange and regional involvement. No composite score is produced.
- The OECD's AHELO project is an attempt to compare HEIs internationally on the basis of actual learning outcomes. Three testing instruments are being developed: one measuring generic skills and two testing discipline-specific skills, in economics and engineering.

VI. Consequences
1. Even keeping one's current position requires great effort
Salmi attributes this phenomenon to the 'Red Queen effect' (Salmi, 2010). "In this place it takes all the running you can do, to keep in the same place", says the Red Queen in Lewis Carroll's 'Through the Looking Glass'. The principle was later also articulated in relation to biological systems (Van Valen, 1973): in "an evolutionary system, continuing development is needed just in order to maintain its fitness relative to the systems it is co-evolving with" (Heylighen, 1993).

2. The risks of overemphasising rankings
Rankings encourage universities to improve their scores. Universities are strongly tempted to improve performance specifically in the areas measured by rankings, e.g. research in the sciences and medicine. The risk is that, by concentrating funds and effort on what rankings reward, universities pay less attention to issues that are not reflected in ranking scores, e.g. the quality of teaching, regional involvement, widening access, lifelong learning, and the social issues of students and staff.

3. How can rankings be improved?
- There will be no improvement from extending 5 distant proxies to 25 – they will still remain proxies.
- Improve the coverage of teaching, most probably by measuring learning outcomes.
- Lift the biases and eradicate the flaws of bibliometric indicators (field, language, regional), and above all treat non-journal publications properly.
- A number of university rankings claim that they help students make their choices: so change rankings so that they reflect what students need, not what is easy to measure.
- Rankings cover only top institutions but affect all universities – can rankings be changed to address a larger number?

VII. Main conclusions
1. The arrival on the scene of global rankings has galvanised the world of higher education. Universities can no longer avoid national and international comparisons, and this has changed the way they function.
2. De facto, the methodologies of global rankings give stable results for only a limited number of universities. The majority of universities are left out of the equation, yet the result is that all HEIs are judged according to criteria appropriate for the top research universities only.
3. Rankings so far cover only some university missions.

Main conclusions (contd.)
4. Rankings, it is claimed, make universities more 'transparent'. However, the methodologies, especially those of the most popular league tables, still lack transparency themselves.
5. The lack of suitable indicators is most apparent when measuring teaching performance. The situation is better when evaluating research, but even the bibliometric indicators have their biases and flaws. Efforts to improve methodologies usually address the calculation method, while the real problem is the use of inadequate proxies, or the omission of part of the information due to methodological constraints.

Main conclusions (contd.)
6. At present, it would be difficult to argue that the benefits offered by the information rankings provide, and by the increased 'transparency', outweigh the negative effects of their so-called 'unwanted consequences'.
7. New attempts such as the EU's AUBR research assessment, U-Map, U-Multirank and AHELO all aim to improve the situation. These new tools are still at various stages of development or pilot implementation, and all of them still have to overcome difficult issues, particularly problems of data collection and the development of new proxies.
8. Higher education policy decisions should not be based solely on rankings data.

Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!
– John Godfrey Saxe (1816–1887)