Baltic Sea Region University Network Marketing and Networking for Internationalization Seminar at Vilnius University, 25 November 2011.

Purpose and principles of the review:
- Addresses the most popular global university rankings
- Provides universities with an analysis of the methodologies, not a judgement or ranking of the rankings themselves
- Only publicly available and freely accessible information was used
- Efforts were made to discover what is actually measured, how the scores for individual indicators are calculated, how the final scores are calculated, and what the results actually mean.

Selection of rankings:
- Shanghai Ranking (ARWU)
- Times Higher - QS (until 2009) / Thomson Reuters (from 2010)
- US News & World Report / QS
- Reitor (Рейтор)
- Leiden Ranking
- Taiwan university research assessment ranking (HEEACT)
- CHE / die Zeit
- U-Map classification
- U-Multirank
- AHELO
- Webometrics

Global rankings cover no more than 3-5% of the world's universities.

Decrease of scores within the Top 400 universities: how big can the scores be for the remaining ~16,600 universities?

Indicators covering elite research universities only:
- "Quality of faculty" = staff winning Nobel Prizes (ARWU, Reitor)
- "Highly cited" = belonging to the world's Top 200 in each of 21 areas (ARWU)
- "Peer review" = nominating the 30 best universities from a pre-selected list (THE-QS and other QS-based rankings)
- Reputation survey(s) = nominating the 30 best (THE-QS, USN&WR, THE-TR)
- Universities considered: a selection from an elite group of universities (ARWU, THE, Reitor, Leiden)

Indicator scores are usually not the indicator values themselves:
- Each indicator has a dimension or denominator, e.g. article counts, staff numbers, citations per academic.
- To make indicator scores dimensionless, values are usually expressed as a percentage of the result of the "best" university.
- A Z-score is another option (the difference between the current value and the mean, divided by the standard deviation).
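To make the two options concrete, here is a minimal sketch with invented citations-per-academic figures for three hypothetical universities (names and numbers are illustrative only, not taken from any ranking):

```python
# Minimal sketch of the two normalisation options described above.
# University names and citation figures are invented for illustration.
from statistics import mean, pstdev

citations_per_academic = {"Univ A": 12.0, "Univ B": 7.5, "Univ C": 3.0}

# Option 1: express each value as a percentage of the "best" university's result.
best = max(citations_per_academic.values())
pct_of_best = {u: 100 * v / best for u, v in citations_per_academic.items()}

# Option 2: z-score = (value - mean) / standard deviation.
values = list(citations_per_academic.values())
mu, sigma = mean(values), pstdev(values)
z_score = {u: (v - mu) / sigma for u, v in citations_per_academic.items()}

print(pct_of_best)   # {'Univ A': 100.0, 'Univ B': 62.5, 'Univ C': 25.0}
print(z_score)       # Univ A ~ +1.2, Univ B 0.0, Univ C ~ -1.2
```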

Composite scores always contain the rankers' subjective view of quality:
- In all cases where a composite score is calculated from several indicators, ranking providers assign a weight to each indicator in the overall score.
- This means that the ranking provider's subjective judgement determines which indicators are more important (e.g. citations 10%, reputation 40%).
- In other words, the composite score reflects the ranking provider's concept of quality.
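A small sketch of how such a weighting works; the weights and the 0-100 indicator scores below are invented for illustration, not those of any actual ranking:

```python
# Sketch: how indicator weights turn into a composite score.
# Weights and indicator scores (0-100) are invented for illustration only.
weights = {"citations": 0.10, "reputation": 0.40, "staff_student": 0.20, "income": 0.30}

indicator_scores = {
    "Univ A": {"citations": 90, "reputation": 40, "staff_student": 70, "income": 60},
    "Univ B": {"citations": 60, "reputation": 85, "staff_student": 50, "income": 55},
}

def composite(scores):
    # Weighted sum: the weights, not the data, decide what counts as "quality".
    return sum(weights[k] * scores[k] for k in weights)

for univ, scores in indicator_scores.items():
    print(univ, composite(scores))   # Univ A 57.0, Univ B 66.5
# With reputation weighted at 40%, Univ B ends up ahead of Univ A despite a much
# lower citation score; a different weighting would reverse the order.
```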

Choosing between simple counts and relative values is not neutral:
- Using absolute values, a ranking favours large universities.
- Using relative values, a ranking allows small but efficient universities to compete with large ones.
- ARWU (Shanghai) and Webometrics predominantly use absolute numbers; HEEACT (Taiwan), THE-QS and THE-TR mainly use relative values (except for reputation surveys).
- The Leiden Ranking offers both size-dependent and size-independent rankings/indicators.
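A small illustration of the point, with invented numbers: the same two universities change places depending on whether the indicator is an absolute count or a per-staff value.

```python
# Invented data: one large and one small university.
universities = {
    "Large U": {"papers": 8000, "staff": 4000},   # 2.0 papers per staff member
    "Small U": {"papers": 1500, "staff": 500},    # 3.0 papers per staff member
}

by_absolute = sorted(universities, key=lambda u: universities[u]["papers"], reverse=True)
by_per_staff = sorted(universities, key=lambda u: universities[u]["papers"] / universities[u]["staff"], reverse=True)

print(by_absolute)    # ['Large U', 'Small U'] - absolute counts favour the large university
print(by_per_staff)   # ['Small U', 'Large U'] - relative values let the small one compete
```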

Rankings and the research mission of universities - indicators:
- Publication count in SCI & SSCI, Scopus - production
- Publication count in Nature & Science - excellence
- Publications per staff - staff research productivity
- Citations (count) - overall force of the HEI
- Citations per paper or per staff - impact
- Citations to articles in the top-impact journals - excellence
- Research income (by competition or direct allocation)
- Research reputation surveys
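To make the list above concrete, a small sketch computing these quantities from an invented publication record (four papers, two staff members; a "top journal" flag stands in for the top-impact-journal criterion):

```python
# Invented publication record: citations per paper and a flag for top-impact journals.
papers = [
    {"citations": 25, "top_journal": True},
    {"citations": 4,  "top_journal": False},
    {"citations": 0,  "top_journal": False},
    {"citations": 60, "top_journal": True},
]
staff = 2

publication_count = len(papers)                                  # production
citations_total = sum(p["citations"] for p in papers)            # overall "force"
citations_per_paper = citations_total / publication_count        # impact
publications_per_staff = publication_count / staff               # staff research productivity
top_journal_share = sum(p["top_journal"] for p in papers) / publication_count  # excellence proxy

print(publication_count, citations_total, citations_per_paper,
      publications_per_staff, top_journal_share)                 # 4 89 22.25 2.0 0.5
```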

Rankings and the teaching mission of HEIs - indicators:
- Alumni that have been awarded a Nobel Prize
- Staff/student ratio
- Teaching reputation surveys
- Teaching income
- Dropout rate
- Time to degree
- PhD/undergraduate ratio
All of the above are distant proxies, some strongly questionable. Learning outcomes - are we there yet?

BIASES AND FLAWS
Natural sciences and medicine vs. social sciences bias: bibliometric indicators primarily cover journal publications. But while natural and life scientists primarily publish in journals, engineering scientists publish mainly in conference proceedings (and prototypes), and social scientists and humanists publish in books and monographs.

Several indicators count across the 21 ISI broad areas: Agricultural Sciences, Biology & Biochemistry, Chemistry, Clinical Medicine, Computer Science, Ecology/Environment, Economics & Business, Engineering, Geosciences, Immunology, Materials Science, Mathematics, Microbiology, Molecular Biology & Genetics, Neuroscience, Pharmacology, Physics, Plant & Animal Science, Psychology/Psychiatry, Social Sciences (General), Space Sciences.

Different publication and citation cultures in different fields. [Table from the presentation of Cheng at the IREG 2010 conference in Berlin.]

Field normalisation - solutions and issues
Field-normalised citations per publication indicator (the Leiden 'crown indicator'):
- C_i is the number of citations of publication i
- e_i is the expected number of citations of publication i, given its field and year
Criticisms: it favours older publications and blurs the picture.
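The formula itself did not survive the transcript; as the old Leiden crown indicator is usually written, it is the ratio of total citations to total field- and year-expected citations:

```latex
% Field-normalised citations per publication ("crown" indicator), as usually written:
% the sum of actual citations divided by the sum of expected citations.
\mathrm{CPP/FCSm} \;=\; \frac{\sum_{i=1}^{n} C_i}{\sum_{i=1}^{n} e_i}
```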

Mean normalisation - solutions and issues
- A newer attempt (2010): the mean-normalised citation score (MNCS).
- A good idea, but now the results are unstable for the very newest publications (their e_i values change rapidly).
- To avoid this new flaw, a modified MNCS2 indicator is used, which leaves out publications of the last year.
- But in the end this only improves the mathematics; it does not address the fact that WoS and Scopus cover books insufficiently.
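For comparison, the MNCS is usually written as the mean of per-publication ratios rather than the ratio of sums:

```latex
% Mean-normalised citation score: the average of the per-publication ratios C_i / e_i.
\mathrm{MNCS} \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{C_i}{e_i}
```

Written this way, it is easier to see why very recent publications destabilise the score: for them e_i is small and changes quickly, so each ratio C_i/e_i can swing widely - which is exactly what MNCS2 avoids by leaving out the last year.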

'Peer review' biases and flaws
- Why call reputation surveys "peer reviews"?
- 'Peers' are influenced by the previous reputation of the institution (including its positions in other rankings) - just try nominating the 30 universities you know to be best at teaching in your subject...
- Limiting the number of universities that can be nominated (THE, QS-based rankings) makes the approach elitist and strengthens the dependence on previous reputation.
- Using pre-selected lists rather than allowing the 'peers' a free choice leaves out huge numbers of institutions.
- Is a 5% response rate a sufficient result?

Risks of overdoing it
- Even keeping one's current position in a ranking requires great effort (the 'Red Queen effect', J. Salmi, 2010).
- Rankings encourage universities to improve their scores.
- Universities are tempted to improve performance specifically in the areas measured by rankings.
- There is a risk that universities will concentrate funds and effort on the scores and pay less attention to issues that are not rewarded in ranking scores, such as quality of teaching, regional involvement, widening access, lifelong learning, and the social issues of students and staff.

Rankings and reforms in the EHEA
You will not be rewarded in rankings for: improving access to the next cycle, establishing an internal quality culture in universities, implementing the ESG, linking credits and programmes with learning outcomes, establishing qualifications frameworks, improving recognition of qualifications and credits, establishing flexible learning paths for LLL, establishing recognition of non-formal and informal learning, improving the social conditions of students, or making HE more accessible.

Direct abuses
- Merging universities just to get onto the league tables
- Standardised test scores of applicants
- Number of academic staff
- Student/staff ratio - using different definitions of staff and students, the ratio can be anywhere between 6:1 and 39:1
- Faculty salary - just plan when you pay it
- Reputation survey of students - tell the students to lie
- Nobel laureates - hire them
- More citations? - fund medicine, not the humanities
- Want to move a university 52 positions up the table? Want to use completely different indicators than announced? Go ahead...

Can rankings be improved?
- There will be no improvement from extending 5 distant proxies to 25 - they will still remain proxies...
- Improve the coverage of teaching - most probably through measuring learning outcomes.
- Lift the biases and eradicate the flaws of bibliometric indicators (field, language, regional), but first of all address non-journal publications properly!
- Change rankings so that they really help students make their choices.
- While addressing the elite only, ranking results affect the life of all universities - it is time to produce rankings that cover all universities!

Informing student choices – CHE university rankings

U-Map dimensions: student profile, teaching and learning, research, knowledge exchange, international orientation, regional engagement.

The new developments: U-Map
U-Map has two visualisation tools that allow users to classify HEIs and to make detailed comparisons of selected HEIs. (Source: U-Map)

The new developments: U-Multirank
U-Multirank is a multidimensional ranking covering all aspects of an HEI's work - education, research, knowledge exchange and regional involvement. No composite score is produced. It remains to be seen how well self-reported and student-satisfaction data will work in an international context, whether other parties will turn U-Multirank into a league table, and what the consequences will be.

The new developments: AHELO
The OECD's AHELO project is an attempt to compare HEIs internationally on the basis of actual learning outcomes. Three testing instruments will be developed: one for measuring generic skills and two for discipline-specific skills, in economics and engineering. A question yet to be answered is whether it is possible to develop instruments that capture learning outcomes and are perceived as valid in diverse national and institutional contexts.

Main conclusions
1. Since the arrival of global rankings, universities cannot avoid national and international comparisons, and this has changed the way universities function.
2. Criteria that are appropriate for the top research universities only are applied to judge all universities.
3. Rankings so far cover only some of the university missions.
4. Rankings, it is claimed, make universities more 'transparent'. However, the methodologies, especially those of the most popular league tables, still lack transparency themselves.

Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!
by John Godfrey Saxe (1816-1887)

Thanks for your attention