1
Standards in science indicators
Vincent Larivière
EBSI, Université de Montréal
OST, Université du Québec à Montréal
Standards in science workshop, SLIS, Indiana University, August 11, 2011
2
Current situation
Since the early 2000s, we have been witnessing:
1) An increase in the use of bibliometrics in research evaluation;
2) An increase in the size of the bibliometric community;
3) An increase in the variety of actors involved in bibliometrics (no longer limited to the LIS or STS communities);
4) An increase in the variety of metrics for measuring research impact: the h-index (with its dozens of variants), eigenvalues, SNIP, SCImago impact indicators, etc.;
5) The end of the ISI monopoly: Scopus, Google Scholar, and several other initiatives (SBD, etc.).
3
Why do we need standardized bibliometric indicators?
1) The lack of standards is symptomatic of the immaturity of the research field: no paradigm is yet dominant;
2) Bibliometric evaluations are spreading at the levels of countries, institutions, research groups, and individuals;
3) Worldwide rankings are spreading and often yield diverging results;
4) Standards reflect the consensus of the community and allow the various measures to be:
   1) Comparable
   2) Reproducible
4
Impact indicators
Impact indicators have been used for quite a while in science policy and research evaluation. Until quite recently, only a handful of metrics were available or compiled by the research groups involved in bibliometrics:
1) Raw citations
2) Citations per publication
3) Impact factors
Only one database was used: ISI. Only one normalization was made: by field (when it was done at all!).
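As a rough illustration of these traditional metrics, here is a minimal sketch computing raw citations, citations per publication, and a two-year journal impact factor; all counts are invented for illustration and are not from the presentation.

# Illustrative sketch (not the author's code) of the traditional metrics.

# Made-up citation counts for a unit's papers.
citations = [12, 0, 3, 7, 1]

raw_citations = sum(citations)                  # 1) raw citations
cites_per_pub = raw_citations / len(citations)  # 2) citations per publication

# 3) Two-year journal impact factor for year Y: citations received in Y
#    to items published in Y-1 and Y-2, divided by the number of citable
#    items published in Y-1 and Y-2 (both counts below are invented).
cites_to_previous_two_years = 450
citable_items_previous_two_years = 180
impact_factor = cites_to_previous_two_years / citable_items_previous_two_years

print(raw_citations, cites_per_pub, round(impact_factor, 2))  # 23 4.6 2.5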
5
Factors to take into account in the creation of a new standard
1) Field specificities: citation potential and aging characteristics;
2) Field definition: at the level of the journal or at the level of the paper? What about interdisciplinary journals?
3) Differences in the coverage of databases;
4) Distributions vs. aggregated measures;
5) Skewness of citation distributions (use of logs?);
6) The paradox of ratios (0, 1, ∞);
7) Averages vs. medians vs. ranks;
8) Citation windows;
9) Unit vs. fractional counting (see the sketch below);
10) Equal or different weight for each citation?
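A minimal sketch of unit vs. fractional counting (item 9), assuming hypothetical papers tagged with the countries of their authors; the country codes and counts are made up and not taken from the presentation.

# Illustrative sketch (not the author's code): unit (whole) counting gives
# every contributing country full credit for a paper; fractional counting
# splits the credit among the contributing countries.

# Hypothetical papers, each listed with the countries of its authors.
papers = [
    ["CA", "US"],        # bilateral collaboration
    ["CA"],              # single-country paper
    ["CA", "US", "FR"],  # trilateral collaboration
]

unit, fractional = {}, {}
for authors in papers:
    countries = set(authors)
    for c in countries:
        unit[c] = unit.get(c, 0) + 1                                # full paper
        fractional[c] = fractional.get(c, 0) + 1 / len(countries)   # shared paper

print(sorted(unit.items()))        # CA: 3, FR: 1, US: 2
print(sorted(fractional.items()))  # CA: ~1.83, FR: ~0.33, US: ~0.83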
6
Ex. 1: Impact indicators
An example of how a very simple change in the calculation method of an impact indicator can change the results obtained, even when very large numbers of papers are involved. Everything is kept constant here: same papers, same database, same subfield classification, same citation window. The only difference is the order of operations in the calculation: average of ratios (AoR) vs. ratio of averages (RoA). Both methods are considered standards in research evaluation. Four levels of aggregation are analyzed: individuals, departments, institutions, and countries.
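A minimal sketch of the two orders of operations, assuming each paper carries its citation count and the expected (world average) citation rate of its field; the numbers are invented for illustration.

# Illustrative sketch (not the author's code): average of ratios (AoR)
# versus ratio of averages (RoA) for field-normalized citation impact.

# Each paper: (citations received, expected citations for its field and year)
papers = [(10, 5.0), (0, 5.0), (3, 1.5), (40, 8.0)]

# AoR: normalize each paper first, then average the ratios.
aor = sum(c / e for c, e in papers) / len(papers)

# RoA: total (or average) citations divided by total expected citations;
# the number of papers cancels out, so sums suffice.
roa = sum(c for c, _ in papers) / sum(e for _, e in papers)

print(round(aor, 2), round(roa, 2))  # 2.25 vs. 2.72: the two orders differ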
7
Figure 1. Relation between RoA and AoR field-normalized citation indicators at the level of A) individual researchers (≥20 papers), B) departments (≥50 papers), C) institutions (≥500 papers), and D) countries (≥1000 papers).
8
Figure 2. Relationship between (AoR - RoA) / AoR and the number of papers at the level of A) individual researchers, B) departments, C) institutions (≥500 papers), and D) countries.
9
Ex. 2: Productivity measures
Typically, we count the research productivity of a unit by summing the distinct number of papers it produced and dividing that by the total number of researchers in the unit. Another method is to assign papers to each researcher of the group and then average their individual outputs. The two counting methods are correlated, but nonetheless yield different results:
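A minimal sketch of the two counting methods, using hypothetical researchers and paper identifiers; the divergence comes from papers co-authored within the unit.

# Illustrative sketch (not the author's code): two ways of measuring a
# unit's productivity per researcher.

# Hypothetical unit: the papers of each researcher; paper "p2" is
# co-authored by researchers A and B, so it appears in both outputs.
output = {
    "researcher_A": {"p1", "p2", "p3"},
    "researcher_B": {"p2", "p4"},
    "researcher_C": {"p5"},
}

# Method 1: distinct papers of the unit divided by the number of researchers.
distinct_papers = set().union(*output.values())
per_capita_distinct = len(distinct_papers) / len(output)   # 5 / 3 ≈ 1.67

# Method 2: average of the researchers' individual outputs.
per_capita_average = sum(len(p) for p in output.values()) / len(output)  # 6 / 3 = 2.0

print(round(per_capita_distinct, 2), per_capita_average)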
10
Difference in the results obtained for 1,223 departments (21,500 disambiguated researchers)