1
Journal Metrics Iran Research Excellence Forum Tehran, October 2014
Dr. Basak Candemir Customer Consultant, Elsevier BV
2
Today’s Agenda
Impact Factor
SJR
SNIP
Altmetrics
H-index and variants
Scopus Journal Analyzer and other metrics
SciVal and Field-Weighted Citation Impact
3
Bibliometrics A set of methods used for the analysis of scientific literature, based on bibliometric data included in the documents, such as the number of articles published and the citations made to papers. Bibliometrics can be used to analyse the quality and quantity of the work produced by an author, an institution, a journal, a whole discipline, or a country.
4
“There is no single ‘best’ indicator that could accommodate all facets of the new reality of bibliometrics.” - Wolfgang Glänzel, head of the bibliometrics group and professor at KU Leuven, Belgium. Two independent journal ranking metrics have been added to assist with journal evaluation and to support related decisions.
5
Impact Factor Originated by Eugene Garfield in 1955, evolving into the Science Citation Index in 1961. Approximates the average number of citations per recent paper in a journal. Calculated yearly since 1975.
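To make the calculation concrete, the two-year Impact Factor of a journal in year Y can be written out as follows; this is the standard JCR definition, shown here purely for illustration:

```latex
\mathrm{IF}_Y \;=\;
\frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
     {\text{citable items (articles and reviews) published in years } Y-1 \text{ and } Y-2}
```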
6
Currently in the market? – Impact Factor
Impact Factor pros
Easy to understand
Pervasive – a stranglehold
Impact Factor cons
Little transparency – the underlying database is not publicly available, so Impact Factors cannot be reconstructed
The available citation windows are biased: 2 years favours rapidly moving fields, 5 years favours slowly moving fields
Subject field differences
Easy to mislead and manipulate
7
Criticisms of the Impact Factor: 1
Only a limited subset of journals is indexed by ISI: only citations made by the ~10,000 “ISI journals” are used
Some disciplines are especially poorly covered
Biased toward English-language journals, although ISI has recently added several hundred non-English journals
A short (two-year) snapshot of a journal: some disciplines use older material more, or take time to cite new research; the JCR now also includes 5-year data
It is an average, and not all articles are equally well cited – e.g. look up the articles that have been published in the journal Chemical Senses (WoS / Cited Ref Search / Cited Work = Chem Senses)
8
Criticisms of the Impact Factor: 2
Includes self-citations
Only includes “citable” articles, i.e. articles and reviews, in the denominator of the equation
Editors may skew the IF by increasing the number of review articles, which bring in more citations…
…or by increasing the number of “news” items (e.g. in Science or general medical journals), which are cited (appear in the numerator) but are not considered “citable” (and so aren’t in the denominator)
It is expensive to subscribe to the JCR
9
Which Journal is the Best Journal?
Impact Factor 2012*
Pain: 6.125
Nature Genetics: 38.597
Annals of Mathematics: 3.027
Computers & Operations Research: 2.374
Progress in Energy and Combustion Science: 17.778
Addiction Biology: 5.914
Remote Sensing of Environment: 6.144
*Journal Citation Reports 2013
Answer: all of them are the best journals in their subject areas. With the IF, journals from different subject fields CANNOT be compared.
10
SJR- SCImago Journal Rank
Developed by Felix de Moya, CSIC (Spanish Research Council). A prestige metric – not all citations are the same: citations are weighted depending on the status of the source they come from.
11
SJR- SCImago Journal Rank
Life Sciences journal: high impact, lots of citations – one citation = low value
Arts & Humanities journal: low impact, few citations – one citation = high value
SJR normalizes for differences in citation behaviour between subject fields
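The prestige idea behind SJR can be illustrated with a small, made-up citation matrix and an eigenvector-style iteration. This is only a minimal sketch of the general principle that citations are weighted by the citing source’s own prestige, not the actual SCImago algorithm, which adds its own normalisation and size corrections; the journal names and citation counts are invented:

```python
import numpy as np

# Hypothetical journals; cites[i][j] = citations from journal i to journal j.
journals = ["Journal A", "Journal B", "Journal C"]
cites = np.array([
    [0.0, 10.0,  2.0],
    [5.0,  0.0,  1.0],
    [8.0,  3.0,  0.0],
])

# Start from equal prestige and iterate: each citation transfers prestige in
# proportion to the citing journal's own prestige (a PageRank-like idea).
prestige = np.ones(len(journals)) / len(journals)
damping = 0.85
for _ in range(100):
    outgoing = cites.sum(axis=1, keepdims=True)   # total citations given by each journal
    transfer = (cites / outgoing).T @ prestige    # prestige flowing into each journal
    prestige = (1 - damping) / len(journals) + damping * transfer

for name, score in zip(journals, prestige):
    print(f"{name}: prestige = {score:.3f}")
```

In this sketch a single citation from a high-prestige journal raises the cited journal’s score more than several citations from low-prestige journals, which is exactly the “not all citations are the same” point on the slide.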
13
SNIP-Source Normalised Impact per Paper
Developed by Henk Moed, CWTS (Centre for Science and Technology Studies), Leiden University. Measures the average citation impact of the publications of a journal, correcting for differences in citation practices between scientific fields and therefore allowing more accurate between-field comparisons of citation impact. SNIP is field-normalized: it depends on the likelihood of citation in the subject field of the source.
14
SNIP: Source-normalized impact per paper
All 20K journals have a Source-normalized impact per paper (SNIP), which measures contextual citation impact by weighting citations per subject field, taking into account:
- Peer-reviewed papers only
- The field’s frequency and immediacy of citation
- Database coverage
- The journal’s scope and focus
- Measured relative to the database median impact per publication (IPP)
SNIP is the ratio of the journal’s IPP to the citation potential in its subject field.

Journal | IPP | Cit. Pot. | SNIP (IPP / Cit. Pot.)
Inventiones Mathematicae | 1.5 | 0.4 | 3.8
Molecular Cell | 13.0 | 3.2 | 4.0
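The ratio in the table can be checked directly. A small sketch, assuming SNIP is simply the journal’s IPP divided by the citation potential of its field (the slide’s rounded values differ slightly because they are computed from more precise inputs):

```python
# Values taken from the slide's table; SNIP = IPP / citation potential.
journals = {
    "Inventiones Mathematicae": {"ipp": 1.5, "cit_pot": 0.4},
    "Molecular Cell": {"ipp": 13.0, "cit_pot": 3.2},
}

for name, v in journals.items():
    snip = v["ipp"] / v["cit_pot"]
    print(f"{name}: SNIP = {snip:.2f}")
# Inventiones Mathematicae: SNIP = 3.75 (3.8 on the slide)
# Molecular Cell: SNIP = 4.06 (4.0 on the slide)
```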
15
Example of publishers promoting journal metrics on their journal website
16
Altmetric “Altmetric for Scopus is a powerful 3rd party web application that runs within the sidebar of Scopus article and abstract pages. It's a quick and easy way to see all of the social or mainstream media mentions gathered for a particular paper as well as reader counts on popular reference managers.” Learn more at:
17
Integration of article level metrics into Scopus
Mendeley readership statistics show how many times Mendeley users have downloaded a specific article to their libraries. Altmetric is a way to see all of the social or mainstream media mentions gathered for a particular paper, as well as reader counts on popular reference managers.
18
H-Index Originated by Jorge Hirsch in 2005. A group of papers has index h if h of the papers have at least h citations each, and the other papers have no more than h citations each. Attempts to measure both the productivity and the impact of a scholar’s published work.
19
An example from Iran… The h-index is a metric that measures both the productivity and the quality of a researcher or an institution. For example, assume that a researcher has 5 articles in Scopus-indexed journals and each of his articles has received 100 citations. His h-index is 5, because the h-index is limited by the number of publications: he writes very high-quality articles and receives many citations, but he is not productive; he needs to contribute more to science. On the other hand, assume there is another researcher who has 100 articles in Scopus-indexed journals, each of which has received 2 citations. Her h-index is 2: she is very productive, but her articles evidently do not contribute much to science, as they receive few citations. Alternatively, she may be working in a very niche subject area, which is why she could not receive many citations; researchers and administrators should assess this as well. Basically, to calculate the h-index, a researcher’s articles are ranked from the most cited to the least cited, and the intersection of the number of articles and the number of citations gives his or her h-index. The h-index of an entity is 9 if its top 9 most-cited publications have each received at least 9 citations; it is 13 if its top 13 most-cited publications have each received at least 13 citations; and so on.
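A minimal sketch of the ranking procedure described above, computing the h-index from a list of per-paper citation counts and reproducing the two researchers in the example:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([100] * 5))   # 5 papers with 100 citations each -> h-index 5
print(h_index([2] * 100))   # 100 papers with 2 citations each -> h-index 2
```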
20
g-index & m-index
g-index: a variant of the h-index that emphasizes the most highly cited papers in a data set; it is always the same as or higher than the h-index.
m-index: a variant of the h-index that expresses the h-index per year since first publication. The h-index tends to increase with career length, and the m-index can be used in situations where this is a shortcoming, such as comparing researchers within a field who have very different career lengths. The m-index inherently assumes unbroken research activity since the first publication.
The values of h-indices are limited by the citation count of an entity, and tend to be highest in subject fields such as biochemistry, genetics and molecular biology; this reflects distinct publication and citation behaviour between subject fields and does not necessarily indicate a difference in performance. It is not advisable to compare the h-indices of entities that fall entirely into distinct disciplines, such as a researcher in genetics with a researcher in human-computer interaction.
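Both variants can be sketched from the same per-paper citation data using their usual definitions: g is the largest rank at which the top g papers have at least g² citations in total, and m divides the h-index by the number of years since first publication. The citation counts and career length below are invented:

```python
def g_index(citations):
    """Largest g such that the top g papers together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

def m_index(h, years_since_first_publication):
    """h-index divided by career length in years."""
    return h / years_since_first_publication

citations = [10, 5, 3, 1, 1, 0, 0]                       # invented per-paper citation counts
print(g_index(citations))                                # 4: driven by the highly cited papers
print(m_index(h=5, years_since_first_publication=10))    # 0.5
```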
21
Analyze Journals in Scopus
22
Analyze Journals in Scopus
23
Average 5-year IF for different Subject Areas
Source: Elsevier analysis of Thomson Reuters’ data
24
“Using the Impact Factor alone to judge a journal is like using weight alone to judge a person’s health.” Source: The Joint Committee on Quantitative Assessment of Research: “Citation Statistics”, a report from the International Mathematical Union
25
SciVal metrics – a snapshot
26
SciVal metrics – Field-Weighted Citation Impact
Indicates how the number of citations received by an entity’s publications compares with the average number of citations received by all other similar publications in the data universe.
FWCI = 1: world average
FWCI > 1: cited more than the global average
FWCI < 1: cited less than the global average
The Field-Weighted Citation Impact (FWCI) for a set of N publications is defined as the average, over those N publications, of the ratio between the citations each publication received and the citations expected for similar publications. Similar publications are those publications in the Scopus database that have the same publication year, publication type and discipline, as represented by the Scopus journal classification system. Publications are assigned to disciplines using publication-driven assignment, which assumes that publications within a journal may have additional or different relevance to fields outside the core focus of the journal’s scope. Publication-driven assignment offers the benefit of being able to assign individual publications from a journal separately to their relevant classifications, which is important for publications in multi-disciplinary journals.
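Written out explicitly, the standard SciVal definition, with c_i the citations received by publication i and e_i the average citations received by similar publications, is:

```latex
\mathrm{FWCI} \;=\; \frac{1}{N} \sum_{i=1}^{N} \frac{c_i}{e_i}
```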
27
Thank you, any questions?
Dr. Basak Candemir