
Author-level bibliometric indicators Interactive workshop




1 Author-level bibliometric indicators Interactive workshop
DARMA annual meeting, 2017, Nyborg Lorna Wildgaard, PhD Royal School of Library and Information Science, University of Copenhagen M / Marianne Gauffriau Research Support, University of Copenhagen and The Royal Library M / or

2 fundraising meetings supervision teaching public debate consulting

DARMA Bibliometrics: the analysis of publications and citations, a.k.a. quantitative research evaluation fundraising meetings supervision teaching public debate consulting

3 Can you give examples of how the indicators are used?
DARMA Can you give examples of how the indicators are used? H-index “The h-index tells us something about the individual author’s publication volume and how often the author (or the articles) has been cited in other scientific publications.” “H-index can be used to track one’s own impact factor in a field and may be asked for by future employers etc.” Journal Impact Factor “Journal impact factor is used by journals to track their impact and reflects the ratio of published articles to citations.” “In my field of research, the higher the IF the better. Every researcher aims at promoting his/her work and I believe it is generally accepted that the classic IF is used as a measure of this.”

4 H-index from Web of Science – example Prof. Eske Willerslev
DARMA H-index from Web of Science – example Prof. Eske Willerslev

5 H-index from Web of Science - example Prof. Eske Willerslev
DARMA H-index from Web of Science - example Prof. Eske Willerslev

6 DARMA Journal Impact Factor (JIF) from Journal Citation Reports - example New England Journal of Medicine

7 Journal Impact Factor (JIF) - example New England Journal of Medicine
DARMA Journal Impact Factor (JIF) - example New England Journal of Medicine JIF

8 Example from call: European Research Council
DARMA Example from call: European Research Council (ERC) – Consolidator Grant Profile 2017 (PhD age 7-12 years) Marianne

9 Example from call: European Research Council
DARMA Example from call: European Research Council (ERC) – Consolidator Grant Profile 2017 (PhD age 7-12 years) How do we define “major”? JIF? Top percentile normalised for field? How do we normalise when the criterion is “multi-disciplinary”? What about major national journals? Can these be regarded as important internationally? Lorna

10 INDEFENSIBLE USE OF JIF BY THE RESEARCHER

DARMA INDEFENSIBLE USE OF JIF BY THE RESEARCHER 2001 (cumulative I.F. = 99.506) 1.42) Cerveri I, Accordini S, Verlato G, Corsico A, Zoia MC, Casali L, Burney P, de Marco R, for ECRHS-Study Group (2001) Variations in the prevalence of chronic bronchitis and smoking habits in young adults across countries. The European Community Respiratory Health Survey (ECRHS). Eur Respir J, 18: (I.F. 01=2.989) 1.43) Verlato G, Polati E, Speranza G, Finco G, Gottin L, Ischia S (2001) Both right and left cervical cordotomies depress sympathetic indexes derived from heart rate variability in humans. J Electrocardiol, 34: (I.F. 01=0.627) 1.44) de Manzoni G, Verlato G, Di Leo A, Tasselli S, Bonfiglio M, Pedrezzoni C, Guglielmi A, Cordiano C (2001) Rhesus D-phenotype does not provide prognostic information additional to TNM staging in gastric cancer patients. Cancer Detection and Prevention, 25: (I.F. 01=1.324)

11 Example from call: European Research Council
DARMA Example from call: European Research Council (ERC) – Consolidator Grant Profile 2017 (PhD age 7-12 years) How do we define “major”? JIF? Top percentile normalised for field? How do we normalise when the criterion is “multi-disciplinary”? What about major national journals? Can these be regarded as important internationally? How is main author defined? Are we looking at position on the author byline? Does the applicant have to include co-author statements? A narrative describing contribution? “Pressured authorship”: after a washout period the supervisor is a fellow researcher. Agentic vs communal qualities. Where does the data come from? What is the comparison? What information do you want? How will this information be used?

12 IS THIS MORE INFORMATIVE?? (AGENTIC QUALITIES)
DARMA IS THIS MORE INFORMATIVE?? (AGENTIC QUALITIES)

13 BECAUSE WHAT DO INDICATORS MEASURE?
DARMA BECAUSE WHAT DO INDICATORS MEASURE? e=18 𝑿 =3 CPP=18 t=4 m=1 h=8 simple indicators: 𝑿 (median), t (geometric), e (square root of the excess citations to highly cited papers, i.e. the excess beyond h). Low precision, e.g. the h-index. Easy to understand. Limits are obvious, and can be supplemented by other indicators (quantitative/qualitative), other datasets and short narratives – this requires a behavioural change in the readers of bibliometric analyses, so that they do not just look at the tables but read the supplementary texts. Open for new measures
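As a minimal sketch of how two of the simple indicators above (h and CPP) are computed from a list of per-paper citation counts – the citation values in the example are invented for illustration:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def cpp(citations):
    """Citations per publication: the plain average."""
    return sum(citations) / len(citations)

# Hypothetical portfolio of 11 papers:
papers = [25, 20, 9, 8, 8, 8, 8, 8, 2, 1, 0]
print(h_index(papers))  # 8: eight papers have at least 8 citations each
```

Note how insensitive h is: the 25- and 20-citation papers count no more towards h than the 8-citation ones, which is why the slide pairs it with other simple measures such as the median and CPP.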

14 DARMA

DARMA A SEA OF INDICATORS… Don’t ramble. Be clear – ALI have families; explain colours using the same variables. Closer to raw output means less complicated mathematical properties. Something was missing, the analysis superficial – at the next level: even within a family, do the indicators yield the same results, are the mathematical properties the same, the performance characteristics? So we had to study them in more detail. Through a literature review of published and informal documentation (blogs) (Paper 1) we identified 108 ALI, which we scored on a 5-point scale. We are only interested in indicators that are simple to apply and can be computed using available data, not special software or advanced, complex mathematics, so indicators that scored 3 or less were kept in the set. To do this, all indicators were put in a large matrix (Paper 2), recording for each indicator the mathematical definition, what it is designed to indicate, advantages/disadvantages as discussed in the literature, complexity of data collection and calculation, and other comments. Close to output: simple; further away: more complicated. Family traits. The method for learning more about ALI was investigated through the research papers. Out of 108 indicators, 79 were kept. Increase awareness of the range of options there are. Indicators alone only capture a fraction of certain aspects of researcher performance. But this grouped species rather than characteristics. Relationship between the analysed indicators and the publication activities they purport to measure: Indicators of publication count (output): methods of counting scholarly and scientific works, published or unpublished depending on the unit of assessment. Indicators that qualify output: Journal impact: the impact of the researcher’s articles or chosen journals, to formally suggest the potential visibility of the researcher’s work in the field in which he/she is active.
Researcher impact: the impact of the researcher’s portfolio of work, indicated using a combination of indicators from the output and effect categories to formally suggest the productivity of the researcher and the visibility of their work in the field in which he/she is active. Indicators of the effect of output: Effect as citations: methods of counting citations, whole or fractional counts. Effect of output as citations relative to number of publications, relative to field: indicators that compare the researcher’s citation count to the expected performance in their chosen field. Effect of output as citations relative to number of publications, relative to portfolio: indicators that normalize citations to the researcher’s portfolio. Indicators that rank the portfolio of an individual: indicators of the level and performance of all of the researcher’s publications, or of selected top-performing publications. These indicators rank publications by the number of citations each publication has received and establish a mathematical cut-off point for what is included in or excluded from the ranking. They are subdivided into: h-dependent indicators, h-independent indicators, h adjusted to field, and h adjusted for co-authorship. Indicators of impact over time: indicators of the extent to which a researcher’s output continues to be used, or of the decline in its use: indicators of impact over time relative to the researcher’s portfolio, and indicators of impact over time relative to field. As indicators get more refined their complexity increases, and as such we assume they are designed for the bibliometric community to use, not end-users. The results show that at the current time 1) certain publication activities and effects are more easily evaluated using bibliometrics than others, 2) assessment of publication performance cannot be represented by a single indicator, and 3) it is unwise to use citations as anything other than an indication of impact.
Our clarification of how the indicators are calculated clearly demonstrates that the majority of indicators are different approximations of the average number of citations to publications in a dataset. Which indicator is the best approximation of the average depends on the data used in the calculation.

16 ONE AUTHOR, MANY INDICATORS
DARMA ONE AUTHOR, MANY INDICATORS Raw data: 50 publications, 3538 cites, CPP 70.8, max cites 1069. Core definitions: h=27, g=50, f=36, t=41, w=43, h(2)=9, wu=9, hg=36.7. Full citation indices: rat-h=28, real h=27, W(q)=9(15), tapered h=38.7, j=29.8, w=407. Self-citation indices: sharpened h (self only)=25, b (avg. self rate)=25.5, b (avg. self/co-au rate)=23.7, sharpened h (self/co-au)=24. Multiple author indices: Hi=7.2, pure h=13.9, adapt pure h=24.3, hm=17.6, profit p=0.53
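Among the core definitions listed above, the g-index is the easiest to contrast with h: g is the largest number such that the top g papers together have at least g² citations, so it rewards the highly cited papers that h ignores. A sketch, using an invented citation list (not the author’s real data above):

```python
def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations."""
    g, running_total = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

papers = [10, 8, 5, 4, 3]
print(g_index(papers))  # 5: the top 5 papers have 30 citations, and 30 >= 25
```

Because the cumulative citation total grows at least as fast as the h-count, g is always at least as large as h for the same portfolio – which is why the slide shows g=50 against h=27.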

17 ONE AUTHOR, MANY INDICATORS
DARMA ONE AUTHOR, MANY INDICATORS Time-based indices: m-quotient=1.7, ar=18.9, dynamic h=104, hpd=28, contemporary h=20, trend h=26, f index=17. Core description indices: v=56%, a=66, m=69, r=57.5, weighted h=49, π=21.9, Q2=43.2, e=50.8. Core vs tail indices: h2 upper=72.7%, h2 center=20.5%, h2 tail=6.8%, k=897.8, p=61.6, ph ratio=2.3, multi-dimensional h=(27,9,4,3,2,1,1,1)

18 DARMA

19 USE OF INDICATORS IN GRANT APPLICATIONS
DARMA USE OF INDICATORS IN GRANT APPLICATIONS Gender differences Researcher: Women believe that discrimination limits their opportunities in grant applications, as there are relatively few women employed in high-level faculty positions, though this masculinity lessens for lower-level positions Evaluator: Evaluators rate CVs and journal articles lower on average for women than for men Gender stereotypes Researcher: “Communal” qualities, such as being nice, compassionate or detailed, are associated with women. Agentic qualities, i.e. being assertive or competitive, are associated with men Evaluator: The results of being competitive or assertive are measurable, e.g. winning awards or initiating projects, whereas researchers are not awarded a grant or published because they are “nice” or “compassionate”. Common psychological considerations Methodological type of publications

20 USE OF INDICATORS IN GRANT APPLICATIONS
DARMA USE OF INDICATORS IN GRANT APPLICATIONS Cultural differences Researcher: People from some countries promote themselves in a much less positive manner than others, due to cultural differences in modesty rather than in self-esteem. Cross-cultural differences affect self-enhancement Evaluator: Which criteria should be used to account for variance in measures of self-esteem across the academic lifespan, and for the effect of age, gender, ethnic groupings and variances in self-esteem on academic profiles? Uncertainty Researcher: The individual will seek and utilize whatever information is available that will increase their subjective validity. Evaluator: By providing relevant information, uncertainty is reduced. Too much information can quickly become meaningless and hide relevant information How bibliometrics can objectively contribute to the application process

21 USE OF INDICATORS IN GRANT APPLICATIONS
DARMA USE OF INDICATORS IN GRANT APPLICATIONS Social comparison Researcher: The performance of relevant others is used to inform social comparison. Being out-performed and having to document it is detrimental to the researcher’s self-definition, and is theorized to be most extreme when the social comparisons are acquaintances or colleagues rather than strangers. In the event of a sharp discrepancy between the individual’s performance and the performance of others, the individual will be more susceptible to influence and the application will become unstable due to lack of self-confidence. Evaluator: Using social comparison indicators can provide positive self-enhancement possibilities. Documenting influence reduces uncertainty about the individual’s abilities. Individuals will choose not to report the results of the indicators if these expose them as low achievers compared to their peers, resulting in incomplete or missing indicators. Pimping the application, if guidelines are not given or a narrative explaining the indicator values is not included. Narratives encourage a dialogue between applicant and evaluator, but a framework needs to be provided, as applicants need to be able to edit themselves!

22 USE OF INDICATORS IN GRANT APPLICATIONS
DARMA USE OF INDICATORS IN GRANT APPLICATIONS Self-worth Researcher: The pressure to publish in certain sources means that researchers see their self-worth as contingent on publication success, which is easy to measure bibliometrically. Researchers in grant applications can be tempted to self-regulate their publishing successes or failures to maintain positive self-views Evaluator: Simple ‘ready to use’ indicators do not set publication lists in the context of the researcher’s gender, seniority, specialty, affiliation and discipline. Feedback Researcher: Self-improvement and self-protection arise in many situations and can come into conflict. In self-protection the individual may ignore indicators that result in useful negative feedback, whereas self-improvement would require attention to this information, even though it could be damaging to the researcher’s self-esteem Evaluator: Feedback is circumstantial and depends on how malleable the evaluation is. “Field relevant bibliometric indicators” – guidelines for use/and inform of use

23 DARMA SUMMARY Using bibliometric indicators does not make the evaluators’ or the application work any easier; it makes it more detailed. It is anticipated that the individual will seek and utilize whatever information is available that will increase their subjective validity and self-worth. Likewise, the more partial and unreliable the information used to calculate an indicator, the less stable the indicator will be and the more uncertain the assessment of the researcher. Knowing what data is and is not included in indicators can reduce misinterpretation that could cause fabricated self-images and damaged reputations. The more substantiating, consistent evidence the individual provides to inform the CV, the more stable it is. Where does the data come from to calculate the “field relevant bibliometric indicators”? How do you check the “truthfulness” of these values? Guidelines, requirements for the calculation and use of these indicators

24 DARMA TAKE A BREAK FOR 5 MINS

25 DARMA ACTIVITY 1 In groups, discuss what activities and qualities could be evaluated in an assessment of a researcher in applications for the Consolidator Grant. Use the fake profile for inspiration and your own experience/ideas. Which activities and qualities have you identified? Which of these activities and qualities can be assessed using bibliometrics? HANDOUTS ERC Consolidator Grant profile and evaluation criteria, Alice’s profile, Alice’s publication list & CV Write your ideas here: and write the code: Marianne Think-outside-the-box exercise – the aim is to understand the limitations of bibliometrics 2 questions

26 DARMA Editor/reviewer; Output (peer reviewed, articles, books, anthologies, conference papers, data); Citations (c/p, h index, m index, scientific impact); member of prestigious boards; prizes/recognition; Collaboration (intra/inter org, national/international; business); innovations inc. patents (societal impact); recruiting (summer school, bridge-building); guidance/supervision (PhDs, masters); knowledge sharing; Funding; Social impact (social media, debate); Non-peer review (interviews, radio/tv, public speaking); visibility in the media, non-peer review; Organisation skills (conferences, events), keynotes; Personality/demographic academic level (educational background, publications, national guidelines); human aspects; measurable: (books, articles, social media, networking and conferences) publications; lab tools/software; dedication (she is learning Hindi); consultancy work; network; leadership; political work (development of national guidelines); loyalty; funding; bibliometric language analysis; Weighting of publication types (peer review/non peer review, citations); h index; JIF; PhD education; network analysis; purpose of the evaluation defines metrics; tools, lab techniques (look at publications); investors; publications/citations; University ranking of affiliated employers/company; publications from research groups where Alice has been the leader; network (co-author analysis/altmetrics); outreach programmes; PURE; Alice’s company (reputation, reviews); languages; non-peer-reviewed & non-academic publications; social factors; Prizes; research leadership (type/year); Keynote speaker & invited speaker.

27 DARMA ACTIVITY 2 1. Which types of publications does Alice have on her publication list? 2. Which proportion of these live up to the criteria of the ERC Consolidator Grant profile? 3. Are there any publications that do not live up to the ERC criteria that you consider relevant in an application process? 4. Of the articles that live up to the ERC Consolidator Grant profile, for which ones do you consider Alice the main author? Write your ideas here: and write the code: Marianne One group presents their results Discussion in plenum after Does the assumption of main author (from the author byline) change per field? Narrative/author statement (next slide)

28 AUTHOR STATEMENT OR NARRATIVE
DARMA AUTHOR STATEMENT OR NARRATIVE Lorna recommended the databases, designed the search, implemented the search, extracted the references, wrote the method section (related to the search) and contributed to the limitations of the article (related to the search). Lorna updated the search before publication and defended the rationale through the peer review process. All medical content was written by the co-authors

29 AUTHOR STATEMENT OR NARRATIVE
DARMA AUTHOR STATEMENT OR NARRATIVE Lorna proposed the study and identified the third-party tools; the methodology was designed in collaboration. The analysis was conducted by Lorna, with guidance from Haakon to identify variables for assessment. The manuscript was on the whole written by Lorna; Haakon used his invaluable experience to fine-tune the structure, language and readability of the text. Lorna adjusted the article after peer review

30 DARMA ACTIVITY 3a Applicants can add the journal impact factor of the sources they publish in. 1. What does the impact factor of a journal tell us? Discuss in groups and write your group answer at: and write the code The impact factor tells us how many citations the average article, review or note published in a specific journal within the previous two years has received. This tells us something about the use, visibility and relative ‘importance’ of a journal, especially when compared to others in the same field – but this can be comparable to comparing apples with pears Read more: Scully & Lodge (2005) Lorna Participants write (per group) and we summarize their responses The scientific field to which the journal belongs influences IF. Scientific journals generally rank higher than clinical journals. Self-citation is also possible, increasing the IF. Errors, misprints and inconsistencies in citations can distort the IF. IFs are biased toward journals that mainly publish review articles. Multi-author and consortia articles sometimes pose a problem (who/what should be cited) Greater availability tends to raise the IF (free electronic access, or the inclusion of a journal as part of membership of a society) A change in journal title may adversely affect the IF
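The definition above can be sketched as a calculation – citations received in year Y by items the journal published in Y−1 and Y−2, divided by the citable items (articles, reviews, notes) from those two years. The figures in the example are invented, not those of any real journal:

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y: citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1500 citations in 2017 to its 2015-2016 items,
# of which 300 were citable items.
print(journal_impact_factor(1500, 300))  # 5.0
```

Note that the numerator counts citations to everything the journal published, while the denominator counts only “citable” items – one of the reasons (alongside field differences and review bias, listed above) why the JIF can be inflated.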

31 ACTIVITY 3b HANDOUT Description of impact factors and links to sources
DARMA ACTIVITY 3b Each group is given a journal and a journal ranking to investigate. All groups search for their journal on the BFI authority list 2. Use Eigenfactor Metrics, SJR, WoS and CWTS for the journals: a) Nature Biotechnology b) Methods in Ecology and Evolution c) Waste Management 3. Compare the subject category of the journal and the journal ranking in the different tools. 4. What is the impact factor of the journal? 5. Which indicators and visualizations are available? Are they informative? HANDOUT Description of impact factors and links to sources Energy and environmental sciences (not in WoS) Methods in Ecology and Evolution (JIF 6.344) Waste Management (3.829) If three groups: 1 journal and divide the IF sources between them If 6 groups: 2 journals (3 groups journal 1, 3 groups journal 2) and divide the 4 sources between them The aim is to get them to compare the IF from different sources for the same journal. Does this affect their concepts of the quality/impact of the journal?

32 DARMA ACTIVITY 4 On the Consolidator application, applicants are encouraged to include “field relevant bibliometric indicators”. Typically an applicant will add the most famous one – the h index. The h index is a measure of the number of highly impactful papers a scientist has published. The larger the number of important papers, the higher the h-index, regardless of where the work was published. The h index is highly dependent on the number of years a researcher has been publishing, the field he/she publishes in and the citation practices of the field. It produces a single number. Alice has an h-index in Google Scholar of 8. Lorna How many publications does Alice have on her publication list? (38) Is 8 high or low? What are we comparing to? On the application, is it clear where the applicant has to get the data from? Will the indicators be comparable?

33 BECAUSE WHAT DO INDICATORS MEASURE?
DARMA BECAUSE WHAT DO INDICATORS MEASURE? CPP=18 h=8 Reminder of h

34 H INDEX FOR 340 SOIL RESEARCHERS IN WOS, GS & SCOPUS

DARMA H INDEX FOR 340 SOIL RESEARCHERS IN WOS, GS & SCOPUS Alice has the following h-indices: GS=8 WoS=5 Scopus=4 When we compare to a larger cohort, the h is put into context. Minasny et al 2013:

35 DARMA ACTIVITY 4 Calculate the IQP index for Alice. Use the information on the following slide. What does this indicator tell you about her?

36 Use the following information to calculate Alice's IQP:

DARMA AN ALTERNATIVE: IQP Use the following information to calculate Alice's IQP: Age: years since PhD defence = 12 Papers: all publications (WoS) = 9 Citations: total citations (WoS) = 134 1st subject area = Environmental Sciences 2nd subject area = Biotechnology applied microbiology 3rd subject area = Biochemistry molecular biology Based on data from WoS, using the index “cited reference search” Age: current year minus year of PhD defence Papers: publications indexed in WoS Citations: number of citations in Web of Science Subject areas: the top three areas in which you have been cited Antonakis, J. and Lalive, R. (2008), Quantifying Scholarly Impact: IQp Versus the Hirsch h. J. Am. Soc. Inf. Sci., 59: 956–969. doi: 10.1002/asi.20802

37 JIF of 3 subject categories
DARMA JIF of 3 subject categories Cited just under the average paper for that speciality. C 134 Number of papers performing above average for the specialty Results: show after the groups have calculated and interpreted the numbers? These numbers need to be accompanied by a narrative! 1 is the same as the specialty average, 1.44 would be nearly 1½ times more, 2 would be twice as often. Out of the 9 papers in the analysis, 6 are performing above the average for the specialty. Refer to the IQp calculator benchmarks

38 DARMA A FINAL THOUGHT… The Metric Tide (2015) - Recommendation 5 “Individual researchers should be mindful of the limitations of particular indicators in the way they present their own CVs and evaluate the work of colleagues. When standard indicators are inadequate, individual researchers should look for a range of data sources to document and support claims about the impact of their work. (All researchers)” Leiden manifesto (2015) – Principle 7 “Base assessment of individual researchers on a qualitative judgement of their portfolio. The older you are, the higher your h-index, even in the absence of new papers. The h-index varies by field […] It is database dependent: there are researchers in computer science who have an h-index of around 10 in the Web of Science but of 20–30 in Google Scholar. Reading and judging a researcher's work is much more appropriate than relying on one number. […]” Marianne

