Social networks as a potential source of bias in peer review
Sándor Soós, Zsófia Viktória Vida, Beatriz Barros, Ricardo Conejo, Richard Walker

The organizational background
Hungarian Academy of Sciences (HAS) Library and Information Centre, Dept. of Science Policy and Scientometrics
Professional research evaluation and research monitoring for HAS and national research policy
Science policy studies (Academic Career Research Programmes)
Impact assessment of policy measures (institutional level)
„Translational scientometrics”
Basic research in scientometrics

Scientific background and activity
TTO: institutional successor of the „Budapest School of Scientometrics” (Tibor Braun, Wolfgang Glänzel, András Schubert)
Founder and editorial group of Scientometrics (Springer–Akadémiai)
European Projects (FP7):
Science in Society Observatorium (SISOB)
Evaluating the Impact and Outcomes of SSH research (IMPACT-EV)
Open Access Policy Alignment Strategies for European Union Research (PA
COST Actions:
Analyzing the dynamics of information and knowledge landscapes
PEERE

Peer review and the SISOB project
Observatorium for Science in Society: big data-driven discovery of actual social factors in S&T communities affecting outcomes
Related to three specific areas: (1) Mobility, (2) Knowledge sharing, (3) Peer review
SISOB workbench

Peer review and the SISOB project
Toolkit (both methodological and technological) to identify:
Biases (favoritism/discrimination) based on reviewer vs. author characteristics
Walker, R., Barros, B., Conejo, R., Neumann, K., & Telefont, M. (2015). Bias in peer review: a case study. F1000Research, 4.
Biases based on the social organization (relations) of the scientific community
Social networks as a potential source of bias in peer review (in preprint version)
Data provided by the publishing house Frontiers

Biases by characteristics

Biases by social relations: indicator development
Problem with the traditional perspective: „the more central the author is, the more rewarded in peer review” (scores are given to papers, not to authors)
[Figure: example co-authoring network; av. score = 10]

Biases by social relations: indicator development
Solution to the difficulty above: turn it upside down!
Paper centrality (instead of author centrality): for each paper P with authors {A1, …, An} and author centralities AC = {C(A1), …, C(An)}, the maximum value of AC was taken for each centrality measure.
New question, operationalized: do reviewer scores for papers reflect whether the authors include high-centrality ones?
Links and scores are thus made independent and empirically commensurable.

Peer review and SISOB
The study made in

Introduction – The goal
The SISOB project is developing a methodology to systematically evaluate possible biases in different kinds of peer review systems. We have developed a toolkit of techniques to detect social network effects on peer review outcomes. We would like to detect whether review outcomes are affected by authors' prestige (their "centrality" in their respective communities), by their social relationships with reviewers (the distance between authors and reviewers in coauthoring and author–reviewer networks), and by their membership of specific subcommunities.

Hypotheses
H1: Mean reviewer scores for papers by a given author are directly related to the lead author's position (centrality) in these networks.
H2: Mean reviewer scores for papers by a given author are inversely related to the reviewer's distance from the lead author.
H3: Reviewers belonging to the same subcommunity as an author will give higher scores than reviewers belonging to different subcommunities.

Data
Frontiers database
Frontiers Open Access Publishing House (N = 4550)
Period: June 25, 2007 – March 19, 2012
The data included: the name of the journal to which the paper was submitted, the article type (review, original research, etc.), the names and institutional affiliations of the authors and reviewers of specific papers, individual reviewer scores, and the overall review result (accepted/rejected).
WebConf database
Six computer science conferences (AH2002, AIED2003, CAEPIA2003, ICALP2002, JITEL2007, SINTICE07, UMPAP2011) (N = 1204)
Period
The data included: the name of the conference, the type of contribution, the names, genders and institutional affiliations of the authors and reviewers of specific contributions, individual reviewer scores, and the final decision (accepted/rejected).
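The field lists above suggest a simple per-review record. A minimal sketch of how such a record might be represented in Python (field names are illustrative only, not the actual Frontiers or WebConf schema):

```python
# Hypothetical record layout covering the fields listed above;
# NOT the real Frontiers/WebConf data model.
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    venue: str                      # journal (Frontiers) or conference (WebConf)
    article_type: str               # e.g. "original research", "review"
    authors: list[str]              # author names
    author_affiliations: list[str]
    reviewer: str
    reviewer_affiliation: str
    score: float                    # individual reviewer score
    accepted: bool                  # overall review result / final decision
```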

Modelling author–reviewer relations
Co-authorship networks
We constructed a list of all authors in the two databases. For each author, we generated a list of other authors with whom the author had previously published at least one paper referenced in the Scopus database. On this basis, we identified the co-authorship relationships present in the two databases.
Two unweighted (undirected) co-authoring graphs. Frontiers: # nodes = ; WebConf: # nodes = 2149.
Both graphs included a giant connected component (Frontiers: N = 8 690; WebConf: N = 543) and disconnected "islands". Hereafter we use the connected component.
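As a rough illustration of this construction (not the authors' actual pipeline), the graph and its giant component could be built with networkx, assuming a hypothetical `coauthors` mapping from each author to the coauthors found for them in Scopus:

```python
# Minimal sketch: build an undirected, unweighted co-authoring graph from
# per-author coauthor lists and keep only its giant connected component.
import networkx as nx

def giant_coauthor_component(coauthors: dict[str, list[str]]) -> nx.Graph:
    G = nx.Graph()
    G.add_nodes_from(coauthors)                          # every author is a node
    for author, partners in coauthors.items():
        G.add_edges_from((author, p) for p in partners)  # unweighted co-authorship edges
    giant = max(nx.connected_components(G), key=len)     # the giant component
    return G.subgraph(giant).copy()
```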

Indicators
To test our hypotheses, we used: author centralities, author–reviewer distances.
We defined: paper centralities, paper distances, subcommunities.

Author centrality
For each author, we computed the following centrality measures, characterising the author's position in the coauthoring network: degree centrality, betweenness centrality, closeness centrality, eigenvector centrality, PageRank centrality.
H1: Mean reviewer scores for papers by a given author are directly related to the lead author's position (centrality) in these networks.

Paper centrality
C_paper = max_i(AC_i)
where C_paper is the paper centrality and AC_i is the value of a given centrality measure for the i-th author of the paper.
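A sketch of how these indicators could be computed with networkx (the exact parameter settings used in the study are not given, so the defaults below are assumptions):

```python
# Sketch: the five author centralities named above, plus paper centrality
# taken as the maximum over a paper's authors (C_paper = max_i AC_i).
import networkx as nx

def author_centralities(G: nx.Graph) -> dict[str, dict[str, float]]:
    return {
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "pagerank": nx.pagerank(G),
    }

def paper_centrality(authors: list[str], centrality: dict[str, float]) -> float:
    # Authors outside the giant component have no centrality value and are skipped.
    values = [centrality[a] for a in authors if a in centrality]
    return max(values) if values else float("nan")
```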

Results – Paper centrality affects reviewer scores
Rank correlation between different measures of paper centrality and mean review scores.
[Tables: correlation values for Frontiers and for WebConf]
bw: betweenness centrality; d: degree centrality; c: closeness centrality; e: eigen centrality; p: PageRank centrality
Signif. codes: 0 ‘***’ ‘**’ 0.01 ‘*’
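The correlation analysis itself is a plain rank correlation; a minimal sketch (Spearman's rho is assumed here, the slides only say "rank correlation"):

```python
# Illustrative only: rank correlation between paper centrality and mean review score.
from scipy.stats import spearmanr

def centrality_score_correlation(pairs: list[tuple[float, float]]):
    centralities, scores = zip(*pairs)      # pairs of (paper centrality, mean review score)
    rho, p_value = spearmanr(centralities, scores)
    return rho, p_value                     # p_value maps onto the '*', '**', '***' codes
```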

Results – Communities
We conjectured that large-scale patterns might be suppressing possible network biases. We identified subcommunities and repeated the experiment described earlier for each individual subcommunity. The results were qualitatively similar: the Frontiers data showed no sign of correlation; the WebConf data showed signs of a small but significant correlation.

[Figure] Layout of a subcommunity in the Frontiers co-authoring network; node size is proportional to PageRank centrality.

[Figure] Largest community detected by the “fast greedy” community detection algorithm applied to the WebConf database; node size is proportional to PageRank centrality.
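“Fast greedy” usually refers to the Clauset–Newman–Moore greedy modularity algorithm; a sketch using the networkx implementation (the authors may have used igraph's community_fastgreedy instead):

```python
# Sketch: detect subcommunities by greedy modularity maximisation and
# index each author by the community it belongs to.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def subcommunities(G: nx.Graph) -> list[set]:
    return [set(c) for c in greedy_modularity_communities(G)]   # largest community first

def community_index(communities: list[set]) -> dict[str, int]:
    return {node: i for i, c in enumerate(communities) for node in c}
```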

Results – distance vs. score
Author–reviewer distance affects review scores (H2)
We interpreted “distance” as the length of the shortest path connecting authors and reviewers. We defined paper distance as the smallest distance in the network between the reviewer and the authors of the paper. We compared the paper distances against the score given by the reviewer to the paper. The comparison showed no correlation between the two variables, either for Frontiers (rank correlation = -0.05) or for WebConf (rank correlation = -0.07).
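A sketch of the paper-distance indicator as described above (shortest path from the reviewer to the nearest author of the paper):

```python
# Sketch only: paper distance = min over authors of the shortest-path length
# from the reviewer to that author in the co-authoring network.
import networkx as nx

def paper_distance(G: nx.Graph, reviewer: str, authors: list[str]) -> float:
    distances = []
    for a in authors:
        if reviewer in G and a in G:
            try:
                distances.append(nx.shortest_path_length(G, reviewer, a))
            except nx.NetworkXNoPath:
                pass                         # reviewer and author are disconnected
    return min(distances) if distances else float("inf")
```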

Results – distance vs. score in subcommunities
We submitted the giant component of the two networks to community detection. We interpreted reviewer–author proximity as a binary variable.
[Figure: Frontiers and WebConf] Mean review scores when authors and reviewers are in different subcommunities (column 0) or in the same subcommunity (column 1).
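A sketch of this binary comparison, reusing the hypothetical community_index mapping from the community-detection sketch above:

```python
# Sketch: mean review score when the reviewer shares a subcommunity with at
# least one author (group 1) vs. when it does not (group 0).
from statistics import mean

def mean_scores_by_proximity(reviews, comm):
    # reviews: iterable of (reviewer, authors, score); comm: person -> community id
    groups = {0: [], 1: []}
    for reviewer, authors, score in reviews:
        rc = comm.get(reviewer)
        same = int(rc is not None and any(comm.get(a) == rc for a in authors))
        groups[same].append(score)
    return {k: mean(v) for k, v in groups.items() if v}
```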

Summary
There were no detectable correlations between author centrality and review scores in the Frontiers data; the WebConf data showed small but significant correlations. Neither dataset showed any significant relationship between author–reviewer distance and review scores. This suggests an absence of favoritism.

Summary
In both systems, reviewers belonging to the same subcommunity as authors gave (slightly) higher scores than reviewers coming from different subcommunities, an effect that was stronger for WebConf than for Frontiers. This effect does seem to signal some kind of bias, perhaps because reviewers are most familiar with the language, style and concepts of their own subcommunities. The effect detected was almost certainly too small to affect the outcome of the review process. It is possible, however, that biases in other peer review systems are stronger than those registered for Frontiers and WebConf; in such cases, the methodology presented here has the power to detect the bias.

Social networks as a potential source of bias in peer review
Thank you for your attention!
The current version of the workbench is available at
Acknowledgement: FP7 Science in Society Grant No (SISOB project)