InCites™

Presentation transcript:

InCites™

Workshop Objectives: After this workshop you will be able to:
– Understand the basic components of InCites (slide 3)
– Navigate the two principal modules: Research Performance Profiles and Global Comparisons (RPP = slide 20, GC = slide 48)
– Understand the normalised indicators and how to use them (slide 11)
– Perform analysis of authors/institutions/subject areas/collaborations using standard and normalised indicators (slide 28)
– Understand the preset reports and what they inform on
– Create custom reports (slide 40)
– Save and share reports with colleagues (slide 42)
– Understand the use of citation data for the 2014 Research Excellence Framework and how InCites may be used to inform universities on submissions (slide 67)
2

Objective: Understand the basic components of InCites
InCites is a customised, citation-based research evaluation tool on the web that enables you to analyse institutional productivity and benchmark your output against peers worldwide. All bibliographic and citation data are drawn from the Web of Science.
The InCites platform offers 3 modules:
– Research Performance Profiles (RPP)
– Global Comparisons (GC)
– Institutional Profiles (not covered by this workshop)
3

Research Performance Profiles
A custom-built dataset created by Thomson Reuters to match customer specifications. Datasets can be compiled using the following search criteria:
– Address (extracting WOS records that contain at least one occurrence of an address, e.g. Univ Manchester and variants, as identified by the customer)
– Author (extracting WOS records that contain specific authors or papers as identified by the customer)
– Other datasets are available for topic and journal
Updated quarterly from date of issue. Customers can work with the InCites team to request changes for better unification to improve further updates.
InCites can include source articles published between 1981 and 2012 as indexed in the Web of Science.
4

Research Performance Profiles
RPP can be used to inform on:
– The overall performance of research at an institution
– The performance of authors
– The performance of departments
– The performance of collaborations
– The performance of areas of research
– The performance of individual papers
– The performance of papers in specific journals
– The impact/influence of published research
– The performance of papers funded by a funding agency
5

RPP – Web of Science data
1. All document types that match the customer specification are included (articles, reviews, editorials, letters, etc.)
2. All authors indexed
– Last name + initials
– Variants included
– Name as published from 2007 forward
– Full author name display in the Author Ranking report in an Author-based dataset
3. All addresses indexed
– Author affiliation as published
– Main organisation (e.g. Univ Manchester) displayed in RPP
4. Funding information from 2008 onwards
– Funding agency as published; grant numbers from the Funding Acknowledgement
5. Web of Science Subject Area applied at journal level
– 249 WOS/JCR subject categories
– Source records inherit all journal-level categories (an article published in the Journal of Dental Research will inherit the category Dentistry, Oral Surgery & Medicine)
– Multidisciplinary journals categorised as ‘Multidisciplinary Sciences’
– For some multidisciplinary journals (Science, Nature, British Medical Journal, etc.) articles are reassigned a new WOS category based on analysis of citing/cited relationships
6. Journal Impact Factor from the 2010 JCR
7. Author Keywords and KeyWords Plus
6

RPP – Web of Science Data

RPP Key Metrics
– Journal Expected Citation Rate: average citations for records of the same type, from the same journal, published in the same year
– Category Expected Citation Rate: average citations for records of the same type, from the same category, published in the same year
– Percentile in Field: citation performance relative to records of the same document type, from the same category, published in the same year. The most cited paper is awarded the lowest percentile (0%) and the least cited or uncited papers the highest percentile (100%)
– h-index
– Journal Actual/Journal Expected: ratio of the actual citation count of a paper to the expected count for papers published in the same journal, year and document type
– Category Actual/Category Expected: ratio of the actual citation count of a paper to the expected count for papers from the same category, year and document type
8

Global Comparisons (GC)
Global Comparisons contains aggregated comparative statistics for institutions, countries and fields of research.
– Built by Thomson Reuters and common to all customers: all customers see the same data in GC
– Bibliographic and citation data drawn from the Web of Science
– File depth from …; updated annually
– Data for Articles, Reviews and Research Notes
– Use Institutional Comparisons to compare the performance of an institution or group of institutions overall, across fields or within fields
– Institutional name variants unified (main organisation)
– Use National Comparisons to compare the performance of more than 180 countries and 9 geopolitical regions overall, across fields or within fields
Multiple subject category schemes:
– WOS – 249 subject categories
– Essential Science Indicators – 22 broad categories
– Regional categories (UK, Australia, Brazil)
– OECD
9

Global Comparisons Key Metrics
– Web of Science documents
– Times Cited
– Cites per document (average impact)
– % Documents Cited (at least 1 citation)
– Impact Relative to Subject Area (average cites of an institution in a subject area compared to the expected impact in that subject area)
– Impact Relative to Institution (average cites of papers in a field compared to the average cites overall for the institution)
– % Documents in Subject Area (market share)
– % Documents in Institution
– % Documents Cited Relative to Subject Area
– % Documents Cited Relative to Institution
– Aggregate Performance Indicator: this metric normalises for period, document type and subject area and is a useful indicator to compare institutions of different age, size and subject focus.
10
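Read as ratios, the two relative-impact metrics above can be written as follows (a plausible rendering of the slide's definitions, not an official InCites formula):

    \[
      \text{Impact Relative to Subject Area} =
        \frac{\text{cites per document (institution, subject area)}}
             {\text{cites per document (world, subject area)}}
    \]
    \[
      \text{Impact Relative to Institution} =
        \frac{\text{cites per document (institution, subject area)}}
             {\text{cites per document (institution, all subject areas)}}
    \]

A value above 1 means papers in that slice are cited more, on average, than the benchmark in the denominator.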

Objective: Understand the normalised indicators and how to use them
‘The number of times that papers are cited is not in itself an informative indicator; citation counts need to be benchmarked or normalised against similar research. In particular: citations accumulate over time, so the year of publication needs to be taken into account; citation patterns differ greatly in different disciplines, so the field of research needs to be taken into account; and citations to review papers tend to be higher than for articles and this also needs to be taken into account.’ Source: REF Pilot Study
11

NORMALISATION
It is necessary to normalise absolute citation counts for:
– Document type (reviews are cited more than articles; some document types are cited less readily)
– Journal in which the paper was published
– Year of publication (citations accumulate over time)
– Category (there is a marked difference in citation activity between categories)
Golden rule: compare like with like
12
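One way to write this like-with-like normalisation as a formula (my own summary of the rule above, not wording from the slides):

    \[
      \mathrm{NCS}_{p} \;=\; \frac{c_{p}}{\bar{c}(f_{p},\, y_{p},\, d_{p})}
    \]

where c_p is the citation count of paper p and \bar{c}(f, y, d) is the mean citation count of all papers in the same field f, publication year y and document type d. A score above 1 means the paper is cited more than its like-for-like benchmark.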

Is this a high citation count?
This paper has been cited 4148 times. How does this citation count compare to the expected citation count of other articles published in the same journal, in the same year? It is necessary to normalise for:
– Journal = Nature Materials
– Year = 2007
– Document type = article
13

Create a benchmark – the expected citations
Search for papers that match the criteria and run the Citation Report on the results page.
14

Create a benchmark – the expected citations
Articles published in Nature Materials in 2007 have been cited on average … times. This is the Expected Count. We compare the total citations received by a paper to what is expected:
4148 (Journal Actual) / … (Journal Expected) = …
The paper has been cited … times more than expected. We call this ratio Journal Actual/Journal Expected.
15
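As a minimal sketch of that calculation (the expected value below is a made-up placeholder, since the actual 2007 Nature Materials average is missing from this transcript):

    # Illustrative only: journal_expected is a hypothetical value, not the real
    # average citation count for 2007 Nature Materials articles.
    actual_cites = 4148
    journal_expected = 150.0
    journal_actual_over_expected = actual_cites / journal_expected
    # A ratio above 1 means the paper is cited more than expected for papers
    # from the same journal, publication year and document type.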

Percentile in Field
How many papers in the dataset are in the top 1%, 5% or 10% of their respective fields?
The slide shows an example of the citation frequency distribution of a set of papers in a given category, database year and document type. The papers are ordered from uncited/least cited on the left to the most highly cited papers in the set on the right, and each paper is assigned a percentile within the set (100% down to 0%).
– In any given set, there are always a few highly cited papers (top 1%)
– In any given set, there are always many low-cited or uncited papers (bottom 100%)
Only the document types article, note and review are used to determine the percentile distribution, and only those same document types receive a percentile value. If a journal is classified into more than one subject area, the percentile is based on the subject area in which the paper performs best, i.e. the lowest value.
16
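A sketch of that percentile logic under the assumptions stated above (an illustration, not InCites' exact implementation):

    def percentile_in_field(paper_cites, cites_in_set):
        """Percentile of one paper within a set of papers sharing the same
        category, database year and document type: 0% = most cited,
        100% = least cited / uncited."""
        more_cited = sum(c > paper_cites for c in cites_in_set)
        return 100.0 * more_cited / len(cites_in_set)

    def best_percentile(percentiles_by_category):
        """A paper in a journal mapped to several subject areas keeps the
        subject area where it performs best, i.e. the lowest percentile."""
        return min(percentiles_by_category.values())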

No All-Purpose Indicator
The slide lists a number of different purposes a university might have for evaluating its research performance. Each purpose calls for particular kinds of information: identify the question the results will help to answer and collect the data accordingly.
17

InCites Access
Enter username and password, or use IP authentication.
18

InCites Start Page
These are the two principal modules. Click on ‘Get Started’ to open a module.
19

Objective: Navigate the two principal modules
1. Research Performance Profiles
RPP is custom built for each institution:
– Article-level statistics
– Aggregations for the whole dataset, or custom subsets
Run a preset report on the whole dataset, or create a custom report to analyse a subset of papers.
20

Executive Summary – an overall synopsis
107,781 source papers (timespan …); 949,293 citing papers.
Tables highlight frequently occurring authors, subject areas and most cited authors.
Green bar = papers published per year (scale on left side). Blue bar = citations received by papers published in that year (scale on right side).
21

Source Article Listing – paper-level metrics
Order the papers by the metrics available in the drop-down menu: Times Cited, Percentile in Field, 2nd Generation Citations.
Click on an article title to navigate to the record in the Web of Science. Each record shows the article's bibliographic information plus its citation data and normalised metrics.
22

Source Article Listing Key Metrics – for individual paper evaluation
Metric | Measure | Identifies
Times Cited | Total cites to the paper | Highest cited papers
Second Generation Cites | Total cites to the citing papers | Long-term impact of a paper
Journal Expected Citations | Average Times Cited count for papers from the same journal, publication year and document type | Papers which perform above or below what is expected compared to similar papers from the same journal and period
Category Expected Citations | Average Times Cited count for papers from the same category, publication year and document type | Papers which perform above or below what is expected compared to similar papers in the same subject category and period
Percentile in Subject Area | Percentile a paper is assigned to among papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%) | Papers which perform the highest or lowest in their field based on the paper's citation count
Journal Impact Factor | Average cites in 2010 to papers published in the previous 2 years in a given journal | Journals which have high or low impact in …

Summary Metrics – a dashboard of indicators
Citation data and normalised metrics give an overview of the overall performance of the papers in the dataset.
Percentile graph: for each percentile range, the “expected” share of papers (articles, reviews and notes) is equal to that same percentile. We would expect 5% of this institution's papers to rank in the 5th percentile; however, 6.79% of this institution's papers rank in the 5th percentile. 6.79% − 5% = 1.79%, so the number of papers this institution has placed in the top 5% of all papers published exceeds what is expected by 1.79%. This 1.79% is what is presented on the graph, in green because it exceeds the expected value; below-expected values would be presented in red.
24
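A small sketch of that calculation, assuming we already have one percentile value per paper (the threshold arithmetic mirrors the 6.79% − 5% = 1.79% example above):

    def excess_at_threshold(paper_percentiles, threshold):
        """Share of papers at or better than `threshold` (e.g. 5 for the top 5%),
        minus the expected share; positive = green bar, negative = red bar."""
        observed = 100.0 * sum(p <= threshold for p in paper_percentiles) / len(paper_percentiles)
        return observed - threshold

    # e.g. excess_at_threshold(percentiles, 5) -> 1.79 for the institution above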

Summary Metrics Key Indicators (for an author, institution, department, …)
Metric | Measure | Identifies
% Cited / % Uncited | % of papers in the dataset that have received at least one cite | Amount of research in the dataset with no impact
Mean Percentile | Average percentile for the set of papers in the dataset. A percentile is assigned to a paper within a set of papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%) | Average ranking of papers in the dataset: how well the papers perform compared to papers from the same category/year/document type
Average cites per document | Efficiency (or average impact) of the papers | Datasets with high/low average impact (use when making comparisons)
Mean Journal Actual/Expected Citations | Average ratio for papers in the dataset. The ratio relates actual citations to each paper to what is expected for papers in the same journal, publication year and document type | Papers that perform above (ratio > 1) or below (ratio < 1) the expected journal citation count
Mean Category Actual/Expected Citations | Average ratio for papers in the dataset. The ratio relates actual citations to each paper to what is expected for papers in the same category, publication year and document type | Papers that perform above (ratio > 1) or below (ratio < 1) the expected category citation count
Percentage of articles above/below what is expected | 1% of papers are expected to be in the top 1% percentile. The green bar indicates by what percentage the papers perform better than expected; the red bar indicates the percentage by which they perform worse than expected at a given percentile range | How well the papers in the dataset perform at the specific percentile ranges (1%, 5%, 10%, 50%)
25

Funding Agency Listing
Order the funding agencies by the indicators in the drop-down menu. Click on the WOS documents column to view the papers funded by an agency.
26

Article Type Listing
Use the Article Type Listing to examine the weighting of each document type in the dataset and differences in performance/impact between the document types.
27

Objective: Perform analysis of authors/collaborations/subject areas using citation data and normalised metrics 28

Author Ranking Report
Order authors using the citation and normalised metrics in the menu. It may be necessary to establish thresholds to focus on authors who achieve a minimum parameter, such as:
– Papers published
– Citations received
Create an ‘Author Ranking Report’ in Custom Reports and establish the thresholds required.
29

Author Ranking Report 30

Author Ranking Report for an Author Dataset
– Full author names
– Only authors who have been identified by the customer appear in this report
– Less contamination from co-authors at other institutions than is seen in an Address dataset
31

Author Ranking Key Metrics
Metric | Measure | Identifies
Times Cited | Total cites to an author's papers | Authors with the highest/lowest total cites to their papers
WOS documents | Total number of papers by an author in the dataset | Authors with the highest/lowest number of publications
Average cites per document | Efficiency (or average impact) of an author's papers | Authors with the highest/lowest average impact
h-index | An author's research performance. Publications are ranked in descending order by times cited; the value of h is equal to the number of papers (N) in the list that have N or more citations | Authors with the highest impact and quantity of publications in a single indicator
Journal Actual/Expected Citations | Average ratio for an author's papers. The ratio relates actual citations to each paper to what is expected for papers in the same journal, publication year and document type | Authors whose papers perform above (ratio > 1) or below what is expected in their respective journals. Useful when comparing authors in different fields or at different career lengths
Category Actual/Expected Citations | Average ratio for an author's papers. The ratio relates actual citations to each paper to what is expected for papers in the same category, publication year and document type | Authors whose papers perform above (ratio > 1) or below what is expected in their respective subject categories. Useful when comparing authors in different fields or at different career lengths
Average percentile | Average percentile for the set of an author's papers. A percentile is assigned to a paper within a set of papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%) | Authors whose papers perform at the top or bottom of their respective fields
32
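Since the h-index definition in the table above is compact, here is a minimal sketch of it in Python (illustrative, not InCites code):

    def h_index(citation_counts):
        """h = the largest N such that N papers have at least N citations each."""
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for n, cites in enumerate(ranked, start=1):
            if cites >= n:
                h = n
            else:
                break
        return h

    h_index([10, 8, 5, 4, 3])  # -> 4: four papers have at least 4 citations each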

Time Series and Trend Report
– Total citations received by papers published in an individual year, e.g. papers published in 1981 have received 23,789 citations. Raw data in the table below.
– Papers published per year, e.g. 1981 = 1833 documents. Raw data in the table below.
– Average citations to papers published in an individual year, e.g. papers published in 1981 have been cited an average of … times. Raw data in the table below. Use this indicator to identify the year(s) in which the research had the highest average impact.
33

Collaborating Institutions Report
Order the collaborations using the indicators in the menu. The Collaborating Institutions report is extremely important not only in identifying the most frequent collaborating institutions, but also those collaborations producing the most influential research. In practical terms, one can identify the collaborations that produce the most return on investment; sorting by Category Actual/Expected Cites is an easy way to do this. Customise this report to focus on collaborations that meet a minimum threshold.
34

Collaborating Countries Report Order the country level collaborations using the indicators in the menu. Customise this report to focus on collaborations that meet a minimum threshold. 35

Collaboration Reports Key Metrics
Metric | Measure | Identifies
Times Cited | Total cites to the set of collaboration papers | Institutions/countries with which the research has the most impact (cites)
WOS documents | Total number of papers published in collaboration with an institution/country | Institutions/countries with which your researchers collaborate the most
Average cites per document | Efficiency (or average impact) of the papers | Institutions/countries with which the research has the highest/lowest average impact
h-index | Performance of a set of papers. Publications are ranked in descending order by times cited; the value of h is equal to the number of papers (N) in the list that have N or more citations | Institutions/countries with which the collaboration has the highest impact and quantity of publications, as measured in this single-number indicator
Journal Actual/Expected Citations | Average ratio for the collaboration papers. The ratio relates actual citations to each paper to what is expected for papers in the same journal, publication year and document type | Collaborations with an institution or country whose papers perform above or below what is expected compared to similar papers in their respective journals; collaborations with the best return on investment
Category Actual/Expected Citations | Average ratio for the collaboration papers. The ratio relates actual citations to each paper to what is expected for papers in the same category, publication year and document type | Collaborations with an institution or country whose papers perform above or below what is expected in their respective subject categories
Average percentile | Average percentile for the set of collaboration papers. A percentile is assigned to a paper within a set of papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%) | Collaborations whose papers on average rank high (towards 0%) or low (towards 100%) with regard to their total cites in the fields the papers belong to
36

Subject Area Ranking Report Order the subject areas using the indicators in the menu. Use this report to determine the intensity of publication output for each subject area and compare the performance of papers across disciplines. 37

Journal Ranking Report Order the journals using the indicators in the menu. Use this report to identify the journals in which the source papers are published and compare the performance of papers in these journals using the standard and normalised metrics. 38

Impact and Citation Ranking Reports
949,293 citing papers in the dataset. Examine the citing papers to determine:
– Who is influenced (authors, institutions)
– Where the influence is (countries)
– What is influenced (fields, journals and article types)
39

Objective: Create Custom Reports
1. Specify a report type from the menu
2. Select the metrics to be included in the report
3. Set the time period
4. Use the delimiters to create a custom dataset
5. Preview the papers that match the parameters specified, run the report, or save the selections
40

Create Custom Reports- Preview Documents Use the Refine Document Collection to refine your custom dataset Save your Refined Collection to ‘Folders’ 41

Objective: Save and share reports with colleagues 42

Folders
– My Saved Reports: save reports you generate
– My Saved Custom Report Selections: save selections for reports you frequently run
– My Saved Document Collections: save collections (subsets) of documents
– Shared Reports
– Shared Custom Report Selections
– Shared Document Collections
43

Save Selections Save your selection to ‘My Folders’ Provide a title for your saved selection 44

Open a Custom Report Click on the title of report to open it Create a folder, share the report or delete 45

Shared Reports Click on the title of any report in the ‘Shared Reports’ folder to open it 46

Create PDFs
You can print, export to Excel, or create a PDF of any report.
47

Objective: Navigate the two principal modules
2. Global Comparisons
– Institutional Comparisons: compare output and impact for institutions
– National Comparisons: compare output and impact for countries
– Updated on an annual basis; … is the current file
– WOS documents include articles, reviews and notes only
48

Institutional Comparisons
Compare the overall impact and productivity of a single UK institution for the period …:
– Select the Comparison tab
– Select UK
– Select Univ Manchester
– Select All Years (cumulative graph)
49

Institutional Profiles – a single institution
View standard citation data and the accompanying normalised metrics:
– 85.37% of the papers have been cited.
– The impact of documents from Univ Manchester relative to the world is greater than 1, indicating that documents from this institution have a higher ratio of cites per document than the world average.
– The percentage of documents cited relative to the world is greater than 1, indicating that documents from this institution received more citations per document than the world average.
– The aggregate performance indicator (API) measures the impact of an institution or country relative to an expected citation rate for that institution or country. The indicator is normalised for field differences in citation rates as well as for size differences among entities and time periods. According to the current definition of API, in a given time period the total citations accrued by all papers, in all fields, is divided by the sum of the average citation rates for each paper, respective to their fields and time periods.
– The API for Univ Manchester is greater than 1, indicating that the papers are performing above expectation.
50
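Written out, the API definition quoted above amounts to the following (my own rendering of that sentence, not an official formula):

    \[
      \mathrm{API} \;=\; \frac{\displaystyle\sum_{p} c_{p}}
                              {\displaystyle\sum_{p} \bar{c}(f_{p},\, y_{p})}
    \]

where c_p is the number of citations accrued by paper p in the period, and \bar{c}(f_p, y_p) is the average citation rate for papers in the same field and time period as paper p. An API above 1 means the set of papers is cited more than its field- and period-matched expectation.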

Institutional Comparisons – multiple institutions compared in a field of interest
Compare the overall performance of selected UK institutions in a particular field:
– Select the Comparison tab
– Select UK
– Select the institutions of interest
– Select a subject (WOS, ESI, RAE 2008)
– Select All Years (cumulative)
51

Institutional Comparisons – multiple institutions compared in a field of interest
Generate graphs for each indicator in the table. Use the ‘Subject Metrics’ to see how papers from each institution perform in that subject compared to what is expected in that subject area.
52

Institutional Comparisons
Compare the trended performance of selected UK institutions in a field:
– Select the Comparison tab
– Select UK
– Select the UK institutions of interest
– Select a field (WOS, ESI, RAE 2008)
– Select 5-year groupings (or use the time period to select a preferred time period)
Trended graphs are useful for tracking changes over time, illustrating changes that may have arisen from policy decisions, hiring of staff, investment, etc.
53

Institutional Comparisons
Compare the overall performance of a single institution in multiple areas of research:
– Select the Institution tab
– Select UK
– Select Univ Manchester
– Select fields (ESI works best)
– Select a time period (overall or trended)
Use the ‘% in institution’ graph to examine the areas of research with a strong focus at that institution.
54

Institutional Comparisons
Compare the trended/overall performance of all institutions in a single field:
– Select the ‘Subject Area’ tab
– Select UK or another UK grouping (Russell Group, etc.)
– Select All United Kingdom, or All for the other grouping
– Select a time period (overall or trended)
Use the ‘impact relative to field’ graph to identify institutions that have an impact greater than what is expected in that field (above 1).
55

Global Comparisons
Examine the overall performance of a single country during the period …:
– Select the Compare tab
– Select UK
– Select England
– Select All Years (cumulative)
7.37% of all Web of Science (world) documents have England in the Address field. The impact of documents from England relative to the world is greater than 1, indicating that documents from England have a higher ratio of cites per document than the world average. The percentage of documents cited relative to the world is greater than 1, indicating that documents from England received more citations per document than the world average.
56

Global Comparisons Compare the trended performance of a single country in a subject area for a preferred period of time –Select UK –Select England –Select field (ESI, WOS, RAE 2008) –Select in 5 year groupings 57

Global Comparisons Use the % of documents in Country indicator to examine the changes in a field of research over time in that country 58

Global Comparisons Compare the overall performance of multiple countries for the period –Select Comparison Tab –Select country grouping –Select countries of interest –Select time period Overall (Cumulative) 59

Global Comparisons Use the ‘Impact Relative to World’ indicator to identify countries that have a higher ratio of cites per document than the world average cites per document (red line in graph) 60

Global Comparisons Compare the trended performance of multiple countries in a subject area for a preferred period of time –Select Subject Area Tab –Select country grouping –Select ‘All’ grouping –Select a field (WOS, ESI, OECD) –Select in 5 year groupings (or any other preferred time period) 61

Global Comparisons Use the % documents in country to track changes in a field of research over time between countries. 62

Global Comparisons Compare overall performance of selected country groupings for time period –Select Comparison Tab –Select country groupings –Select All Years (Cumulative) 63

Global Comparisons
Use the ‘% documents in world’ graph to examine each grouping's share of the world's total research output.
64

Global Comparisons Compare the trended performance of selected country groupings in a subject area for a preferred period of time –Select Comparison Tab –Select country groupings –Select Field (WOS, ESI, OECD) –Select in 5 year groupings or any preferred time period 65

Global Comparisons
Use the ‘% documents in Subject Area’ indicator to examine changes in each territory's share of papers in an area of research over time.
66

Objective: Understand the use of citation data for the 2014 Research Excellence Framework and how InCites may be used to inform universities on submissions
67

Research Excellence Framework 2014
Purpose: a new system for assessing the quality of research in higher education institutions in the UK.
– Informs the UK funding bodies' allocation of research grant funding (£1.76 billion for research)
– Conducted by HEFCE, SFC, HEFCW & DEL
– 36 Units of Assessment
– Process of expert review by expert panels
Assessment criteria: 3 elements
– Output: assesses the quality of research outputs in terms of their ‘originality, significance and rigour’ with reference to international research quality standards. Weighting 65%
– Impact: significant additional recognition will be given where researchers have built on excellent research to deliver demonstrable benefits to the economy, society, public policy, culture or quality of life. Weighting 20%
– Environment: assesses the research environment in terms of its ‘vitality and sustainability’, including its contribution to the vitality and sustainability of the wider discipline or research base. Weighting 15%
Research outputs: details of up to FOUR research outputs produced by each member of submitted staff during the publication period (1 January 2008 to 31 December 2013).
68

Use of Citation Data by Panels
Some panels will consider the number of times an output has been cited, with the use of appropriate benchmarks. Expert review remains the primary means of assessing ‘originality, significance and rigour’.
Panels recognise the limited value of citation data for:
– Recently published outputs (…)
– Output types for which no citation data are available
– The variable citation patterns in different fields of research
– The possibility of negative citations
– The limitations for outputs in languages other than English
– Equality implications, from ‘Analysis of data from the pilot exercise to develop bibliometric indicators for the REF: The effect of using normalised citation scores for particular staff characteristics’
69

REF Units of Assessment 2014
Units of Assessment that may use citation data to inform assessment:
– Sub-panel 1: Clinical Medicine
– Sub-panel 2: Public Health, Health Services and Primary Care
– Sub-panel 3: Allied Health Professions, Dentistry, Nursing and Pharmacy
– Sub-panel 4: Psychology, Psychiatry and Neuroscience
– Sub-panel 5: Biological Sciences
– Sub-panel 6: Agriculture, Veterinary and Food Science
– Sub-panel 7: Earth Systems and Environmental Sciences
– Sub-panel 8: Chemistry
– Sub-panel 9: Physics
– Sub-panel 11: Computer Science and Informatics
– Sub-panel 17: Geography, Environmental Studies and Archaeology
– Sub-panel 18: Economics and Econometrics
Sub-panel 17 will only use citation data for physical geography. See ‘Process for gathering citation data for REF 2014’.
70

Pilot Exercise to develop bibliometric indicators for the REF
Report published …; … institutions participated. Data were collected from WOS and Scopus and normalised:
– ‘54. The number of times that papers are cited is not in itself an informative indicator; citation counts need to be benchmarked or normalised against similar research. In particular: citations accumulate over time, so the year of publication needs to be taken into account; citation patterns differ greatly in different disciplines, so the field of research needs to be taken into account; and citations to review papers tend to be higher than for articles and this also needs to be taken into account.’
– ‘55. We can normalise a citation count by dividing it by the average number of citations obtained by all items included in the bibliometric database that were published in the same year, of the same document type and in the same field as the item under assessment.’
71

Source of Citations
– Self-citations (difficult to define and measure): not taken into account
– Institutional citations (citations coming from the same institution as the output)
– International citations received by the output, defined as the number of citations from papers with at least one author associated with a non-UK address
– Computed proportion of outputs in the submission that were the result of international collaboration
72

REF Pilot Study
Outcomes were calculated for 3 main models:
1. Address based
2. Submitted staff in the 2008 RAE
3. Selected papers for authors (limited to 6 papers with the highest normalised scores)
The following indicators were assessed (see the sketch after this list):
– the number of outputs included in the model
– the mean normalised citation score for these outputs
– the median citation score for these outputs
– the proportion of the outputs greater than twice the world average
– the proportion of the outputs greater than four times the world average
– the proportion of the outputs that are (as yet) uncited
– the proportion of citations to the outputs that are from the same institution
– the proportion of citations to the outputs that are from overseas
– the proportion of outputs that are an international collaboration
73
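Purely as an illustration of what those indicators look like when computed from per-output data (the field names and the choice to take the median over normalised scores are my assumptions, not the pilot's specification):

    from statistics import mean, median

    def ref_pilot_indicators(outputs):
        """Each output is a dict, e.g.:
        {'ncs': 1.8,            # normalised citation score (actual / world expected)
         'cites': 12,           # total citations
         'cites_same_inst': 3,  # citations from the submitting institution
         'cites_overseas': 5,   # citations from papers with a non-UK author address
         'intl_collab': True}   # output is an international collaboration"""
        n = len(outputs)
        total_cites = max(sum(o['cites'] for o in outputs), 1)  # avoid divide-by-zero
        return {
            'outputs_in_model': n,
            'mean_normalised_citation_score': mean(o['ncs'] for o in outputs),
            'median_citation_score': median(o['ncs'] for o in outputs),
            'share_above_2x_world_average': sum(o['ncs'] > 2 for o in outputs) / n,
            'share_above_4x_world_average': sum(o['ncs'] > 4 for o in outputs) / n,
            'share_uncited': sum(o['cites'] == 0 for o in outputs) / n,
            'share_citations_from_same_institution':
                sum(o['cites_same_inst'] for o in outputs) / total_cites,
            'share_citations_from_overseas':
                sum(o['cites_overseas'] for o in outputs) / total_cites,
            'share_international_collaboration': sum(o['intl_collab'] for o in outputs) / n,
        }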

Using InCites in REF preparation
– Use the Source Article Listing to inform on bibliographic details and the raw citation count for papers
– Use Percentile in Subject Area to inform publication selection (is the paper in the top 10% or 25% of its field compared to world papers in that field?)
– Use the normalised metrics (Category Actual/Category Expected) to identify papers with a citation impact that is twice or four times the world average
– Use the Category Expected indicator to inform on the normalised score for an output
– Use the Summary Metrics to inform on the proportion of papers uncited (for a subject area/author)
– Use the Summary Metrics to inform on the median citation score for selected papers
– Use the Citation Impact reports to inform on the proportion of citations from overseas/the same institution for selected papers
– Use the Subject Area Ranking to inform on which UOA to submit to
– Use the Collaboration reports to identify papers that are the result of an international collaboration
74

Thank You! Support: