"Razzle Dazzle" on literature-based bibliometrics for Research Assessment P. Larédo Seminar "Research Assessment: What Next?" May 17-20, Washington.


1 "Razzle Dazzle" on literature-based bibliometrics for Research Assessment P. Larédo Seminar "Research Assessment: What Next?" May 17-20, Washington

2 Main Points
The (de)limited sphere of relevance of 'traditional' bibliometrics
Evolution of tools mainly pushed by a changing target: the web and "open" sources
A review of the 4 types of uses:
- evaluation of individuals
- macro-positioning
- topical mapping
- academic positioning / assessment of research structures

3 The (de)limited sphere of relevance of 'traditional' bibliometrics
Articles are only one among many outputs from research
A prerequisite and a dependence: the existence of databases
Differences in coverage: public vs private research, health/life vs natural/engineering sciences, SS&H (social sciences and humanities)
Statement 1: does this explain the de facto limited 'evaluation' uses of bibliometrics?

4 About methods and tools (1)
Since the 1980s (and co-citation / co-word analysis), improvements in two directions:
- linguistic (complementing indexing, more accurate searches)
- more diverse formats and forms of information structuration
Answering a central hypothesis: a very rapid movement from "closed" (i.e. formatted in databases) to "open" (i.e. the web) information

5 About methods and tools (2)
Observation: the inability of search engines to retrieve the "full environment" of a given issue.
Results:
- New terminology: webmetrics, data mining, etc.
- Numerous new software tools (and start-ups) for data retrieval, repatriation, storage and treatment, e.g. from Leximappe to (a) Alta Vista, and (b) Sampler and Leximin.
Statement 2: major changes are not in content but in sources

6 Categorizing Uses
A core (but 'hidden') use: evaluating individuals
Macro-positioning and related controversies
"Topical" mapping: the booming sphere of positioning analyses and of networks
A growing (?) trend: mapping and evaluation of research "structures"

7 The evaluation of individuals
From "how good is he/she": articles, citations, journals (beware of cross-field comparisons - cf. Solari/Magri, Scientometrics, 2000)
To "where is she/he positioned", i.e. "how central are his/her research themes" and "how central is she/he in the field"
The realm of co-word, co-citation and co-authorship analysis - cf. the example of Prion and Prusiner (M.A. De Looze, 2000); an illustrative sketch follows below
Statement 3: individual evaluation is the bread and butter of assessment bibliometrics
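
A minimal sketch, in Python with networkx, of one way the "how central is she/he in the field" question can be operationalised through a co-authorship network. The author names, toy data and centrality measures are illustrative assumptions, not the method used in the De Looze Prion/Prusiner study.

```python
# Illustrative sketch only: toy co-authorship network and two common
# centrality readings of "how central is this person in the field".
# Author names and data are invented; this is not the De Looze method.
from itertools import combinations
import networkx as nx

papers = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author D"],
    ["Author B", "Author C", "Author E"],
    ["Author A", "Author E"],
]

G = nx.Graph()
for authors in papers:
    for a, b in combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # co-authored again
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality: share of the field an author is directly connected to.
print(nx.degree_centrality(G)["Author A"])
# Betweenness centrality: how often an author bridges otherwise separate groups.
print(nx.betweenness_centrality(G)["Author A"])
```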

8 Macro-positioning: beware!
The example of Sir Robert May's 'indicator of efficiency of public research spending': number of publications / spending on basic research (May, 1988 and Barré, SPP, 2001)
The two levels of corrections:
- adapting the numbers
- correcting for 'other' parameters
Result: from a 1:2.14 ratio between France and the UK to a 1:1 ratio! (a toy illustration of this arithmetic follows below)
Statement 4: very appealing to policy-makers but highly controversial
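
A purely illustrative Python sketch of the arithmetic behind Statement 4. All publication counts, spending figures and correction factors below are invented; only the reported shift from roughly 1:2.14 to 1:1 comes from the slide.

```python
# Toy illustration of a May-style 'efficiency' indicator and of how the two
# levels of corrections can move the headline ratio. All numbers below are
# invented for illustration; only the 1:2.14 -> 1:1 shift is from the slide.

def efficiency(publications, basic_research_spending):
    """Raw indicator: publications per unit of basic-research spending."""
    return publications / basic_research_spending

raw_fr = efficiency(publications=100_000, basic_research_spending=10.0)
raw_uk = efficiency(publications=107_000, basic_research_spending=5.0)
print(f"raw France:UK ratio = 1:{raw_uk / raw_fr:.2f}")   # 1:2.14

# Level 1: adapting the numbers (e.g. database coverage of French journals).
# Level 2: correcting for 'other' parameters (e.g. what counts as spending).
corrected_fr = raw_fr * 1.5   # hypothetical coverage correction
corrected_uk = raw_uk * 0.7   # hypothetical spending-definition correction
print(f"corrected France:UK ratio = 1:{corrected_uk / corrected_fr:.2f}")  # ~1:1.00
```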

9 "Topical" mapping (1) Issues: topical coherence and alignment(cognitive structuration, thematic history, stars and networks), intra- field connectors and inter-field connections, emergenc eof new specialties… Central topic: research fields Growing interest around field emergence, specific problems (which do not give way to dedicated journals such as cystic fibrosis) and specific actions (such as public programmes)

10 "Topical" mapping (2) Classical sphere of co-word, with numerous clustering algorithms and more and more 'on line' linguistic modules (for full text) Multiple examples of use, but seldom within evaluation processes... Statement 5: a great potential for bibliometrics as a 'positioning tool', but dependent upon wider evaluation processes.

11 Assessment of research structures
A hypothesis to test: a growing interest in "academic positioning" of research groups, faculty departments, "full" institutions
Some markers:
- Few cases but one 'scholarly' case: Dutch faculty departments (CWTS, Van Raan)
- Quite a few institutions' (self-)evaluations looking at their global academic positioning (Dutch research councils, Norway and RCN, INSERM in France, …)
- A growing number of 'national' policies focusing on 'centres of excellence'
- See the numerous papers on the productivity/performance of departments and research groups in economics!

12 Assessment of research structures (2)
Lessons from the French case: when teaching and research structures are separated (even within universities), the academic recognition of "units" becomes one carefully analysed dimension
Limitation: a pre-conceived definition of activities. To be relevant, it requires positioning 'academic outputs' within the overall production of the structures studied. Hence the need for a preliminary characterisation of 'activity profiles'
(Hopeful) statement 6: research structures as a new (but difficult) focus for bibliometrics

13 Statements: a recapitulation
(S1) de facto limited 'evaluation' uses of bibliometrics, (S3) focused on individuals
(S4) Macro-positioning is very appealing to policy-makers but highly controversial
(S5/S2) a great potential for 'topical mapping' bibliometrics as a 'positioning tool' (thanks to software enrichment to tackle 'open' sources), but dependent upon wider evaluation processes...
(Hopeful S6) evaluation of research structures may well appear as a new (but difficult) focus for bibliometrics

