Slide 1
Arnold H. Rots & Sherry L. Winkelman
Chandra Data Archive, Smithsonian Astrophysical Observatory
2015-08-07, IAU XXIX 2015, FM3
Slide 2: Context
For the Chandra Data Archive we have:
- A complete bibliography for the entire mission, covering more than 50 journals
- Complete linking to the individual datasets in the archive, using persistent identifiers
- Extensive bibliographic metadata, paired with the datasets' metadata in the archive
- Support for custom compound datasets and for submission of higher-level data products
- Bidirectional harvesting of links with the ADS
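The bidirectional paper-to-dataset linking described above can be sketched as a pair of indexes. The bibcodes and dataset PIDs below are hypothetical placeholders, not actual Chandra archive records:

```python
from collections import defaultdict

# Hypothetical (bibcode, dataset PID) link pairs; identifiers are made up
# for illustration and do not correspond to real Chandra observations.
links = [
    ("2013ApJ...999..001X", "ivo://cxc.harvard.edu/obsid/01234"),
    ("2013ApJ...999..001X", "ivo://cxc.harvard.edu/obsid/05678"),
    ("2014MNRAS.888..002Y", "ivo://cxc.harvard.edu/obsid/01234"),
]

# Build both directions so either side can be queried.
papers_to_datasets = defaultdict(set)
datasets_to_papers = defaultdict(set)
for bibcode, pid in links:
    papers_to_datasets[bibcode].add(pid)
    datasets_to_papers[pid].add(bibcode)

# Which papers used a given dataset?
print(sorted(datasets_to_papers["ivo://cxc.harvard.edu/obsid/01234"]))
# → ['2013ApJ...999..001X', '2014MNRAS.888..002Y']
```

Keeping both indexes is what makes the link harvesting with the ADS bidirectional: articles can be resolved to datasets and datasets back to the articles that used them.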
Slide 3: Bibliography Objectives
More or less in chronological order:
- Complete the record of a mission or observatory: usually entrusted to the librarians; a compilation of articles selected on the basis of a carefully defined set of criteria
- Provide observatory performance metrics: extracted from the existing bibliography; basically numbers of papers and numbers of citations
- Provide useful research information: this requires an additional skill set
Slide 4: Number of Chandra Science Papers [figure]
Slide 5: Average Citation Count per Article [figure]
Slide 6: Limited Information
These metrics can only provide limited information:
- We know the character of the papers changes as the mission ages
- They may be enhanced by impact factors
- But they don't provide information on how, and how well, the repository's data are being used
Slide 7: Enter the Science
Turning the bibliography into a research tool requires one more component: linking articles to (individual!) datasets.
The crucial benefit: it allows linking bibliographic metadata with observational metadata.
Consequently:
- A rich parameter space for scientific data mining
- Opportunities for more informative performance metrics
Slide 8: New Parameters for Metrics
Some of the most useful pieces of information:
- Observing date and publication date
- Exposure time
- Instrument
- Observing mode
The next three slides show what the additional information allows one to do.
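Pairing the archive's observational metadata with the bibliography's publication dates is what makes a quantity like "time till first publication" computable. A minimal sketch, with entirely illustrative records (the ObsIds, dates, and exposure times are invented):

```python
from datetime import date

# Hypothetical records pairing archive metadata (observing date, exposure time)
# with bibliographic metadata (publication dates of linked papers).
observations = [
    {"obsid": 1, "obs_date": date(2010, 3, 1), "exposure_ks": 50,
     "pub_dates": [date(2011, 6, 1), date(2013, 2, 1)]},
    {"obsid": 2, "obs_date": date(2010, 9, 15), "exposure_ks": 120,
     "pub_dates": []},  # not (yet) published
]

def days_to_first_publication(obs):
    """Days from the observation to its first linked publication, or None."""
    if not obs["pub_dates"]:
        return None
    return (min(obs["pub_dates"]) - obs["obs_date"]).days

print([days_to_first_publication(o) for o in observations])
# → [457, None]
```

The same joined records also carry exposure time, instrument, and observing mode, so each of the per-instrument breakdowns shown later in the talk is just a filter over this structure.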
Slide 9: Time till First Publication [figure]
Slide 10: Continued Publication [figure]
Slide 11: Percentage of Exposure Time Published [figure]
Slide 12: Getting Back to this Graph
The next slides show what else can be learned.
Slide 13: Amount of Unique Exposure Time (in ks) Published Annually [figure]
Slide 14: Percentage of Available Exposure Time Published Annually [figure]
Slide 15: The Answer is… 42
Regarding articles with archival content, in 2013:
- 20% of papers presented new data
- 20% of papers presented a mix of new and old data
- 60% of papers presented previously published data
Slide 16: Suggested Metrics
(These metrics are indicated in slide 11.)
- Median time τ till first publication
- Percentage of exposure time published at 4τ
- Percentage of exposure time published more than 5 times at 5τ
- Or: the time delay after which 50% of the exposure time has been published more than 5 (TBD) times
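The first two suggested metrics can be sketched directly from per-observation records. A minimal illustration, assuming a hypothetical list of records with exposure time and days-until-first-publication (the numbers are invented, not Chandra statistics):

```python
from statistics import median

# Hypothetical per-observation records: exposure time in ks and days until
# first publication (None if not yet published). Values are illustrative.
records = [
    {"exposure_ks": 100, "days_to_pub": 300},
    {"exposure_ks": 50,  "days_to_pub": 700},
    {"exposure_ks": 80,  "days_to_pub": 450},
    {"exposure_ks": 120, "days_to_pub": None},
]

# Median time till first publication (tau), over published observations only.
published = [r["days_to_pub"] for r in records if r["days_to_pub"] is not None]
tau = median(published)

# Percentage of total exposure time published within 4*tau.
total_ks = sum(r["exposure_ks"] for r in records)
within_ks = sum(r["exposure_ks"] for r in records
                if r["days_to_pub"] is not None and r["days_to_pub"] <= 4 * tau)
pct_at_4tau = 100 * within_ks / total_ks

print(tau, round(pct_at_4tau, 1))
# → 450 65.7
```

The "published more than 5 times" variants would require a per-observation publication count rather than only the date of first publication, but follow the same weighted-by-exposure-time pattern.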
Slide 17: Differentiate
It is now trivial to calculate these metrics for different types of instruments and observations. We found, for instance:
- The percentage of exposure time published is not the same for different instruments
- Grating observations have a longer median time till first publication: the analysis takes more work
- DDT observations have a shorter median time: they are hot subjects, and have shorter (or no) proprietary time
Slide 18: Caveats
However, metrics should only be considered in context:
- 93% of exposure time published is great for a pointed telescope like Chandra, but cannot be expected (and was never intended) for an all-sky monitor
- It takes resources, at least until we get better text-analysis tools
- It needs to be a collaboration between librarians and scientists, which is what we are arguing for: both will benefit, and it is worth the effort
19
As a ResearchTool There are a host of other things that would be extremely helpful: Increasing bibliographic metadata through text analysis Devising a mechanism that allows variable granularity in PIDs and data retrieval Encouraging users to incorporate PIDs in manuscripts But that is a different story (told in poster DB1.05) 2015-08-07Rots & Winkelman - IAU XXIX 2015, FM319