1
Measuring Electronic Resource Availability Sanjeet Mann University of Redlands SCELC Research Day March 5, 2013
2
Armacost Library infrastructure
- ILLiad interlibrary loan system
- Full text targets (databases, ejournals): 79,757 unique titles
- Serials Solutions 360 Link
- A&I databases (~77,000 titles indexed)
- Innovative Interfaces catalog/proxy (263,573 book and serial titles)
3
Why do electronic resource errors matter?
- Costs
- Frustrated expectations
- Undermined confidence
- Complicated instruction
4
Research question "How often does full text linking work?"
5
Availability studies
- Sample of items
- Available? Yes/No
- Error? Order encountered
- Probabilities (see the sketch below)
- Prioritize fixes
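A minimal sketch of the branching arithmetic, assuming hypothetical per-category error counts (only the 250/400 overall availability figure comes from the study): each citation is coded as available or as failing at the first failure point encountered, and the conditional pass rate at each branch shows where fixes would pay off most.

```python
from collections import Counter

# Hypothetical coded sample, n = 400: "ok" means available; anything
# else names the first failure point encountered (branching order).
FAILURE_POINTS = ["proxy", "source", "knowledge base",
                  "link resolver", "target", "ILLiad"]
outcomes = (["ok"] * 250 + ["proxy"] * 20 + ["source"] * 30 +
            ["knowledge base"] * 40 + ["link resolver"] * 15 +
            ["target"] * 35 + ["ILLiad"] * 10)

errors = Counter(o for o in outcomes if o != "ok")
remaining = len(outcomes)
for point in FAILURE_POINTS:
    failed = errors[point]
    # Conditional probability of surviving this branch, given the
    # item already passed every earlier failure point (Kantor 1976).
    print(f"{point}: pass rate {(remaining - failed) / remaining:.1%}")
    remaining -= failed

print(f"Overall availability: {outcomes.count('ok') / len(outcomes):.1%}")
```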
6
Development of the availability technique
- Print material availability: card catalog user surveys (reviewed in Mansbridge 1986, Nisonger 2007)
- Linear sequence (De Prospo 1973)
- Branching model (Kantor 1976)
- Applied to e-resources: 500 articles from 50 high impact journals (Nisonger 2009)
7
OpenURL performance
- OpenURL-based reasons for availability error (Wakimoto et al. 1998)
- “Digging into the Data” on link resolver failure (Trainor and Price 2010)
- NISO initiatives: KBART, IOTA, PIE-J (Chandler et al. 2011, Glasser 2012, Kasprowski 2012)
8
Usability studies focusing on e-resources
- Database link pages (Fry 2011, Ponsford et al. 2011b)
- Resolver menus (O’Neill 2009, Imler & Eichelberger 2011, Ponsford et al. 2011a)
- Discovery services (Williams & Foster 2011, Fagan et al. 2012)
- Entire process (Kress 2011)
9
Methodology
- Arts & Humanities: RILM, MLA, Philosopher’s Index
- Social Sciences: America: History & Life, EconLit, Sociological Index
- Sciences: Biological Abstracts, ComDisDome, MathSciNet
4 questions × 10 databases × 10 results = 400 citations
Sample chat reference transaction:
[18:11] redlandsreference: what is your research topic?
[18:11] meeboguest59808: Oral Motor Activity
[18:11] redlandsreference: Is this for a Communicative Disorders class?
10
Link testing http://works.bepress.com/sanjeet_mann/1/
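The testing procedure itself is documented at the bepress link above, but the mechanical part of checking links can be sketched in a few lines of Python. This is an illustration only: the file name, column name, and resolver URL are assumptions, and a 200 response still needs a human check to confirm that full text actually loaded.

```python
import csv
import requests  # third-party: pip install requests

# Hypothetical citations.csv with one OpenURL per row under "openurl".
with open("citations.csv", newline="") as f:
    for row in csv.DictReader(f):
        url = row["openurl"]
        try:
            resp = requests.get(url, timeout=10)
            status = resp.status_code       # 200 is necessary, not sufficient
        except requests.exceptions.SSLError:
            status = "SSL error"            # e.g. domain missing from certificate
        except requests.exceptions.Timeout:
            status = "timeout"              # candidate proxy error
        print(status, url, sep="\t")
```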
11
Error coding
- What is an error?
- Six error categories
- Updated criteria
12
Armacost Library failure points: P vs. O
13
Error details 1: Proxy errors
- Domain missing from forward table
- Domain missing from SSL certificate
- Timeouts trying to establish connection
14
Error details 2: Source errors
- Missing metadata
- Erroneous metadata (e.g. rft.date=0001-01-01)
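A short sketch of how source-side metadata errors like the rft.date=0001-01-01 placeholder could be flagged automatically; the resolver URL and the specific checks are illustrative assumptions, not the study's actual coding procedure.

```python
from urllib.parse import urlparse, parse_qs

def check_openurl(openurl):
    """Flag obviously missing or erroneous OpenURL metadata."""
    params = parse_qs(urlparse(openurl).query)
    problems = []
    if not (params.get("rft.atitle") or params.get("rft.title")):
        problems.append("missing title metadata")
    date = params.get("rft.date", [""])[0]
    if date.startswith("0001"):
        problems.append(f"placeholder date: {date}")
    return problems

# Hypothetical OpenURL exhibiting both error types from this slide:
url = "https://resolver.example.edu/?rft.jtitle=Some+Journal&rft.date=0001-01-01"
print(check_openurl(url))
# ['missing title metadata', 'placeholder date: 0001-01-01']
```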
15
Error details 3: Knowledge base errors
- Title not selected in knowledge base
- Title selected, but in a poorly chosen collection
- Knowledge base holdings do not reflect access entitlement (embargo, back issues, etc.)
16
Error details 4: Link resolver errors
- Confusion between two similar titles
- Unusual OpenURL syntax
17
Error details 5: Target errors
- Content not loaded (supplement, embargo)
- Records concatenated from full text and non-full-text databases
- Server downtime
18
Error details 6: ILLiad errors
- Unicode metadata not displayed properly
- rft.title used for both book title and article title, which affects chapters and dissertations (see the example below)
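For reference, the Z39.88 KEV book format separates the two titles that the problematic OpenURLs collapse into rft.title: rft.btitle carries the book (or dissertation) title and rft.atitle the chapter title. The titles below are made up.

```python
# OpenURL of the kind that confuses ILLiad: rft.title does double
# duty, so the chapter cannot be told apart from the book.
ambiguous = "rft.genre=bookitem&rft.title=A+Hypothetical+Handbook"

# Z39.88's book KEV format keeps the two titles in separate keys:
unambiguous = ("rft.genre=bookitem"
               "&rft.btitle=A+Hypothetical+Handbook"  # the book
               "&rft.atitle=An+Example+Chapter")      # the chapter
```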
19
Results
- 250/400 items available (62.5%)
- 99/400 online full text (24.8%)
20
Sampling
The necessary sample size for a yes/no condition is n = Zc² × p(1-p) / E². To use this, you need an availability rate from a small pre-test, an acceptable confidence level (95%) and an acceptable margin of error (+/- 5%). Plug values into the formula:
- p = 0.625 (250/400 successes)
- 1-p = 0.375 (150/400 errors)
- C = 0.95 (95% confidence)
- Zc = 1.96 (from a statistics textbook or http://www.measuringusability.com/pcalcz.php)
- E = 0.05 (5% margin of error)
I could have just used 360 citations…
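Worked out in a few lines, with the values straight from the slide:

```python
p, Zc, E = 0.625, 1.96, 0.05   # pre-test rate, z for 95%, 5% margin
n = Zc**2 * p * (1 - p) / E**2
print(round(n))                # 360, the "360 citations" on the slide
```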
21
Confidence
Your confidence in a study of a particular sample size is given by rearranging the previous formula for Zc: Zc = E × √(n / p(1-p)). Plug in values as before, then use a table of areas under a standard normal curve to convert Zc to a confidence probability.
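The same rearrangement in code, using the standard normal CDF in place of the printed table. It reproduces both confidence figures quoted later in the deck (98% at n = 400, 85% at n = 100), assuming the slides' one-tailed reading of the table.

```python
from statistics import NormalDist

def confidence(n, p=0.625, E=0.05):
    Zc = E * (n / (p * (1 - p))) ** 0.5
    return NormalDist().cdf(Zc)   # area under the standard normal curve

print(f"{confidence(400):.0%}")   # 98%
print(f"{confidence(100):.0%}")   # 85%
```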
22
Discussion
23
Discussion (continued)
24
Availability / Error by Discipline
25
Solutions Edit proxy forward table
26
Solutions Upgrade ILLiad
27
Solutions Customize Serials Solutions
28
Solutions Simplify result screen interface and terminology
29
Summary
- 400 citations obtained through likely keyword searches of 10 A&I databases
- 62.5% availability (98% confidence, +/- 5%)
- 24.8% downloadable full text
- Responses include fixing the proxy, knowledge base holdings and interfaces, and upgrading systems
- Strengths: quantitative + qualitative data; very flexible (n=100 allows 85% confidence)
- Weaknesses: does not account for issues with interfaces, searching or evaluation faced by actual users
30
Towards availability testing with live students
More barriers:
- confusing interfaces
- difficulty formulating searches and evaluating sources
- login errors
How to test: cognitive walkthrough + recorded task protocols; analysis informs information literacy and interface design
Deliverables: availability %, branching model, usability report
31
Questions
Works cited and data set: http://works.bepress.com/sanjeet_mann