Assessing Return on Investment for E-Resources: A Cross-Institutional Analysis of Cost-Per-Use Data
Patrick L. Carr
ALCTS CRS C&RL IG Midwinter Meeting, San Diego, January 9, 2011
Libraries often rely on cost-per-use (CPU) data to measure the return on investment for their e-resource subscriptions. By comparing CPU data supplied by several libraries, this presentation provides added context for CPU-based assessments. It explores what a cross-institutional CPU analysis reveals about libraries' varying returns on their subscriptions, and it considers the potential that such an analysis has to increase returns on investment.
"Print was simpler." As libraries have made the transition from print to online collections, it is common to hear the print era characterized as a simpler time in which libraries didn't need to contend with the complexities of electronic resources management (ERM).
Indeed: This has been replaced by…
Appendix B of the 2004 report of the Digital Library Federation's Electronic Resources Management Initiative illustrates the increased challenges of ERM. Indeed, the appendix shows that the straightforward, linear process of print resources management has been replaced by something far more complex, as the following slides show.
Indeed: This has been replaced by…
And this. (From Appendix B of the 2004 report of the Digital Library Federation's Electronic Resources Management Initiative.)
Indeed: This has been replaced by…
"Print was simpler." However, although print was simpler overall, there are exceptions. When it comes to use-based evaluations of library collections, I think the transition from print to online formats has actually brought a decrease in difficulty. (From Appendix B of the 2004 report of the Digital Library Federation's Electronic Resources Management Initiative.)
Indeed: This has been replaced by…
"Print was simpler." Not when it comes to use-based evaluations. Indeed, with print resources, evaluation involved an array of often unreliable, time-consuming, and convoluted methods for assessing use. (From Appendix B of the 2004 report of the Digital Library Federation's Electronic Resources Management Initiative.)
Methods for assessing print use
- Circulation data
- Table count
- Slip method
- Direct observation
- Call slip analysis
- Photocopy request analysis
- ILL request analysis
- Adhesive labels
- Surveys
- Check-off method
- Subjective impression of use
For example, a serials management textbook published in 1998 identified eleven different approaches for measuring print use. What these varying methods share is that they are costly, unreliable, and unwieldy. (Thomas E. Nisonger, Management of Serials in Libraries. Englewood, Colorado: Libraries Unlimited, 1998.)
We have far superior tools to assess use of e-resources.
With e-resources, libraries have far superior tools for measuring use. Thanks to the COUNTER standard, for example, we have a code of practice that e-resource access platforms can adopt to consistently record and exchange use information.
Cost-Per-Use: Annual Subscription Cost ÷ Annual Use
One result of this is that libraries can rely on CPU data as one metric for measuring return on investment for their subscriptions. A CPU figure is simply a calculation of a resource's cost divided by its use. Building on the patron-driven acquisition models that Rick discussed, one way to think about a CPU figure is that it enables a library to look at subscribed access as if it were patron-driven. In other words, if the library had been paying a set fee for each patron-initiated use, the CPU is the rate that the vendor would have been charging the library for each use.
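As a minimal sketch of that arithmetic (not part of the original slides), the Python below computes CPU figures from annual cost and use totals; the resource names and numbers are hypothetical placeholders, not data from the study.

```python
def cost_per_use(annual_cost: float, annual_use: int) -> float:
    """Return cost-per-use: annual subscription cost divided by annual use."""
    if annual_use <= 0:
        raise ValueError("annual use must be positive to compute a CPU figure")
    return annual_cost / annual_use

# Hypothetical example figures (placeholders, not figures from the study):
subscriptions = {
    "Example Journal Package A": {"cost": 25000.00, "use": 4300},   # full-text downloads
    "Example A&I Database B":    {"cost": 6000.00,  "use": 21500},  # searches
}

for resource, data in subscriptions.items():
    cpu = cost_per_use(data["cost"], data["use"])
    print(f"{resource}: ${cpu:.2f} per use")
```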
Cost-Per-Use: A powerful tool for assessing return on investment.
When used in the context of other metrics, CPU is a powerful tool for evaluating ROI.
Cost-Per-Use: A powerful tool for assessing return on investment. But are we using CPU data to its fullest potential?
When libraries collect CPU data, they do so in a bubble. In other words, they keep the information internal and do not share it with other libraries. But what happens if a library is able to consider its CPU data within the context of CPU data from other libraries?
Chuck Hamaker's idea: A cross-institutional analysis of CPU data
This is a question that Chuck Hamaker recently considered. In November 2010, Chuck, who is a librarian at UNC Charlotte, gave a presentation at the Charleston Conference in which he took his university's CPU data and looked at it in the context of CPU data supplied by my university, East Carolina University.
I was intrigued by the possibilities of such an analysis and decided to build on Chuck's work by also requesting CPU data from several other libraries within the UNC system. My reasoning was that the more institutions supplying CPU data, the better equipped we are to assess what this data means and how we can use it. I was able to get two other institutions to supply their CPU data.
These institutions are UNCG and UNCW.
And now would be a good opportunity for me to thank these institutions for their efforts to generate CPU data and for their willingness to share this data; it really is quite appreciated.
I supplied UNCG and UNCW with a spreadsheet similar to what you see here and asked them to provide CPUs for any of the 78 listed resources to which they maintained subscriptions. These particular resources were selected because they were the ones Chuck Hamaker had used during his initial research. The coverage ranges from which the CPU calculations were made fell primarily in 2009.
Caveat: The coverage ranges for cost and use data didn't always overlap completely.
There are a few caveats I'd like to note before getting into the data. One is that in some cases the date range used to collect use data did not overlap exactly with the date range of the subscription cost. For example, the use data might have been for the 2009 calendar year while the cost data might have covered March 2009 through February 2010.
Caveat: Institution-by-institution access sometimes differed for resources.
Another caveat, perhaps the biggest one, is that there are cases in which the libraries' subscriptions to the listed resources differed in terms of the specific content that is accessible. For example, ECU's Annual Reviews package does not include a few new titles that Annual Reviews recently began publishing, whereas the other three libraries may have had these new titles in their collections.
Caveat: Not all sources of use data were COUNTER compliant.
A third caveat is that there are some cases in which the use data was not COUNTER compliant.
We can't really use this study's results to make sweeping conclusions.
What caveats such as these add up to is that this data is not sufficient for making sweeping conclusions about ROI for subscriptions. There are too many caveats and not enough libraries participating. Rather, this data suggests general insights into the types of information about ROI that a cross-institutional CPU analysis has the potential to yield. With this word of caution in mind, let's look at some of the data. Given the large number of resources for which CPU data was collected, we're not going to be able to consider all 78 of the listed resources. Instead, I'm going to focus on selected resources from four categories: publisher journal packages, full-text aggregators, site licenses to individual journals, and indexing and abstracting databases.
Publisher Journal Packages
Cost per full-text article download. Each line shows the resource's average CPU, followed in parentheses by the institutional CPUs in the order UNC Charlotte, ECU, UNC Wilmington, UNC Greensboro; where fewer than four figures appear, not every library subscribed, and the slide does not indicate which library is missing.
American Chemical Society Web Editions: avg. $5.32 ($4.72, $4.84, $6.41)
Annual Reviews: avg. $4.21 ($4.22, $3.14, $5.27)
Blackwell Synergy: avg. $11.51 ($11.46, $7.59, $20.12, $6.88)
Cambridge University Press: avg. $7.28 ($8.59, $8.27, $5.86, $6.40)
E-Duke Scholarly Collection: avg. $5.35 ($4.73, $6.94, $4.39)
Elsevier ScienceDirect: avg. $5.69 ($3.89, $8.56, $6.08)
Oxford University Press: avg. $5.24 ($4.13, $3.77, $6.56, $6.49)
Sage: avg. $6.15 ($4.42, $4.79, $10.25, $5.15)
Springer: avg. $10.49 ($9.06, $6.64, $15.17, $11.08)
Wiley: avg. $14.43 ($14.17, $12.72, $23.64, $7.20)
Let's begin with publisher packages. You see here the four libraries' CPU data for ten publisher packages ranging from large commercial publishers like Elsevier and Springer to the university presses of Oxford, Cambridge, and Duke. As you review this data, a few things are of interest: while CPUs can differ significantly library by library, the three highest average CPUs are from commercial publishers: Wiley, Blackwell, and Springer. It's also interesting, though, that Elsevier has one of the lowest average CPUs.
Full-Text Aggregators
Cost per full-text article download; same layout as the previous table (average CPU, then institutional CPUs in the order UNC Charlotte, ECU, UNC Wilmington, UNC Greensboro, with non-subscribers omitted).
ATLA Religion: avg. $4.34 ($1.25, $1.62, $8.90, $5.61)
Communication & Mass Media Complete: avg. $0.52 ($0.03, $0.63, $0.89, $0.53)
EconLit: avg. $3.84 ($3.49, $4.18)
Literature Resource Center: avg. $3.68 ($2.91, $7.15, $0.99)
PsycArticles: avg. $0.61 ($0.27, $1.06, $0.54, $0.58)
SPORTdiscus: avg. $0.55 ($0.35, $0.96)
Here's CPU data for a selection of full-text aggregators. The CPUs here are lower, in some instances vastly lower, than the CPUs for publisher packages. I'd speculate that part of the reason is that these types of databases are often heavily used by students, resulting in high use figures. Moreover, because the titles in full-text aggregators are often embargoed, not peer-reviewed, and lacking in perpetual-access provisions, subscription costs are relatively low when you consider the number of titles that are accessible.
Journal Site Licenses
Cost per full-text article download; same layout as the previous tables (average CPU, then institutional CPUs in the order UNC Charlotte, ECU, UNC Wilmington, UNC Greensboro, with non-subscribers omitted).
Nature: avg. $3.71 ($3.08, $3.01, $5.35, $3.41)
Nature Biotechnology: avg. $24.70 ($18.00, $16.75, $39.36)
Nature Cell Biology: avg. $42.92 ($55.59, $13.10, $60.07)
Nature Genetics: avg. $20.62 ($14.82, $12.04, $35.01)
Nature Medicine: avg. $20.61 ($26.71, $7.13, $27.98)
Nature Structural & Molecular Biology: avg. $68.76 ($28.07, $109.45)
The listed resources included site-license access to a number of individual journals published by Nature Publishing Group. As you can see, except for the flagship publication Nature, the CPUs here are quite high; see, for example, the average CPUs for Nature Cell Biology and Nature Structural & Molecular Biology. ECU consistently has the lowest CPUs for these journals. The likely reason is that ECU has a school of medicine, which results in greater use of our NPG journals than is likely to occur at the other institutions.
Indexing & Abstracting Databases
Cost per search; same layout as the previous tables (average CPU, then institutional CPUs in the order UNC Charlotte, ECU, UNC Wilmington, UNC Greensboro, with non-subscribers omitted).
America, History and Life: avg. $0.41 ($0.12, $0.66, $0.20, $0.64)
Film and Television Literature Index: avg. $0.05 ($0.04, $0.07)
Historical Abstracts: avg. $0.49 ($0.13, $0.77, $0.23, $0.82)
Journal Citation Reports Web: avg. $7.99 ($5.25, $10.73)
MathSciNet: avg. $0.73 ($0.11, $0.25, $1.72, $0.85)
MLA International Bibliography: avg. $0.15 ($0.17, $0.21, $0.09)
Philosopher's Index: avg. $1.04 ($0.69, $0.16, $2.26)
RILM Abstracts of Music Literature: avg. $0.08 ($0.10)
Ulrich's Periodicals Directory: avg. $2.02 ($4.32, $0.86, $0.89)
Web of Science: avg. $1.80 ($0.92, $2.24, $1.18, $2.87)
Lastly, here's CPU data for a selection of indexing and abstracting services. Because these resources do not include full text, use could not be measured by article downloads; instead, use was measured by searches. While the library-by-library CPUs of a few of these resources are nearly equal, it's interesting and puzzling that in other cases they differ quite significantly.
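The cross-institutional comparison itself is simple arithmetic once each library's CPUs are pooled. Below is a hedged sketch, not anything described in the presentation, of how pooled figures might be averaged and screened for wide library-to-library spreads; the library names, the figures, and the twofold threshold are all illustrative assumptions.

```python
from statistics import mean

# Hypothetical per-institution CPU figures (illustrative only, not the study's data).
# Keys are resources; values map institution -> cost-per-use in dollars.
cpu_data = {
    "Example Publisher Package": {"Library A": 11.46, "Library B": 7.59,
                                  "Library C": 20.12, "Library D": 6.88},
    "Example A&I Database":      {"Library A": 0.12, "Library B": 0.66, "Library C": 0.20},
}

SPREAD_THRESHOLD = 2.0  # flag resources whose CPUs differ by more than 2x across institutions

for resource, by_institution in cpu_data.items():
    values = list(by_institution.values())
    avg = mean(values)                      # cross-institutional average CPU
    spread = max(values) / min(values)      # ratio of highest to lowest CPU
    flag = " <-- wide CPU spread" if spread > SPREAD_THRESHOLD else ""
    print(f"{resource}: avg ${avg:.2f}, spread {spread:.1f}x across "
          f"{len(values)} institutions{flag}")
```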
Three Questions
To conclude my presentation, I'd like to pose three questions. I must caution you that I don't have confident or conclusive answers to any of them.
Three Questions: What does a cross-institutional CPU analysis actually tell us?
First, what does a cross-institutional CPU analysis actually tell us? Sure, it's interesting to see such data, but what conclusions can we reasonably draw from it? Because we don't have the underlying cost or use data, we can't say for sure what factors are behind any differences in CPU for a particular resource. Take, for example, a CPU that is inordinately low within the context of the other institutions. To what extent is this due to the institution's heavy use of the product, and to what extent is it due to special pricing that the institution was able to negotiate with the vendor? We simply don't know, and, because many licenses prohibit disclosure of pricing or use data, we're often not allowed to know. All that we do know based on CPU data is a library's ROI, at least to the extent that CPU is an accurate measure of ROI. Beyond differences in ROI, another thing the analysis would seem to bring to light is the extent to which there are inequalities in vendor pricing models. We saw from the data I shared that it was not unusual for institution-by-institution CPUs for a resource to differ by twofold, threefold, or more. What this would seem to suggest, at least according to the very caveated and limited data we have to look at, is that many vendors still have not developed pricing models that fairly reflect the value that institutions get from subscriptions.
Three Questions: What does a cross-institutional CPU analysis actually tell us? How can we use what it tells us?
My second question builds on the first one. Assuming we do have at least some sense of what a cross-institutional CPU analysis tells us, how do we use this information to our benefit? One possibility is in pricing negotiations. In the attempt to negotiate a reduced renewal price for a resource with a high CPU, a library or consortium might be able to point to the data and say to the vendor, "Look, the CPUs libraries are getting for your product are markedly higher than the CPUs for comparable products. In light of this information, it's clear that your product is overpriced." Of course, the flip side of this is that the vendor of a product with a relatively low CPU might be able to justify significant price increases on similar grounds. Beyond pricing negotiations, another potential use of cross-institutional CPU data is that it might help librarians get a sense of what CPU range is typical for a resource. This information, in turn, would enable them to make more informed renewal and cancellation decisions. It might also bring to the surface resources that should be more heavily promoted or made more easily accessible to users.
Three Questions: What does a cross-institutional CPU analysis actually tell us? How can we use what it tells us? Where do we go from here?
My final question is: where do we go from here? In other words, what's the next step in this project? One possibility is that the usefulness of the data is outweighed by the time needed to collect and coordinate it. If this is the case, there's not really anywhere else to go, and the project is over. That's one possibility. Another is that cross-institutional CPU data is of significant interest to the library community. If this is the case, carrying the project forward would likely entail a number of complex tasks, such as coordinating the accurate collection of further institutions' CPU data, aggregating this data, and making it available to those institutions that have contributed CPUs. Who would be tasked with doing all of this work? I don't want it to be me, I can tell you that. And, in fact, I don't think it could be any one person acting alone. Perhaps it's something that one of the professional organizations or consortia might take up.
Questions/Comments
Patrick L. Carr
Head of Electronic & Continuing Resource Acquisitions
Joyner Library, East Carolina University
Greenville, North Carolina 27858