Library Technology Reports, vol. 49, no. 2, p. 26
Chapter 5: Ongoing Evaluation and Access
Jill Emery
Graham Stone

Abstract

Many resources take a year before they become embedded into the curriculum or research process, and user feedback may not be positive from the outset. Chapter 5 of Library Technology Reports (vol. 49, no. 2), “Techniques for Electronic Resource Management,” asserts that the best evaluation of a product or service happens within a three- to five-year time frame. It recommends tracking downtime and availability by saving e-mail alerts and messages from the vendor about scheduled downtime and maintenance and tallying them annually. It also suggests keeping a file on each resource provider that includes all pertinent correspondence, along with notifications of routine maintenance and records of specific troubleshooting of problems that have arisen.


Many resources can take at least one academic year before they become embedded into the curriculum or research process. Usage data and user feedback may not be positive from the outset, and a resource should not be cancelled after the first year unless there are very good budgetary reasons for doing so. Sometimes there is a significant time lag between the purchase of a resource and the point at which all potential users have access to it, which also depresses usage during the first year. Given the development cycles of most electronic products and services, the first twelve months after the release of a brand-new product tend to be accompanied by significant changes and upgrades to the product or service.1 The concept of a “soft launch” or “soft rollout” has become predominant even in libraries.2

The best evaluation of a product or service happens within a three- to five-year time frame. The arc of usage and user behavior is not fully realized until the third year of activity for any given resource or service. Evaluating user behavior and usage data is important in building up a detailed picture of the appropriateness of the resource over time and is invaluable when it comes time to review the resource in the future.

Despite all good-faith efforts, activation and establishment of access to electronic resources at any given institution are sometimes overlooked or missed. Part of this stage should be spent double-checking that access is available for all purchased resources and, if a collection of resources has been purchased, that the collection still has the same titles or makeup as initially purchased. Patron-driven e-book packages require more frequent hands-on management than A&I and full-text databases because their content is more fluid: titles move in and out of the package depending on the profile established.


Types of Evaluation

There are various ways to evaluate electronic resources and how they are used locally. As electronic resources come to consume the majority of library collections budgets, determining which evaluation strategy best captures the usage profile at your institution is key to creating a successful evaluation model.

At present, many electronic databases and journals can be evaluated using COUNTER-based statistics. However, COUNTER data is just one mechanism for evaluating electronic resources. Journal publishers often promote ISI Impact Factors to demonstrate the relevance of their content.3 Another measure that provides citation-related data is the Eigenfactor score. UKSG and COUNTER are working together on the Journal Usage Factor (JUF) project, which is assessing how online journal usage statistics might form the basis of a new measure of journal impact and quality. In addition to article- and journal-level metrics, there is also a growing number of altmetrics and analytic tools.4
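
To make COUNTER data usable in an evaluation, the raw reports usually need to be tallied locally. Below is a minimal Python sketch of that step, assuming a COUNTER journal report exported as CSV; the file name "jr1_report.csv", the column names, and the number of header rows to skip are assumptions, since layouts vary by COUNTER release and vendor.

    import csv

    def tally_counter_csv(path, skip_rows=7):
        """Sum the reporting-period totals per journal from a COUNTER CSV export."""
        totals = {}
        with open(path, newline="", encoding="utf-8-sig") as f:
            for _ in range(skip_rows):  # skip the COUNTER report header block
                next(f)
            for row in csv.DictReader(f):
                journal = row.get("Journal", "").strip()
                total = row.get("Reporting Period Total", "0").strip()
                if journal:
                    totals[journal] = totals.get(journal, 0) + int(total.replace(",", "") or 0)
        return totals

    for title, uses in sorted(tally_counter_csv("jr1_report.csv").items()):
        print(title, uses, sep="\t")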

Lastly, many libraries also choose to aggregate web page statistics, discovery tool statistics, OpenURL usage, and ILS usage to add to the use evaluation of any given title or resource. This aggregated evaluation approach is explained in more depth in chapter 6. In order for the evaluation to be most beneficial to your institution, the electronic resources manager must first decide which data points to use to evaluate electronic usage and then set consistent methods of collecting and reporting these figures from one year to the next. One way to determine the criteria for evaluating your electronic resources is the balanced-scorecard approach, which allows a variety of factors to be weighed in evaluating your electronic resource collection.5
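
As a concrete illustration of the aggregated approach, the following Python sketch combines several locally chosen data points into a single weighted score per resource, in the spirit of a balanced scorecard. The metrics, weights, and figures are illustrative assumptions only; each library must choose its own.

    # Illustrative weights for locally chosen data points (assumptions, not a standard).
    WEIGHTS = {"counter_use": 0.5, "link_resolver_clicks": 0.3, "user_rating": 0.2}

    # Hypothetical figures for two resources.
    resources = {
        "Database A": {"counter_use": 12000, "link_resolver_clicks": 800, "user_rating": 4.1},
        "Database B": {"counter_use": 3000, "link_resolver_clicks": 1500, "user_rating": 3.5},
    }

    def normalize(metric):
        """Scale one metric to 0-1 so differently sized figures can be combined."""
        high = max(r[metric] for r in resources.values()) or 1
        return {name: r[metric] / high for name, r in resources.items()}

    scores = {name: 0.0 for name in resources}
    for metric, weight in WEIGHTS.items():
        for name, share in normalize(metric).items():
            scores[name] += weight * share

    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")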

Project COUNTER

www.projectcounter.org

Eigenfactor

http://eigenfactor.org

UKSG

www.uksg.org

Project COUNTER: Journal Usage Factor

www.projectcounter.org/usage_factor.html


Check the Implementation

Many electronic resource managers set up review periods to check access to resources on a schedule. With new purchases, it is best to check the established access points for your institution about a month after purchase to ensure that access is working correctly from web pages and the library catalog. Part of this evaluation should include checking the remote authentication process as well as the links. If an institution has purchased an ERM system, a tickler can be established to remind staff to perform this check for access provision. Once it has been determined that access is fully set up, a monthly, quarterly, half-yearly, or annual review of the resource should occur, depending on the resource or package purchased, to make sure that the content has remained the same and all of the access points are working correctly.
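
A scheduled check like this can be partly automated. The Python sketch below walks a plain-text list of access points and reports the HTTP status of each; the file name "access_points.txt" is a hypothetical convention, and a real check should also be run from off campus so that the remote authentication path is exercised, not just the links.

    import urllib.request

    def check_access(path="access_points.txt", timeout=15):
        """Report the HTTP status of every URL listed one-per-line in the file."""
        with open(path) as f:
            urls = [line.strip() for line in f if line.strip()]
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    status = resp.status
            except Exception as exc:  # unreachable host, 4xx/5xx, timeout, etc.
                status = f"FAILED ({exc})"
            print(status, url, sep="\t")

    check_access()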


Ask Your Users

In addition to gathering data from the sources listed above, it is vital for any library to ask its users what their electronic resource needs are and whether they feel those needs are being met by the electronic resources provided. This information gathering can occur in a highly structured way, using an evaluation tool such as LibQUAL+ or a standard set of survey questions distributed each quarter or semester; more informally, through an open-ended comments section on the library’s web pages; or via tracking mechanisms for the access problems and issues faced by end users.

LibQUAL+

www.libqual.org/home

Make sure that you have a system to record users’ comments received via e-mail as well as anecdotal comments from meetings with faculty and students. Tracking these comments is especially useful in establishing how your users actually feel about a given resource. You may have an institutional customer relationship management (CRM) system to do this, but more often than not, a simple spreadsheet will suffice.
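
If a spreadsheet is the chosen route, the habit that matters is appending every comment as it arrives. A minimal Python sketch of such a log follows; the CSV file name and columns are assumptions to adapt locally.

    import csv
    from datetime import date

    def log_comment(resource, source, comment, path="user_comments.csv"):
        """Append one dated user comment to the running CSV log."""
        with open(path, "a", newline="", encoding="utf-8") as f:
            csv.writer(f).writerow([date.today().isoformat(), resource, source, comment])

    log_comment("Database A", "faculty meeting", "Coverage of pre-1990 issues is missing.")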

Again, the librarians at any given library should come to an agreement on which strategy to use to gather information from users and make sure a consistent process is used at each evaluation period to ensure coherent reporting of the feedback.


Check Changes to Coverage of Resources or Platform Migration

For A&I and full-text databases, a yearly or twice-yearly check is normally sufficient to ensure that access is occurring as it should and that the platform still fully supports the functionality of the content. Databases are bought and sold and move from one supplier to another, and a regular check is a good way to catch these changes.

Sometimes an A&I database may be available from more than one provider and may or may not have a full-text component available. Part of this evaluation stage should include looking at the other platforms available to make sure the resource is being used to best advantage. Moving an A&I database to another platform may result in more direct linking to purchased full text or a more robust controlled vocabulary. There are also times when an A&I or full-text database has moved from one provider to another and the move has shifted either the focus of the content or the available access to it. The annual review can catch these more subtle changes and perhaps land a resource on your review list, as described in chapter 7.

Journal titles move fairly regularly between different hosting services as well as from one publisher to another. An initiative begun by UKSG to set guidelines for journals moving from one publisher to another, now known as the Transfer Code of Practice, has made some headway in getting publishers and providers to announce these changes in advance. However, not all publishers and platform providers follow the recommended guidelines, which means that spot-checking the journal titles of any given publisher is a worthwhile endeavor for an ERM team.

UKSG Transfer Code of Practice

www.uksg.org/transfer

Coverage of journal packages can be checked on a quarterly or twice-yearly basis, depending on the package purchased, to catch any content coverage changes that might have occurred. This can be done in coordination with reports provided by your OpenURL provider that capture changes to coverage and holdings data. Checking of packaged content most commonly occurs at the renewal cycle, to verify which titles should and should not be part of the package. The major subscription agents have created package support services, and package title verification is a good reason to enlist a subscription agent, especially if you have multiple packages that renew at roughly the same time. By having staff selectively check titles in various journal packages, you can confirm coverage and holdings in a routine manner.6
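
Where the provider supplies a current title list (for example, a KBART export), a coverage check can be reduced to a comparison against the list you purchased. The Python sketch below reports titles dropped from or added to a package by ISSN; the file names are hypothetical, and "print_identifier" assumes a KBART-style column heading, so adjust to your local exports.

    import csv

    def issns(path, column="print_identifier"):
        """Collect the set of ISSNs found in the given column of a CSV title list."""
        with open(path, newline="", encoding="utf-8-sig") as f:
            return {row[column].strip() for row in csv.DictReader(f) if row.get(column, "").strip()}

    purchased = issns("entitlements.csv")
    current = issns("provider_title_list.csv")

    print("Dropped from package:", sorted(purchased - current))
    print("Added to package:", sorted(current - purchased))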


Track Downtime and Availability

Downtime can be checked or evaluated in a number of ways. One way is to save e-mail alerts and messages from the vendor about scheduled downtime and maintenance and tally these up annually. It is wise for electronic resource managers to set up, if possible, some form of electronic resource troubleshooting mechanism, whether through e-mail, an ERM tool, or a software application. This way, you can produce an annual accounting of downtime or significant service interruptions for any given journal package, platform, or provider. It is extremely important for electronic resource managers to report these findings back to the provider, especially at the renewal period. It may be possible, although rare, to receive discounts or other forms of compensation, such as free months of access.
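
The annual tally itself can be automated if the alert messages are saved somewhere predictable. The following Python sketch assumes you file provider alerts as .eml messages in a folder named "downtime_alerts" (a hypothetical convention) and counts notices per sender domain as a rough proxy for notices per provider; real triage would also separate scheduled maintenance from unplanned outages.

    from collections import Counter
    from email import message_from_binary_file
    from pathlib import Path

    tally = Counter()
    for eml in Path("downtime_alerts").glob("*.eml"):
        with open(eml, "rb") as f:
            sender = message_from_binary_file(f).get("From", "unknown")
        tally[sender.split("@")[-1].strip("> ")] += 1  # count by sender domain

    for provider, count in tally.most_common():
        print(f"{provider}: {count} notices")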

With any purchase of an e-journal collection, there is a strong likelihood that some journal titles will have moved to a new publisher or platform from one year to the next. However, most journal publishers allow for a two- to three-month grace period at the beginning of every year before terminating access. Therefore, it is best to establish a routine check of your journal package access in April or May of any given year, rather than in January or February, as was routine for print subscriptions.

For patron-driven e-book plans, content is normally added and subtracted on a monthly or quarterly basis through the record loads performed in the catalog. It is wise to spot-check URLs when the record loads occur to ensure that proxy scripts are running correctly and that access from these records represents the established profile of titles.
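
Spot-checking lends itself to a small script run after each load. The Python sketch below draws a random sample from the URLs in the load file and fetches each through the proxy; the proxy prefix follows the common EZproxy "login?url=" pattern, but the host, file name, and sample size shown are assumptions.

    import random
    import urllib.request

    PROXY_PREFIX = "https://proxy.example.edu/login?url="  # hypothetical proxy host

    def spot_check(path="loaded_urls.txt", sample_size=25):
        """Fetch a random sample of loaded URLs through the proxy and report results."""
        with open(path) as f:
            urls = [line.strip() for line in f if line.strip()]
        for url in random.sample(urls, min(sample_size, len(urls))):
            try:
                with urllib.request.urlopen(PROXY_PREFIX + url, timeout=15) as resp:
                    print(resp.status, url)
            except Exception as exc:
                print("FAILED", url, exc)

    spot_check()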

E-book packages may also update on a monthly or quarterly basis. Knowing the update schedule, you should coordinate the access check of each of the e-book records in your online catalog with the knowledgebase used by your OpenURL resolver.


Communicate with the Vendor

The electronic resource manager should keep a file or dossier on each resource provider that includes all pertinent correspondence, along with notifications of routine maintenance and records of troubleshooting for specific problems that have arisen. If your ERM tool has a place to capture this information, through either a notation system or a file upload system, it can be stored there as well. For example, Knowledge Base + in the United Kingdom has developed a facility for the community to add (and share if required) notes and e-mails recording correspondence and user feedback, together with the license information, so that the electronic resources manager can access this information in one place. With each renewal, an overview of performance and of issues that have arisen during the year should be shared back with the vendor or provider. Specific feedback from your end users may help with future developments and changes to improve the product or service offered.

Knowledge Base +

www.jisc-collections.ac.uk/knowledgebaseplus

Many vendors and electronic resource service providers have user groups and user group meetings, held as part of major conferences or as stand-alone events. If you are using these services, it is highly recommended that you join the user group and become involved in committees of interest, since this is the best way librarians have to partner with service suppliers, help define the direction of tool development, and provide much-needed feedback on the user experience. Of course, not everyone can be on a user group, so talk to your colleagues at other universities and at regional and national meetings of consortia to see who sits on which groups. It might be useful to create a list of contacts on user groups and advisory boards in a shared area for all to consult. Often, the dialogue on publisher library advisory boards flows only one way: librarians comment on new products and ideas but do not feed back ideas from the user community. Make sure that, if you are on a group, you represent your community by consulting with colleagues in regional consortia or at informal meetings so that you can take the concerns of the community along to the publisher or vendor.

This information can also be used when negotiating the cost for the next fiscal cycle or as part of the overall review of a product or service for retention.


Notes
1. Stephen Abram, “Product Development Life Cycle,” Stephen’s Lighthouse (blog), December 14, 2010, accessed November 6, 2012, http://stephenslighthouse.com/2010/12/14/product-development-life-cycle/.
2. S. E. Smith, “What Is a Soft Launch?” WiseGEEK, accessed November 6, 2012, www.wisegeek.com/what-is-a-soft-launch.htm.
3. “Impact Factor,” Wikipedia, last modified December 11, 2012, accessed December 12, 2012, http://en.wikipedia.org/wiki/Impact_factor; Jo Cross, “Impact Factors: The Basics,” in E-Resources Management Handbook, ed. Graham Stone, Rick Anderson, and Jessica Feinstein (Newbury, UK: UKSG, 2009), doi:10.1629/9552448-0-3.17.1.
4. Grace Baynes, “Scientometrics, Bibliometrics, Altmetrics: Some Introductory Advice for the Lost and Bemused,” Insights 25, no. 3 (November 2012): 311–315, doi:10.1629/2048-7754.25.3.311.
5. Tom Bielavitz, “The Balanced Scorecard: A Systemic Model for Evaluation and Assessment of Learning Outcomes?” Evidence Based Library and Information Practice 5, no. 2 (2010): 35–46.
6. Maria Collins and William T. Murray, “SEESAU: University of Georgia’s Electronic Journal Verification System,” Serials Review 35, no. 2 (June 2009): 80–87, doi:10.1016/j.serrev.2009.02.003.




Published by ALA TechSource, an imprint of the American Library Association.