
Where’s the EASY Button? Uncovering E-Book Usability

Kat Landry Mueller (klmueller@shsu.edu) is Associate Professor, Electronic Resources Librarian, Newton Gresham Library, Sam Houston State University, Zachary Valdes (zav001@shsu.edu) is Associate Professor, E-Resources Metadata Management Librarian, Newton Gresham Library, Sam Houston State University, Erin Owens (eowens@shsu.edu) is Associate Professor, Access Services Coordinator and Scholarly Communications Librarian, Newton Gresham Library, Sam Houston State University, and Cole Williamson (wcwilliamson@ualr.edu) is Assistant Professor, Research and Scholarly Communications Librarian, Ottenheimer Library, University of Arkansas at Little Rock.

E-book platforms have multiplied among vendors and publishers, complicating not only acquisitions and collection development decisions, but also the user experience. Using a methodology of task-based user testing, the researchers sought to measure and compare user performance of eight common tasks on nine e-book platforms: EBSCO eBooks, ProQuest Ebook Central, Gale Virtual Reference Library (GVRL), Oxford Reference, Safari Books Online, IGI Global, CRCnetBASE, Springer Link, and JSTOR. Success and failure rates per task, average time spent per task, and user comments were evaluated to gauge the usability of each platform. Findings indicate that platforms vary widely in terms of users’ ability and speed in completing known-item searches, navigation tasks, and identification of specialized tools, with implications for library acquisition and user instruction decisions. Results also suggest several key vendor design recommendations for an optimal user experience. The study did not aim to declare a “winning” platform, and all the platforms tested demonstrated both strengths and weaknesses in different aspects, but overall performance and user preference favored ProQuest’s Ebook Central platform.

E-book platforms have multiplied among vendors and publishers, complicating not only acquisitions and collection development decisions, but also the user experience. Recurring anecdotal discussions among Library faculty at Sam Houston State University remained inconclusive regarding various platforms’ ease or intuitiveness of use. Researchers sought to measure and compare common tasks across nine different e-book platforms using task-based user testing. User behavior observation and direct quotes, along with quantitative data such as average time per task and success/failure rates for task completion, informed researchers of the ease, intuitiveness, and duration for eight tasks for each of the platforms tested. The researchers hope that the findings of this study will provide valuable information for other libraries making collection development and instruction recommendations or decisions regarding these various platforms, while also serving as a mode of feedback to the platforms’ vendors and publishers.

Literature Review

Scope of the Literature Review

This literature review focuses on the past works most relevant to the current study, based on the goal pursued and/or the methodology employed. Its aim is to highlight key works that have evaluated the usability of e-book platforms in the desktop computer environment, particularly those that sought to compare competing platforms, and to demonstrate how the current study fits in with and builds further upon these past works.

Many studies have compared user preferences for e-books versus print books, user acceptance of e-books, and comparative user behavior in the two reading mediums. This literature review will not attempt to detail this sizable body of work, as it is outside the scope of the present study, which aims to assess actual student interaction with different platforms in situations where a user must use an e-book, regardless of preferences. However, one example from that body of literature worth mentioning briefly is Berg, Hoffman, and Dawson (2010), where task-based usability testing, very similar to that of the current study, was employed to compare specific e-book titles on one platform with the same titles in print.1 This study and its literature review would provide a good starting point for librarians more specifically interested in usability-based comparisons of e-book versus print formats, as opposed to comparing different e-book platforms.

Similarly out of scope for this literature review are studies focused specifically on the use of dedicated e-reading devices, the usability of e-books on mobile devices, and the use of e-textbooks (required course texts in electronic format), as they emphasize specific aspects of the e-book experience that are not pertinent to the current study.

Survey, Review, and Focus Group Methodologies

The host of studies discussing users’ thoughts and attitudes towards e-books may be separated into a few categories, depending on how they approach the subject. First, there are those studies seeking to look at the usability of e-books through a variety of lenses that are pertinent to the course of this current study. One such study, by Hobbs and Klare (2016), sought to examine the general efficacy of student interactions with e-books through a combination of interviews and surveys.2 Their findings showed that while the number of students using e-books increased, their overall proficiency with them remained flat. Reasons cited by the participants included difficulty in using the e-book interface, as well as difficulty applying study habits acquired with print materials, such as the marking of pages with tabs. The present study examines both the ease of use of the interface and the availability of tools, such as note-taking. A usability study by Abdullah and Gibb discussed the various reasons that drew users toward or away from e-books.3 One negative aspect of e-books they found was the difficulty in learning new technology; the present study may help to inform further understanding of this difficulty by determining which platforms present greater or lesser barriers to intuitive use.

Comparative reviews of platforms, focusing on e-books in one specific subject area, comprise another category of studies pertinent to this paper. Shereff (2010) compared the various tools and features of NetLibrary and Thieme e-books with the aim of examining usability, search interfaces, and content in biomedical information.4 Comparison reviews such as these provide a stepping-stone for the current paper’s larger comparative usability scope. A similar review by Heyd (2010) of medical library aggregators compared NetLibrary, R2 Digital Library, and Stat!Ref, but again, did not extend in scope beyond this focused content topic.5

A study by Shrimplin et al. (2011) bears consideration; the authors used a Q methodology to divide readers into four categories,6 from which they determined that individual attitudes towards e-books range between utilitarian and emotional.7 The two emotional categories were Book Lovers and Technophiles, while the utilitarian categories were Printers and Pragmatists. Printers were those users who would increase their use of e-books if the usability of the interface were to be improved, highlighting the importance of usability in repeat use of e-books.

The professional literature is replete with librarians’ reviews of individual e-book platforms, or detailed comparisons of multiple platforms. These provide valuable assessments from the expert’s perspective concerning what functions and features a platform includes or omits, how well essential tools work in a given environment, and treatment of aspects such as ADA accessibility. One key example is a work by Tovstiadi and Wiersma (2016), who conducted a rubric-based evaluation of 20 publisher and aggregator platforms to gauge and compare usability.8 Their rubric used the CRL (Center for Research Libraries) Academic Database Assessment tool9 as a foundation and incorporated additional evaluation from the e-book Accessibility Project. Using the rubric, the researchers assessed each platform with regard to 34 elements important to usability and user experience, such as pagination, table of contents, native citation tool, search functionality, zoom, annotation, and more. A follow-up work by the same authors, published in 2017, elaborates on the use of rubrics to compare metadata and search results for the same e-book titles on different aggregator and publisher platforms.10 These two papers provide an excellent assessment of comparative platform usability, but these studies (and the entire genre of librarians’ expert reviews) stop short of studying actual user interaction with each platform.11

A different approach for studying usability is the focus group. Caroline Gale (2016) used focus groups to compare several platforms for ease of use, available features, and user preferences.12 Across two iterations of the focus groups, students either used e-books hands-on during the session, or examined specific titles in advance of the session, then provided feedback concerning problems, advantages and disadvantages, and likes and dislikes. The focus groups found that students:

  • liked clear, uncluttered reading interfaces and quick loading times;
  • disliked the difficulty of annotating;
  • generally lacked interest in “added features” such as note-taking;
  • preferred to download the whole e-book versus chapters; and
  • desired content that could be downloaded in PDF and retained.

Although the study lists some of the main aggregator platforms that the library uses, “including VLe-books, dawsonera, MyiLibrary, ebrary, EBL and EBSCO,” the author is not explicit about whether all platforms were examined; furthermore, the specific points of praise and criticism shared from the student focus groups are not tied to specific platforms in the article. Therefore, the article describes general strengths, weaknesses, and preferences without evaluating individual platforms. Additionally, the study’s scope did not include observation of how the students actually interacted with each platform.

Most recently, a study by Tracy (2018) sought to look deeper at user choices and preferences for print versus electronic by examining the variations in e-book platforms that may affect user choices.13 In the study, 62 participants completed online diary forms over an eight-week period, documenting instances of e-book use and deliberate e-book avoidance in academic use contexts. The forms collected details such as the e-book used, the tasks completed, and which e-book features users found “easy or challenging to use,” or for instances of avoidance, the reasons for avoidance and alternate formats or content used.14 Participants were also interviewed during the study to discuss the usage challenges described in their diary forms. Examples of aspects where the study identified room for platform improvement included platform “clutter,” navigation, page numbering, search function, and downloadability and portability. The current study provides the opportunity to build upon and validate or refute these findings through user testing.

Task-Based Usability Testing Methodology

Finally, the current study’s use of task-based usability testing builds upon an existing history of studies involving similar methodology. Hernon et al. (2007) conducted task-based usability testing where students were given a plausible research assignment from one of three disciplines and were asked to demonstrate their search strategies in approaching the assignment.15 The researchers observed what types of e-books students in each discipline used and how they used them; however, students were not limited to searching e-books only, and the researchers did not seek to compare how successful students were in interacting with different platforms.

The 2008 study by Abdullah and Gibb has rightly been treated as an important work on student experiences with e-books, and it does employ task-based testing methodology.16 Student participants were asked to perform a series of search and browse tasks in a single platform, NetLibrary; the researchers compared each student’s performance against that student’s self-reported past experience with e-books and also conducted a follow-up web survey of student preferences for e-books versus print books. Although quite informative regarding user preferences, this study tested the usability of only a single platform and focused mostly on student reactions to that platform; the current study seeks to expand on the knowledge base of past research by widening the focus and comparing student, as well as faculty and staff, success in task completion on numerous platforms.

O’Neill’s 2009 thesis employed a research goal and methodology similar to the current study.17 Ten students (five undergraduate and five graduate) were recruited to test three platforms—ebrary, MyiLibrary, and Ebook Library (EBL)—by attempting four assigned tasks. Users were divided into three groups, and each group used the three platforms in a different order, to prevent the data for any given platform from being skewed by greater or lesser experience with other platforms. After completing the tasks, participants were asked a series of follow-up questions, such as which platform they believed was easiest to use. The data was coded for key concepts and patterns of user behavior, with an emphasis on user experience versus simple quantitative measures, such as length of time or number of steps or errors. The current study builds upon O’Neill’s work by expanding the comparison to a larger number of platforms, introducing rubric-based evaluation of participant task completion, enlarging the user sample to include faculty and staff, and combining qualitative data on user behavior and experience with quantitative measures such as time required to complete a task. A new study in this vein is reasonable due to significant platform upgrades in the years since O’Neill’s research, including the merger of ebrary and EBL into ProQuest Ebook Central, the greater availability of continuous scrolling—a feature identified as desirable by O’Neill’s test participants, but absent at that time in both ebrary and MyiLibrary—and other changes.

EBSCO conducted a study in 2011 to inform the process of transforming NetLibrary into EBSCO eBooks; they combined usability testing with NetLibrary log analysis, customer feedback, and formal surveys.18 The study’s findings highlighted four key areas of functionality that required the developers’ focus to improve user experience, namely, Discoverability, Online Viewing, Printing/Emailing, and Downloading. Unfortunately, little detail is provided about the nature or scope of the usability test performed in that study, which limits its reproducibility and presents obstacles to expanding on its place within the literature or comparing it to the current study in a more detailed fashion.

More recently, Zhang, Niu, and Promann (2017) recruited students and faculty with varying levels of e-book experience (based on a screening survey) to participate in task-based usability testing of library e-books.19 The test involved the ebrary, Ebook Library (EBL), EBSCO eBooks, Safari Books Online, and ACS Humanities e-book platforms. Searches for e-books on specific subjects were performed from the library homepage rather than in any specific e-book platform. Users were then asked to locate specific e-book titles on different platforms, find a specified piece of information inside the e-book (such as the definition of a term), and then, “if possible, conduct the following four actions: copy the answer; highlight the answer; add a note next to the answer; and download the answer page(s).”20 Data analyzed also included time required to complete each task, number of errors, quantity of requests for help, and a tally of positive or negative comments during each task. The study’s findings included the fact that beginners tended to search inside books more often, whereas intermediate or expert users preferred using the index, table of contents, or lists of figures and tables in comparison to using “Search Within” tools. Also noteworthy was participants’ difficulty in understanding features, such as highlighting and note-taking, that deviated from more commonly understood interactions—for example, the use of less recognizable icons without explanatory tooltips, or the need to click an intermediary button before a familiar keyboard command like Ctrl + C will function. Most of the findings discussed center around the impact that a user’s experience level with e-books has on that user’s behavior when attempting tasks in an e-book platform.

A presentation by Tovstiadi, Wiersma, and Tingle at the Electronic Resources and Libraries 2017 conference described a study that was coincidentally conducted around the same time as the present investigation and employed highly similar methodology.21 Seventeen users participated in task-based testing on three platforms each, and then were asked to rank their platform preferences after tasks were attempted; platform order was randomized during testing, just as in the current study. A total of six platforms were tested: Brill, Cambridge University Press, Ebook Central, EBSCO eBooks, ScienceDirect, and Wiley. Tasks and questions presented to users included identification of bibliographic information, opening a book, navigation within a book, searching, annotation, citation, printing a page, and downloading a chapter. Some key findings included: most students (defined as more than 50%) “found citation tools easily”; most students “used a page number box to ‘jump’ to a specific page when available”; most students “blamed themselves when the platform didn’t perform as expected”; most students “tried Ctrl + F first” for finding information within a book; students showed a clear preference for aggregators over publisher platforms; and content providers should “use universally recognized icons/terminology.”22

Tovstiadi, Wiersma, and Tingle selected e-book titles available on multiple platforms (two aggregators and the publisher), which was advantageous in decreasing variables; the present study instead selected a unique title on each platform, which had different advantages. This permitted the consideration of a broader selection of platforms and content areas while also providing a better opportunity for “fresh eyes”—in other words, with a greater diversity of subjects and platforms, each participant was more likely to encounter something unfamiliar, for which they could not simply rely on past experience. The current study set its sights on a different selection of platforms than those tested by Tovstiadi, Wiersma, and Tingle; additionally, it sought to measure task completion rather than just observing methods attempted, thereby offering additional insight on the topic. Furthermore, with such similar methodologies involved, direct comparisons between the findings are more relevant; therefore, the current study may also help to validate or question previous findings.

Methodology

The researchers developed a mixed-methods approach to observe user interaction with multiple e-book platforms, and to evaluate how easily, intuitively, or quickly users were able to complete tasks. The Institutional Review Board (IRB) at Sam Houston State University (SHSU) approved the study before recruitment and testing began. For context, SHSU is Carnegie-classified as “Doctoral Universities: Moderate Research Activity” (R3); total enrollment for the Fall 2016 semester, shortly before testing began, was 20,632 students.

The central methodology employed was task-based testing, in which users were asked to complete a list of prescribed tasks (available at https://hdl.handle.net/20.500.11875/2615). Researchers observed the test neutrally, but did not intervene or provide direction; the exception was Task 1, which required that the e-book be opened in order for subsequent tasks to be possible. Tasks were drawn from real needs that the researchers had observed through course assignments or during reference consultations, as well as expected user interactions with e-books.

In selecting e-book platforms to test, the researchers took a variety of factors into consideration. These included (1) the quantity of books accessible via the library on that platform, whether by perpetual access or subscription; (2) the diversity of subject areas represented by content available on the platform; (3) the uniqueness of the platform; and (4) the extent to which the platform had been absent from previous e-book usability studies. The researchers ultimately selected nine e-book platforms for testing: EBSCO eBooks, ProQuest Ebook Central, JSTOR, IGI Global, Springer Link, Safari, CRCnetBASE, Gale Virtual Reference Library (GVRL), and Oxford Reference.

The e-book platforms from EBSCO and ProQuest were included as the academic library arena’s dominant discipline- and publisher-neutral e-book aggregators. IGI Global, Springer Link, Safari, and CRCnetBASE were selected as more unique, less evaluated platforms where the library had access to a relatively sizable collection of titles. Although initially excluded due to the library’s smaller collection of content, JSTOR was the last addition to the list of selected platforms. It was added in the hopes that usability feedback might inform the library’s decisions about increasing acquisition of content on that platform, since its desirable DRM-free content offerings had begun to attract the attention of librarians conducting collection development. Meanwhile, GVRL and Oxford Reference were selected to allow evaluation of platforms designed specifically for e-reference content, since these tend to differ in key ways from other platforms. One e-book title was selected on each platform, along with specific keywords in the book’s contents to be leveraged in searching and navigation tasks. Finally, the researchers conducted careful investigations to determine what capabilities and features each platform provided. At this stage, the researchers were unable to ascertain that Springer offered any form of citation tool; however, while analyzing video recordings after testing, the researchers did find a single instance demonstrating that the feature was actually available.

To recruit participants, study invitations were mass-emailed to all enrolled undergraduate and graduate students and all employed faculty and staff; a $10 Amazon gift card for each participant was offered as an incentive to volunteer. The researchers were concerned about being able to enlist the desired number of participants, given that participation required an average of 15–30 minutes on site in the library, and therefore cast a wide net by inviting all students rather than a selected sample. The researchers enlisted 30 testing participants, comprising ten faculty/staff, four graduate students (two doctoral, two master’s), and four students at each undergraduate level (senior, junior, sophomore, freshman). Testing appointments were made with respondents on a “first come, first served” basis until the quota was filled for a given user group; tests were conducted between January and March of 2017.

Each enrolled participant was randomly assigned a participant number, with two participants testing each defined slate of platforms (see table 1). Informed by the usability concept of randomized testing, the order in which platforms were tested was rotated to avoid any skewing of the data based on the presence or absence of experience with other platforms. For instance, a platform might rate lower if always tested first, or might rate higher if always tested last, so rearranging the order in which platforms were tested increased the likelihood of a fair evaluation.
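For illustration, the following minimal Python sketch shows one way such a rotation of testing order could be generated across a group of participants; the slate of platforms and the assignment logic here are hypothetical stand-ins for the actual slates defined in table 1, not the researchers’ own procedure.

```python
# Hypothetical slate of platforms assigned to one group of participants;
# the study's actual slates and assignments are defined in table 1.
slate = ["EBSCO eBooks", "Ebook Central", "GVRL", "Safari"]

def rotated_orders(platforms, n_participants):
    """Yield one testing order per participant by cycling the starting
    position, so that no platform is always tested first or always last."""
    for i in range(n_participants):
        shift = i % len(platforms)
        yield platforms[shift:] + platforms[:shift]

for participant, order in enumerate(rotated_orders(slate, 4), start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```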

Initially the researchers planned to test eight platforms, with every participant testing EBSCO eBooks, Ebook Central, and two additional platforms—four interfaces in total. Because JSTOR was a late addition after the number of participants was set, several participants instead tested only three platforms, reducing the number of tests for EBSCO eBooks and Ebook Central (see table 1).

Prior to each test, the study methods and goals were explained to the participant, who then signed an informed consent document agreeing to the methodology and recording. Researchers explained that the test was intended to assess the platform, not the user, and therefore self-consciousness about right versus wrong answers was unwarranted. Participants were asked to use the “think aloud” protocol during testing to make their thought processes visible to the researchers, and they were instructed to inform the test administrator when they reached a point where they would, under normal circumstances, give up or quit their attempt. The procedures followed for the user tests are available at https://hdl.handle.net/20.500.11875/2615.

Screen-recording software was used to capture both audio and user activity in the interface, in order to allow the researchers to review tests in greater detail during data analysis. While the participant attempted each task and expressed their thoughts and reactions aloud, the test administrator watched and made notes of what methods the user attempted, along with any noteworthy user quotes. If a user quit attempting a specific task, the administrator noted this and moved on to the next task; the administrator instructed the user in completing the abandoned task only if necessary to progress past Task 1. After the participant had attempted all assigned tasks, the administrator asked follow-up questions regarding which platform they liked most and least, their past experience with the platforms tested, and their expectations regarding likely future use.

As part of assessing each platform’s usability, the researchers sought to evaluate how easily or intuitively a given task could be completed in the interface. To this end, a rubric was developed that would score a user’s attempt on a given task as an Efficient Success, Alternate Success, or Failure. Efficient Success was defined as using the method(s) that seemed to be most intended by the platform designers, as opposed to other alternate methods of successful completion (which were classified as Alternate Success). For example, Task 2 asked participants to navigate to page 50 in the e-book; thus, on most platforms, typing 50 into a Go-to-Page function would be more efficient than clicking the Next Page navigation button 50 times. The researchers relied heavily on the Help files within each platform to inform their classification of Efficient Success, with the rationale that such product documentation would reflect the method by which developers intended users to accomplish a specific task. In cases where the Help documentation did not address a topic, distinctions between Efficient and Alternate Success were determined by discussion and consensus among the four co-authors.

Task attempts were classified as Failures when participants voluntarily quit their attempts, erroneously believed a task had been completed when it had not been, or otherwise failed to achieve the goal of the assigned task. If a task required a feature not provided by a given platform—such as note-taking—the rubric provided an option to score the task with a Null value; this way the platform was not penalized for a user’s failure to interact with a non-existent tool. In instances when a user inadvertently skipped over a task, or there was a technological glitch, the researchers scored these as a Null value as well.
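To make the scoring scheme concrete, the sketch below models one way a single task attempt might be classified under such a rubric. The category names follow the paper, but the data structure and helper function are illustrative assumptions rather than the researchers’ actual instrument.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Score(Enum):
    EFFICIENT_SUCCESS = "Efficient Success"  # used the method the platform designers intended
    ALTERNATE_SUCCESS = "Alternate Success"  # completed the task by another workable route
    FAILURE = "Failure"                      # quit, wrong result, or believed done when not
    NULL = "Null"                            # feature absent, task skipped, or technical glitch

@dataclass
class TaskAttempt:
    platform: str
    task: str
    feature_available: bool   # False -> Null, so the platform is not penalized
    completed: bool
    used_intended_method: bool
    seconds: Optional[float]  # completion time, if the attempt was timed

def score_attempt(attempt: TaskAttempt) -> Score:
    """Classify a single attempt according to the rubric categories."""
    if not attempt.feature_available:
        return Score.NULL
    if not attempt.completed:
        return Score.FAILURE
    if attempt.used_intended_method:
        return Score.EFFICIENT_SUCCESS
    return Score.ALTERNATE_SUCCESS

# Example: a participant reaches page 50 by clicking Next Page repeatedly
# instead of using a Go-to-Page box -> Alternate Success.
print(score_attempt(TaskAttempt("ExamplePlatform", "Task 2", True, True, False, 42.0)))
```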

Initially, researchers attempted to design a single rubric to describe tasks across all platforms; in the end, however, the rubric template was customized for each platform, since the most efficient, or even the possible, methods of task completion varied so significantly between platforms. The rubrics underwent several iterative norming sessions to improve inter-rater reliability. All nine platform-specific rubrics developed by the researchers for this study are available at https://hdl.handle.net/20.500.11875/2615.

After testing was complete, the researchers reviewed the recordings to (a) score task completion according to the rubric, (b) record additional quantitative data such as the length of time to complete each task, and (c) note qualitative data such as the participants’ verbal comments and researchers’ observations. After data was recorded for all testing sessions, averages were calculated per platform and per task to further inform data analysis.

To conduct a more comprehensive comparison of completion time results, the researchers elected to collate and assess times in both their original state and an adjusted state in which all null-value sessions were assigned times derived from the overall mean time across all platforms for the corresponding task. By introducing this Null Mean Substitution method into the assessment and reporting of completion time results, the researchers aimed to avoid the substantial skewing of data that would have resulted from comparing overall average time calculations across platforms possessing an unequal number of testable features. Although introducing mean time values into null task sessions does create some potential for skewing and bias, the researchers believed this approach was appropriate for the purpose of this research and the characteristics of the analysis. Additionally, because completed task sessions greatly outnumbered null sessions, the researchers judged Null Mean Substitution an acceptable way to fill in missing data points while largely maintaining the integrity of the original data. To accomplish this, the time values for all completed sessions of each task were averaged, and the resulting value was then allocated to all null sessions for the corresponding task. For example, if Task 1 incurred 10 null sessions and 20 completed sessions overall, the completion times for the 20 completed sessions would be averaged, and this average would then be allocated to the 10 null sessions. When reporting overall platform completion times and their corresponding rankings, the researchers incorporated both adjusted and non-adjusted times; when reporting individual task completion times and rankings, they reported only adjusted times. The one exception when reporting individual task completion times and rankings was to exclude any platform that did not offer the feature being tested for that task.
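As an illustration of this Null Mean Substitution procedure, the following minimal Python sketch imputes the per-task mean of completed sessions into null sessions and then recomputes each platform’s adjusted overall average and its percentage change; the platform names and times shown are hypothetical sample values, not the study’s data.

```python
from statistics import mean

# Hypothetical completion times in minutes, stored as (platform, task, time);
# None marks a null session (feature absent, skipped task, or technical glitch).
sessions = [
    ("Platform A", "Task 6", None),   # e.g., citation tool not offered
    ("Platform A", "Task 2", 0.40),
    ("Platform B", "Task 2", 0.55),
    ("Platform B", "Task 6", 0.80),
    ("Platform C", "Task 6", 1.10),
]

# 1. Average the completed sessions of each task across all platforms.
task_means = {}
for task in {t for _, t, _ in sessions}:
    completed = [time for _, t, time in sessions if t == task and time is not None]
    task_means[task] = mean(completed)

# 2. Assign the task mean to every null session (Null Mean Substitution).
adjusted = [
    (platform, task, time if time is not None else task_means[task])
    for platform, task, time in sessions
]

# 3. Recompute each platform's overall average from the adjusted data and
#    report the percentage change versus its unadjusted average.
for platform in sorted({p for p, _, _ in sessions}):
    raw = [time for p, _, time in sessions if p == platform and time is not None]
    adj = [time for p, _, time in adjusted if p == platform]
    raw_avg, adj_avg = mean(raw), mean(adj)
    pct_change = (adj_avg - raw_avg) / raw_avg * 100
    print(f"{platform}: {raw_avg:.2f} -> {adj_avg:.2f} minutes ({pct_change:+.2f}%)")
```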

Results

For the remainder of the paper, the following short-hand acronyms may be used: Efficient Success Rate (ESR), Alternate Success Rate (ASR), Overall Success Rate (OSR), and Failure Rate (FR). For additional tables of data regarding average task times and success or failure rates, refer to the Appendixes of Supplemental Tables at https://hdl.handle.net/20.500.11875/2615.

EBSCO eBooks

EBSCO eBooks achieved a 100% Overall Success Rate on Tasks 1–5 and 7; for complete data on EBSCO eBooks’ success and failure rates, see table 2.

A review of average task durations placed EBSCO eBooks in first place on Task 5 and in last place (seventh of seven) on Task 3. Task 9 (turnaway message) was not timed, but saw an ESR of 95%.

Despite being unable to generate a top completion time in any task, EBSCO eBooks produced 100% ESR for three tasks, and only produced an ESR below 90% on one task. Additionally, EBSCO eBooks produced a 90% or higher OSR for all tasks. EBSCO eBooks was one of two platforms that contained all testable features. When adjusted to account for the platform’s seven null values, the overall average time increased by 5.42% from 2.98 to 3.14 minutes, ranking third.

EBook Central (formerly EBL)

EBook Central produced the highest (100%) OSR on Tasks 1, 3, 4, 5, 7, and 8. EBook Central saw its worst usability in Task 9 (turnaway message), with 55% ESR, and a high 44% FR. Nonetheless, EBook Central generated 100% ESR in more than half of all tasks, and produced top-ranked completion times in three tasks. For complete data on EBook Central’s success and failure rates, see table 3.

Among average task completion times, EBook Central ranked first of nine on Task 7. EBook Central was one of two platforms that contained all testable features. After adjusting to account for 25 null values, the average time spent on EBook Central’s platform increased by 20.90% from 2.46 to 2.98 minutes, ranking second of nine platforms.

Gale Virtual Reference Library (GVRL)

GVRL had its highest Overall Success Rate of 100% on Task 5, based on 50% ESR and 50% ASR. GVRL saw the most frequent Failures on Task 7, in which the platform produced a 30% FR versus a 70% ESR. For complete data on GVRL’s success and failure rates, see table 4.

GVRL achieved its best average time on Task 6, for which it ranked in second place with just 12.16 seconds compared to 21.17 seconds for all others. GVRL’s worst average time, earning eighth place on Task 4, was 23.73 seconds versus 17.17 seconds for all platforms.

GVRL did not have a next-page feature at the time of testing and incurred 15 null values in total. When adjusted to account for null values, GVRL’s overall average time increased by 5.89% from 3.76 to 3.98 minutes, ranking fourth of all nine platforms.

Oxford Reference

Oxford Reference achieved 100% OSR on Task 2 and Task 5. Oxford Reference’s worst performance occurred in Task 6, with a majority (55.6%) Failure Rate. For complete data on Oxford Reference’s success and failure rates, see table 5.

Oxford Reference did not manage any first place rankings for average task times, but it did rank in the top half of platforms for Task 8 and Task 5. The platform earned last place on Task 4 and Task 7.

Oxford Reference did not have a next-page feature at the time of testing and incurred 15 null values in total. When adjusted to account for null values, Oxford Reference’s overall average time increased by 4.60% from 5.26 to 5.50 minutes, ranking eighth of all nine platforms.

Safari

Safari achieved 100% OSR, as well as 100% ESR, on all tasks except Task 5. For complete data on Safari’s success and failure rates, see table 6.

When comparing average completion times, Safari ranked in first place among all platforms on Task 2 and Task 8. Safari struggled most on Task 5, with the second-slowest average completion time.

Safari did not have a citation feature or a note feature at the time of testing and incurred 22 null values in total. When adjusted to account for null values, the overall average time increased by 46.75% from 1.83 to 2.68 minutes. This platform experienced the largest percentage change when accounting for null values, but ultimately maintained a top-place ranking overall.

IGI Global

IGI Global achieved 100% OSR, as well as 100% ESR, on Tasks 1, 3, 4, and 6. IGI Global’s highest FR occurred in Task 5, with nearly a third (30%) of its users failing to complete the task. For complete data on IGI Global’s success and failure rates, see table 7.

IGI Global achieved first place completion times on Task 4 and Task 6, but produced last-place completion times on Task 2 and Task 5. Due to IGI Global not offering a note-taking feature at the time of testing, Task 7 was null.

Despite IGI Global tying for the most first-place completion times (2), it also produced the slowest completion times for two tasks. The platform did not have a citation feature at the time of testing, and incurred 16 null values in total. When adjusted to account for null values, IGI Global’s overall average time increased by 13.10% from 4.00 to 4.52 minutes, earning the platform a seventh place ranking among all nine platforms.

CRCnetBASE

CRCnetBASE produced an OSR of 100% on Tasks 1, 3, and 4; for complete data on CRCnetBASE’s success and failure rates, see table 8.

On completion times, CRCnetBASE ranked among the lower performers on Task 1 and Task 2, while it scored as the worst performer on Task 8. The platform placed among the top three platforms only on Task 5. Task 7 was null, as this feature was not offered on the platform at the time of testing.

Only on Task 5 did CRCnetBASE rank in the top half of platforms for average completion times. CRCnetBASE came in dead last at ninth place on Task 8 with 57.57 seconds—the average across all platforms was 27.2 seconds, and the platform that took first place on this task averaged just 14.26 seconds.

Although CRCnetBASE generated an OSR of 90% or better in all tasks, the platform fell into the bottom third in completion times for all but one task. After adjusting to account for the platform’s 13 null values, the overall average time increased from 4.01 to 4.53 minutes, putting this platform in the lower middle (sixth) among all nine platforms.

Springer Link

Springer Link produced its highest OSR on Tasks 1 and 4, and its worst usability in Task 2, with a substantial 70% FR. For complete data on Springer Link’s success and failure rates, see table 9.

With respect to the average task completion time, Springer Link was highly competitive on Task 1 and Task 3, but ranked in the bottom half of platforms on Task 4, Task 5, and Task 2. Because Springer Link did not offer note or citation tools at the time of testing, Tasks 6 and 7 were null.

Despite landing in the top quartile of completion times for two tasks, Springer Link fell into the low middle rankings for all others. For Task 2, Springer Link produced the second-worst FR percentage for any task among all platforms. The platform did not have a full book citation feature or a note feature at the time of testing and incurred 21 null values in total. When adjusted to account for null values, the overall average time increased by 27.06% from 3.21 to 4.08 minutes, thereby moving the platform down from fourth to fifth place among all nine platforms.

JSTOR

JSTOR achieved 100% OSR on Tasks 2, 3 and 4. JSTOR’s highest FR occurred in Task 1 (80%). For complete data on JSTOR’s success and failure rates, see table 10.

When comparing average task completion times, JSTOR produced a first-place ranking among all platforms on Task 3 and a second-place ranking for Task 4, although both tasks produced relatively short completion times overall. Conversely, JSTOR also produced the second-slowest completion time on Task 8 and the slowest for Task 1, which was almost three times the overall average. Due to JSTOR not offering a note-taking feature at the time of testing, Task 7 was null.

In addition to generating the most tasks with Failure Rates of 30% or higher (three tasks), JSTOR’s Task 1 FR of 80% came in as the worst FR for any task among all platforms. Furthermore, outside of placing first (Task 3) and second (Task 4), the platform did not manage to place higher than sixth for average completion time rankings on any other task. The platform did not have a note feature at the time of testing and incurred 11 null values in total. When adjusted to account for null values, the overall average time increased by 9.07% from 5.89 to 6.43 minutes, ranking last.

Follow-Up Questions

Following each test, the participant responded to follow-up questions regarding which platform they liked most and least, their past experience with the platforms tested, and their expectations regarding likely future use. Table 11 summarizes the key quantitative results; the Discussion section more fully addresses qualitative data from participant comments.

Discussion

Table 12 lists the shorthand phrases that will be used in this section to refer to testing tasks.

EBSCO eBooks

Despite being unable to generate a top completion time in any task, EBSCO eBooks produced 100% ESR for three tasks (Next Page, Next Chapter/Entry, and Search Term), and only on one task did it produce an ESR below 90% (Task 2, Go To Page 50). When asked to navigate to page 50 of a selected e-book, more than 40% of participants elected to scroll through the e-book’s pages instead of taking advantage of the platform’s direct page navigation feature. Although this result did generate a higher ASR than any other platform on this task, the difference between the average completion times for Efficient Success and Alternate Success methods was only four seconds. This demonstrated to the researchers that while EBSCO eBooks’ direct page navigation tool may not have been easily noticed, the platform was able to compensate for this by providing a layout that participants were able to navigate relatively easily. Additionally, on Task 7 (Notes Tool), EBSCO eBooks generated a 100% Overall Success Rate, and of the 20 participants tested for this task, only one was unable to produce an Efficient Success. The researchers found EBSCO eBooks’ resounding success in this area particularly surprising when compared to the study by Tovstiadi, Wiersma, and Tingle, in which less than 25% of testers were able to locate the notes tool within the platform.23

This ease of navigation did not always translate into top-ranking completion times, however, as EBSCO eBooks produced times for Next Page and Find Another E-Book tasks which ranked in the bottom-third among all platforms. Additionally, despite high success rates overall, some participants occasionally had difficulty completing certain other tasks on the platform. For example, when attempting to perform a search within the book (Task 5), one participant commented, “I don’t immediately see a search button,” and another commented, “That was a little hard to find.” EBSCO eBooks excelled at conveying when an e-book was in use (95% ESR; see figure 1). The researchers believe this capability is increasingly important as platforms and acquisitions methods proliferate, forcing patrons to interpret limited concurrent access to some electronic resources.

In post-testing follow-up questions, all participants were asked to select the platform they preferred most and least from the sample of platforms included in their particular testing session. Since not all platforms were tested an equal number of times (e.g., aggregator platforms were tested 20 times each, with all other platforms being tested 10 times each), the discussion of post-testing results for all platforms henceforth uses percentages calculated from the number of participants who actually tested the platform. For example, if a participant did not test a particular platform and consequently did not select that platform as their favorite platform, it would not count against the platform as a non-selection, nor would it be reflected in the reported percentage of selections that platform earned within the category of favorite platform. Participants selected EBSCO eBooks as their favorite platform seven times (35%, third place). Comments in favor of the platform included, “Simple! Had everything I was looking for right there,” and, “It had easy and very specific navigation. [Using] words, instead of symbols, made it superior to [EBook Central].” Participants chose the platform as their least preferred e-book platform only once (5%, tied for first place); however, when this participant was asked to elaborate on what they did not like about the platform, one of the details mentioned was the lack of a citation tool. This led the researchers to believe that this tester may have confused EBSCO eBooks with another platform that did not offer this particular feature.

EBook Central

While Task 9 was only tested on the two aggregator platforms and was not timed, EBook Central’s Failure Rate of 44.4% for this task was almost nine times higher than that of the competing platform (5%). Nonetheless, EBook Central generated 100% ESR in more than half of all tasks, and produced a top-ranked completion time on one task (Task 7; Note Tool). EBook Central tied for the most ESR scores, with 100% rates on Task 1 (Find E-Book), Task 3 (Next Page), Task 5 (Search Term), Task 7, and Task 8 (Find Another E-Book). Additionally, EBook Central was one of only two platforms to generate a 100% OSR on Task 7.

On average, EBook Central participants successfully searched terms within the specified e-book 31.3% faster than the average produced across all other platforms. However, on at least seven occasions, participants expressed difficulty navigating the results of those searches; the results, which were displayed via bars demonstrating the count of the searched term in each chapter, obscured the actual matched text until clicked upon (see figure 1). One tester asked, “Is it supposed to highlight it when I search?,” and another stated, “This doesn’t make any sense. What I’m used to is it popping up the specific [results], not all this extra information.” The majority of the patrons who experienced uncertainty with the search term results did eventually decipher the structure, but several ultimately moved on without indicating understanding of the arrangement.

Additionally, EBook Central participants were able to search and locate specified e-books 52% faster than the average time across all other platforms, and were able to locate the note tool 34.43% faster than the second highest ranked platform (EBSCO) on average. Participants praised the visibility of the note tool with comments such as, “This one gives you a highlighter, a notes box, and a bookmark. I like that,” and “It’s pretty easy to tell because it’s a little empty page, and it’s right next to the highlighter [icon].” This intuitiveness may in part have contributed to the platform’s top completion time.

EBook Central generated very few failed tasks overall, but the citation tool task (Task 6) produced the platform’s highest number of failures (3 total) and lowest OSR (83.3%), contributing to the average task time ranking of fourth out of seven. Most of the issues EBook Central experienced with this task appeared to relate to the use of a quotation mark icon, which participants did not notice or locate intuitively. Icons for citation tools on other platforms were accompanied by explanatory text, such as “Get Citation” or similar, which did not require hovering over the icon; this may have proved advantageous, since participants rarely hovered over the Ebook Central icon long enough for pop-up text to appear.

EBook Central tied for the fewest number of least-preferred platform votes among all participants (1 vote, 5%), and took top ranking with 11 participants (55%) indicating it as their favorite platform. This mirrored its selection as the most preferred platform among those tested in the study by Tovstiadi, Wiersma, and Tingle.24 Participant comments pertaining to the platform were predominantly positive; however, the platform did garner several comments implying there may still exist room to improve certain areas. When expounding upon dislikes, one tester noted, “It has to do with it being too busy. It can almost [become] overwhelming with the amount of data that comes up on the screen.” Other participants conveyed contrary sentiments, however, with one tester stating, “Everything was just so clear, and everything was where I thought it should be. All of the notes and citations were above [the text], and the search bar was really clear on the side.” Among those testers who favored EBook Central, nearly all expressed being pleased with the platform’s ease of navigation and intuitiveness.

Gale Virtual Reference Library (GVRL)

GVRL generated a completion time in the top quartile for only Task 6; most results were in the low middle of the average task time rankings. GVRL was one of only three platforms that failed to achieve a 100% Overall Success Rate for Task 1 (Find E-Book), tying for the second worst OSR with Oxford Reference. Despite GVRL generating only a 40% Efficient Success Rate to Oxford Reference’s 70% on Task 1, Gale users were able to complete this task in less than half the time (40.68 seconds versus 106.48 seconds). GVRL generated the second highest failure rate (20%) on Task 4, making it one of only two platforms to produce a less than perfect score on this task. This may have been due to a high number of participants misunderstanding what “Next Entry” meant, as indicated by the five Null values generated for this task.

GVRL was only able to achieve a 100% OSR on one of six tasks (Task 5—Search Term). When compared to the other platforms that generated a 100% OSR on this task, GVRL produced completion times that were 48.58% slower on average. One tester correctly located the search tool; however, even after locating it, she remained uncertain as to whether it was the correct location to generate a search. The platform’s worst result may have been produced in Task 7 (Note Tool), where it produced a 30% Failure Rate (highest among all platforms), and a completion time that was nearly 20% slower than the average completion time (35.97 seconds versus 29.98). Several participants commented that the tool was unintuitive to use. One participant managed to find the notes tool, but was unable to figure out how to use it. Another participant was able to locate the tool, but was not certain whether her notes would be saved upon leaving the e-book.

Despite GVRL struggling on most tasks, four participants (40%) selected it as their favorite platform, making it the second most popular; only one participant (10%, tied for second) selected it as their least favorite. The researchers’ notes regarding these selections included participant comments favoring the platform’s presentation of search results, ease of navigation, and visibility of its tools.

Oxford Reference

Although Oxford Reference ranked in the bottom third for completion times on many tasks—bringing in the lowest ranking on two of seven tasks—it achieved second place on Task 8, demonstrating successful usability in open searching that was perhaps not seen in known-item searching (Tasks 1 and 5) or navigating within an e-book (Tasks 1, 2, 3, and 4).

The researchers believe Oxford Reference’s higher success rate in Task 8 (Find Another E-Book) had much to do with its engaging landing page, which made identifying new e-books a more intuitive process for participants. Conversely, Oxford Reference’s poor performance on other tasks highlighted a disconnect between form and function. Several participants commented on Oxford Reference’s welcoming aesthetics at the start of testing sessions; however, many participants experienced difficulty and confusion when completing tasks such as navigating to the next chapter of an e-book (Task 4) or locating the citation tool (Task 6). The researchers observed that the platform failed to translate its approachable design into a user-friendly experience. This was most apparent from the average task completion times for Task 4. While all other platforms averaged 17.18 seconds to complete navigating to the Next Entry, Oxford Reference testers took over 44% longer (24.91 seconds).

Not only did Oxford Reference generate the slowest completion time for Task 4, but it also generated the lowest Overall Success Rate (71.4%), and highest Failure Rate (28.6%). Similar issues were observed on Task 6 (Citation Tool) and Task 7 (Note Tool), where Oxford Reference completion times were 71.6% and 57.8% slower than the average completion times. For both of these tasks, Oxford Reference produced the second highest Failure Rate, as well as the lowest OSR for Task 6 and the second lowest OSR for Task 7. Oxford Reference’s struggle with the citation tool contrasted with the success of the citation tools on many other platforms, not only in the present study but also in the study by Tovstiadi, Wiersma, and Tingle, which found that most students (more than 50%) “found citation tools easily.”25 Oxford Reference produced better OSR on other tasks, such as Task 5 (100%), and Task 1 (80%), but struggled with a poor average completion time in the initial task of finding the specified E-Book (106.48 seconds vs. 49.87 for all platforms, or 90% longer), which could have given participants a frustrating first impression.

Despite Oxford Reference’s overall below-average performance, the platform generated only one vote for least favorite platform (10%, tied for second place). However, it also earned only two votes for most favorite platform (20%, tied for fifth place). Of those participants who favored the platform, one indicated preferring the platform due to “knowing how to do everything on it,” while another tester expressed opposite feelings, stating, “I would probably kick [Oxford Reference] to the curb, which is interesting because most of Oxford’s stuff I usually like.”

Safari

Safari took the top completion time in two tasks (Go To Page 50 and Find Another E-Book), earned second place on Task 1 (Find E-Book), and third place for Task 4 (Next Chapter/Entry), but only generated an eighth place finish on Task 5 (Search Term). At the time of testing, only six of the eight tasks could be tested on the Safari platform; thus, the results included a fairly high number of null values (22 total) due to the lack of testable features.

Safari tied for the most 100% Efficient Success Rates (Tasks 1–4, and 8), and produced a 90% ESR rating for the remaining task (Task 5). The researchers found it impressive that Safari was the only platform on which just one task (Task 5; 10%) generated any failures. In contrast, IGI Global, EBSCO eBooks, and EBook Central all had three tasks generate at least one failure.

Safari demonstrated easy navigation by finishing in the top third for four of the five tasks measuring this aspect (Tasks 1, 2, 4, 8). While many testing participants seemed to easily navigate within the embedded e-book, both by sections as well as by the table of contents on the left menu, participants noted that the platform did not provide page numbers, providing comments such as, “I feel like even when you’re reading digitally, you should have page numbers.”

Safari was mostly strong in known-item searches (Tasks 1, 5, 8), with top-quartile rankings in Task 1 and Task 8, but produced its lowest ranking (eighth) on Task 5 (Search Term). Several users struggled to initially identify how to search within the e-book, as demonstrated by the significantly longer time (48.04 seconds, more than double the 22.07-second average of the top four platforms). Additional evidence supporting the difficulty with Task 5 includes testing participants’ comments, such as, “I don’t like having to expand the drop down to find the search within feature,” as well as anecdotal testing notes from researchers about participants attempting to find the search box and trying to use Ctrl+F due to the non-intuitive location of this tool. For the available two of five tasks measuring Tools/Features (Tasks 3 and 4), Safari produced 100% ESR ratings, but generated mixed results on completion time rankings with a sixth-place finish (of seven) on Task 3 and a third-place ranking (of nine) on Task 4.

Garnering the fastest (first) overall average completion time among all platforms did not necessarily translate into being a preferred platform, however, as Safari was selected the least preferred platform four times (40%, tied for third), and was chosen as the favorite platform only three times (30%, fourth). Despite high success rates overall, missing tools, particularly the citation tool, appeared to have influenced testers’ perceptions of the platform. Two such supporting statements included, “[I] don’t see that [citation tool]. Again would just use bibliographic information and create my own citation,” and “[I] looked at copyright where I thought it would be so going to say ‘no’ this book doesn’t offer citations.”

IGI Global

In addition to having the fastest average completion time for two tasks (Tasks 4 and 6), IGI Global also produced a 100% Efficient Success Rate for those same tasks (Next Chapter/Entry and Citation Tool).

The platform produced last place rankings (ninth) for two tasks: Go To Page 50 (Task 2) and Search Term (Task 5). Failure Rates of 20% and 30% for Tasks 2 and 5 provided additional evidence of this platform’s usability challenges. When asked to navigate to page 50 of a selected e-book, more than 40% of participants elected to scroll through the e-book’s pages (Alternate Success) instead of taking advantage of the platform’s direct page navigation (Efficient Success). Although this result did generate the second highest ASR for this task, the difference between the average completion times for Efficient Success and Alternate Success methods was only 10 seconds. While the direct page navigation was not intuitive to testing participants, IGI Global’s two reading options (PDF and HTML) allowed participants to navigate relatively easily despite the lack of page numbers in the HTML version.

For the remaining tasks that focused on locating an e-book on the platform (Tasks 1 and 8), IGI Global produced fifth and sixth place rankings respectively. The researchers noted that for Task 8’s failures, participants understood how to search correctly, but could not distinguish books within the results from chapters or e-journals.

IGI Global demonstrated mixed ease of navigation, finishing first on one of the five tasks measuring this aspect (Task 4) but placing ninth on Task 2 and sixth on Task 8. While the platform offered both HTML and PDF versions of the tested e-book, it did not provide page numbers in the HTML version, an oversight that had several participants swapping between the two formats trying to identify page numbers, as evidenced by comments like, “I don’t see page numbers so. . . . But I know I’m [in] chapter three.”

IGI Global struggled with known-item searches, with completion time rankings of fifth, ninth, and sixth place, respectively, for Tasks 1, 5, and 8. The platform’s performance on Task 5 (Search Term) appeared particularly problematic, as demonstrated not only by the lowest rank on this task but also by a 30% Failure Rate. Additionally, the significantly longer average completion time (65.61 seconds, more than three times the 22.07-second average of the top four platforms) appeared to validate testers’ struggles with identifying how to search within the e-book on this platform. Participants’ comments and researchers’ observations regarding participants’ struggles to locate the search tool, including unsuccessful use of Ctrl+F, further attest to the prevalence of this issue. When attempting to search within the book, participants’ comments included: “I don’t immediately see a search button,” and, “Where’s the search box? I’m not really sure where the search box is . . . [pause]. I wonder if Ctrl-F works? I’m going to try Ctrl-F. . . . And it didn’t work.” The success (or lack thereof) of participants’ use of Ctrl+F to search the e-book depended on whether the provided search term appeared in the chapter they had opened (PDF). Thus, an incorrect conclusion could easily be drawn if the participant was not using the intended search box tool (located on the e-book’s main detail page) to search the entire resource.

For the available three of five tasks evaluating tools or features (Tasks 3, 4, and 6), IGI Global performed excellently, as indicated by its top (first) average completion times for Tasks 4 and 6, combined with 100% ESR for all three of these tasks.

Despite the platform’s two top completion times and high success rates overall, participants had difficulty navigating due to the lack of page numbers and struggled to find the platform’s search feature; some testing participants also noted the lack of a note tool. These challenges were corroborated by the platform’s overall average completion time ranking of seventh place. In post-testing follow-up questions, participants chose IGI Global as their least preferred e-book platform four times (40%, tied for third), compared to their favorite platform two times (20%, tied for fifth place).

CRCnetBASE

CRCnetBASE did not manage to produce a top completion time on any task. However, the platform did produce a 100% Efficient Success Rate on two tasks (Tasks 3 and 4), as well as a 90% ESR on Tasks 5, 6, and 8. Those three tasks, in addition to Task 2, all experienced a 10% Failure Rate. Additionally, the platform produced a 90% OSR or higher on all seven tasks.

The CRCnetBASE platform performed poorly on the five tasks evaluating navigation, placing in the lower half (fifth or lower) on all tasks measuring this aspect. The platform produced fifth-place rankings on Tasks 2, 3, and 4 and garnered rankings of seventh and ninth (last) on Tasks 1 and 8, respectively. Participants made several comments about the layout of the e-book chapter and title tabs (see figure 3), such as:

  • “So this one defaults to chapters . . . you have to switch it over to get the title.”
  • “I don’t think that’s an e-book; I think it searched wrong.”
  • “I feel like that should be switched where you have book titles come up first rather than book chapters.”

This feedback, along with CRCnetBASE’s seventh- and ninth-place rankings for Task 1 and Task 8, respectively, demonstrates that the presentation of e-books on this platform was challenging for many users.

When attempting to navigate to page 50 (Task 2), testers did not always notice the page number ranges indicated by chapter within the table of contents, which led many testers to open various chapters and then return to the book’s main page to try again. Researchers’ observational notes indicate several instances where participants switched between chapters. Other supporting evidence comes from participant feedback, such as, “it would be nice to do some pages, if not all.” Book page numbers did not always match PDF page numbers when participants endeavored to use the “Go To Page” function (see figure 4), a mismatch corroborated by comments such as, “I’ll just type it in up here. . . . No, wait, you can’t do that, because it doesn’t give you the actual page numbers.”

For tasks measuring known-item searches, CRCnetBASE had mixed ratings, placing in the bottom half for two of the three tasks in this subset. Task 5 was CRCnetBASE’s best ranking on any task at second place, followed by seventh and ninth (last) place for Tasks 1 and 8, respectively. CRCnetBASE’s Search Term (Task 5) tool was intuitive and easily found by users, whereas the layout of separate tabs for e-book titles versus e-book chapters again made finding a specific title (Task 1) or identifying another e-book (Task 8) tricky.

On all tasks evaluating tools and features for this platform (Tasks 3, 4, and 6), CRCnetBASE placed fifth. Task 6 (Citation Tool) proved problematic for some users, since this platform did not display any citation text directly, instead requiring users to download citations in .RIS format and then upload them into third-party citation management software (RefWorks, EndNote, etc.). Participants provided feedback on this, saying:

You say Download Citations, and it says please check at least one article. These aren’t articles, these are chapters, and it’s in a book. [Downloaded a chapter citation and tried to view the RIS file; see figure 5.] I can’t use it. That’s fine. I don’t even know what kind of citation it’s gonna come up with. I don’t know if it’s gonna have a host of different citations or if it’s, I don’t know, the basic information that you would use to create citations.

During follow-up questions post-testing, CRCnetBASE was not selected by any participants as their favorite platform, and correspondingly received the greatest number of votes (8, 80%) as the least preferred.

Springer Link

Springer Link produced a 100% Efficient Success Rate for one task (Task 1), and in only one other task did it produce an ESR above 75% (Task 8). Additionally, the platform achieved an OSR above 75% on only three of the six tasks available on the platform (Tasks 1, 4, and 8).

At the time of testing, the Springer Link platform offered a PDF download option—by chapter, or entire book—and an HTML option for chapters; the full book PDF download was available for some, but not all, titles. This meant that testers had to choose whether to open a PDF or HTML version of the chapter, or download the full book PDF, which led to varying degrees of success for the tasks tested in this research project.

When the researchers evaluated the platforms during conceptualization of this research project, Springer Link did not seem to offer any sort of citation tool, so the rubric scored Task 6 as null.26 However, when analyzing the screen-captured testing videos, the researchers did find a single instance revealing that the feature was available (see figure 6). The feature was so obscured that neither the testers nor the researchers during preparation discovered the tool; users had to be in the full book PDF (not a chapter PDF) to find it. The researchers maintained the null scoring, but the lack of intuitiveness of this feature seemed worth mentioning as a failure on the part of the platform.

Several participants mistakenly believed that Springer Link’s Book Metrics/Bookmetrix feature was a citation tool, whereas its actual purpose is to show how often a book is cited elsewhere (see figures 7 and 8). Thus, Task 6 was scored 70% Null, with the three failures (30%) due to participants incorrectly concluding that Bookmetrix was a citation tool.

Participants struggled with navigation in Springer Link, as demonstrated by eighth- and sixth-place rankings for Tasks 2 and 4, respectively, but fared better on Tasks 1 (first) and 3 (second). E-books were clearly displayed, which made it easy for testers to find the initial title (Task 1) as well as another e-book on the platform (Task 8). However, Go To Page 50 (Task 2) saw a high Failure Rate (70%) because only the PDF option displayed page numbers (the HTML version did not) and because e-book page numbers and PDF file page numbers did not always correspond. User comments such as, “I don’t see any numbers . . . like . . . okay well I know page 50 is somewhere in here . . . I just don’t see it,” and, “No side bar tool, I’m guessing this is the next chapter?” add weight to that perception. The researchers found testers’ struggles with Springer Link’s direct page navigation concerning, considering that it was the only platform in this study to generate a majority Failure Rate on this task. The platforms tested in the 2017 study by Tovstiadi, Wiersma, and Tingle likewise saw little difficulty in this area of functionality; that study reported that “few students” (less than 25%) “actually struggled to find the appropriate page.”27 Additionally, the researchers anecdotally believe the high rate of intervention needed to help participants proceed beyond Task 2 may have given an unintended advantage on subsequent tasks. Nevertheless, problems similar to those seen in Task 2 were observed again on Task 4: when asked to navigate to the Next Chapter/Entry of the e-book, more than 44% of participants had an Alternate Success, by far the highest ASR of any platform for this task (the next highest was 6.7%). This indicated to the researchers that Springer Link’s option to download a PDF by chapter or for the entire e-book might not have been familiar to testers.

Springer Link had mixed results on the tasks measuring known-item searches (Tasks 1, 5, and 8), placing first, seventh, and fifth, respectively. The higher rankings of Tasks 1 and 8 relative to Task 5 signal that the search tools available for discovering an e-book on the platform were intuitive, but that searching within an e-book for a phrase or keyword was far less so. Participant actions that support this perspective include

  • using Ctrl+F to search within the e-book;
  • searching first within a chapter, and then having to exit and redo the search within the entire e-book; and
  • searching the entire platform, rather than the e-book, for the search term.

One participant noted annoyance that search results only displayed which chapters contained the search term, and that further detail (highlighting the term, number of times it appears, etc.) was not offered. As was the case with IGI Global, participants’ success or failure in using Ctrl+F to search the e-book depended on whether the chapter they had entered contained the provided search term. Thus, an incorrect conclusion could easily be drawn if the participant was not using the intended search box tool (located on the e-book’s main detail page) to search the entire e-book.

Springer Link placed second on Task 3 even without offering any sort of specialized tool for moving between pages or chapters. As previously mentioned, the PDF option offered easy page navigation (Task 3) but did require jumping back to the book’s main page to open the next chapter if the tester had not chosen to download the entire e-book. The note/comment tool was not offered on the Springer Link platform, and the citation tool was not believed to exist (see discussion above).

Collectively, these issues (low rankings for Go To Page 50, Next Chapter/Entry, and Search Term, and the citation tool’s hidden location compounded by confusion with the Bookmetrix feature) may explain why Springer Link was chosen as the least preferred platform four times (40%, tied for third place) and received only one vote (10%, sixth place) for favorite platform.

JSTOR

JSTOR produced 100% Efficient Success Rates for two tasks (Task 3, Next Page; Task 4, Next Chapter/Entry), but otherwise proved consistently difficult for testers to use, placing in the bottom rankings for more than half of the eight tasks measured. When asked on the initial task to find a specified title, most participants (80%) were not able to locate the e-book. The platform’s display of e-book chapters more prominently than the e-book title, which was smaller, italicized, and lacking the subtitle (see figure 9), may have contributed to this high Failure Rate.

The need for improvement in this area is further highlighted by participant comments after researchers revealed that the search results were chapters, such as, “Oh! Now, if I had known that . . .”

For navigation, the JSTOR platform offers a thumbnails view for jumping to a specific page (see figure 10). Many testing participants did not notice this feature; instead, they opened a chapter, noted the page range, and then navigated using the “Next Page” buttons. One tester corroborated that observation by stating, “It would be nice to be able to just type in the page [number] where I want to go, so I’ll just have to click the pages [next page button].” The researchers believe testers overlooked the direct page navigation because thumbnail navigation is unusual among e-book platforms and thus did not correspond to participants’ experiences elsewhere.

The Next Page and Next Chapter/Entry buttons were noted and appreciated by participants, with feedback such as, “So it says next chapter right there--this part of this site I like a lot.” Both navigation tasks (3 and 4) achieved 100% Efficient Success and produced top or near-top rankings (first on Task 3; second on Task 4) relative to all other platforms.

With regard to known-item searches, JSTOR produced low rankings for the three tasks measuring this attribute, placing last (ninth) and second to last (eighth) for Find E-Book (Task 1) and Find Another E-Book (Task 8), respectively. Since both Task 1 and Task 8 dealt with finding and identifying e-book titles on the platform, this was a notable problem with JSTOR.

Tools and features seemed mostly intuitive for participants across the three tasks JSTOR had available: the platform finished first on Task 3 (Next Page), followed by second and sixth place for the Next Chapter/Entry and Citation Tool tasks, respectively.

In post-testing follow-up questions, JSTOR received six votes (60%, fourth place) for least preferred e-book platform and only two votes (20%, tied for fifth place) for favorite platform. The demonstrated struggle to find and identify e-book titles (Tasks 1 and 8) may have influenced these opinions, as feedback comments included, “Harder to find the title of the book. It didn’t have the actual title on the entry. Seems a little less developed. Older design,” and “Had a harder time navigating, takes longer to figure out.”

Limitations and Further Research

The study encountered several limitations that are worth noting. The e-book selected for testing on the GVRL platform was a “featured” title, meaning that it was displayed on the homepage when a user entered the platform. This limited the extent to which participants actually had to search for the book rather than merely recognize it, so Task 1 performance may have been skewed in GVRL’s favor. However, since Task 8 also tested the capability to search for books, the researchers feel that this usability theme was still adequately explored on this platform.

Rather than force users to overcome the additional hurdle of creating a new account, the researchers artificially handled individual user-account logins on some platforms. This was deemed necessary because, for example, the Annotate tool in Oxford Reference is not displayed on the screen at all unless a user is logged in to the platform, so participants would have had no opportunity to identify the feature’s availability without a preceding login. Therefore, prior to each test, the researchers logged in to an individual account on several platforms, such as Oxford Reference, EBSCO eBooks, and GVRL. Although this simplified the testing procedure, the results may not accurately reflect user interactions with various tools when such intervention is not present. Further research in this area may especially be warranted when a platform provides a capability such as note-taking but saves the data only for the duration of the current session unless a user logs in to an individual account, as is the case with EBSCO eBooks; it is unclear from the present study whether users would accurately understand the session-duration limitation of such tools.

While analyzing recorded test sessions, the researchers realized that the testing procedure should have been better normed before tests began. Some test administrators read each task aloud to the participant, while others allowed the participant to read the tasks themselves. More consistent practices in this regard would have improved the reliability of the testing method and might have resulted in fewer skipped tasks and null values. Additionally, as with any research conducted online, technology may not always cooperate. This study encountered several issues with account logins, system timeouts, Flash compatibility, slow load times, and other miscellaneous errors that resulted in null testing values.

Self-selection bias may have been a problem in participant recruitment, since those most motivated to respond to the invitation quickly (possibly representing those most interested in e-books) would have been chosen first for the limited number of participant slots. In terms of researcher “lessons learned,” recruitment materials should have been clearer that all testing appointments were on campus, because several online students volunteered but were unable to attend live testing sessions in the library. If the researchers could devise an approach to include such volunteers in virtual testing in the future, it would be advantageous for their experience to also be represented. Another lesson learned was that efficiency would have been improved by scheduling testing appointments via a calendar tool such as LibCal, rather than via email back-and-forth.

Although the study’s rubric permitted the researchers to bring a unique perspective to assessing platform usability, the rubric relies on certain assumptions about developer intent. Platform Help files were used as much as possible to determine developer intent, but nonetheless, the determination of what constitutes Efficient versus Alternate Success is still subjective to some degree. This may act as a limitation of the study, but it may also present a possible area for further research. E-Book usability research could benefit from the development of a more universal rubric of what constitutes ideal usability in e-book navigation, searching, and other key areas of functionality, informed by studies of user expectations and a broader base of research in user interface design.

Further research should delve deeper into testing turnaway experiences, which this study only briefly explored. Other advanced features, such as downloading and DRM issues, should also be examined through more rigorous user-based testing. Additionally, further research could investigate possible contradictions between user opinions and user behavior with regards to highlighting and note-taking tools. A number of users commented that they liked/disliked a certain platform because it did/didn’t offer note-taking, but the researchers wonder how many of those users would have noticed the presence or absence of such a tool outside of the testing scenario. Future studies could also explore how expressed opinions about the importance or appreciation of these tools map to actual user behavior in non-testing circumstances, perhaps through ethnographic studies and, if available, statistics regarding the usage of such tools in a platform.

One of the most notable challenges in usability testing of e-book platforms is the rapid pace of platform modification. Vendors and publishers perform their own iterative testing and often release small updates on a recurring basis. Since this study’s testing concluded, the researchers have already observed substantive changes to Springer Link, JSTOR, and CRCnetBASE, the last of which has migrated to an entirely new platform. For future studies, researchers are encouraged to clearly document, for instance via notes and screenshots, the contemporary aesthetics and functionality of each platform, since these may change before data analysis is complete. Even with such precautions to mitigate differences between testing and analysis, the possibility of changes appearing during testing procedures remains a risk that can seriously complicate the collection and comparison of data.

Conclusion

The researchers did not intend to find an overall “winner” among the platforms. However, the various comparative rankings and success/failure rates may prove valuable, or at least interesting, to libraries facing collection development decisions between these platforms. These data may also inform librarians as to which platforms may be better suited for a particular individual or population, and may provide an example of how other task-based comparisons of multiple platforms might be conducted. While content often drives platform choice, even in situations where the choice is predetermined these findings may make libraries proactively aware of usability concerns. Additionally, this study’s findings may inform libraries’ user instruction efforts by identifying which aspects of a given platform are less intuitive and may require more explication.

The data from this study show that participants mostly preferred Ebook Central among all platforms tested: it generated the highest number of votes for favorite platform, the lowest number of votes for least preferred platform, and the second-lowest average time per testing session. In terms of overall task success, however, EBSCO eBooks and Safari outperformed all others, with EBSCO eBooks achieving 100% OSR ratings on six of nine tasks and Safari achieving 100% OSR ratings on five of eight tasks. These two platforms also led the group in participant efficiency, with Safari generating a cumulative ESR of 98.3% and EBSCO eBooks producing a cumulative ESR of 91.92%.

Finally, this study’s findings suggest several key vendor design recommendations to ensure an optimal user experience, including

  • use of standard, recognizable icons to maintain consistency with user experience across the web;
  • clear and readily visible explanatory text to accompany icons for which no standard exists;
  • clear and logical choices regarding how and where content levels (e.g., book, chapter, page) are displayed and differentiated;
  • consistent numbering of pages in both the book and the PDF file, even within chapter-level downloads; and
  • clear and simple presentation of search results that mirror user experience across the web.

Author Contributions

The authors have chosen to detail their contributions to this project according to the CRediT taxonomy (https://www.casrai.org/credit.html). All four authors contributed equally to Methodology, Investigation, Formal analysis, and Writing—review and editing. Kat Mueller’s additional contributions were Conceptualization, Project administration, Visualization, and Writing—original draft of the Abstract, Introduction, Results, Discussion, and Conclusion sections. The additional contributions of Zachary Valdes were Project administration (interim), Visualization, and Writing—original draft of the Results and Discussion sections. Erin Owens’ additional contributions were Conceptualization and Writing—original draft of the Literature Review, Methodology, and Limitations and Further Research sections. Cole Williamson’s additional contributions were Writing—original draft for the Literature Review section.

References

  1. Selina Adelle Berg, Kristin Hoffman, and Diane Dawson, “Not on the Same Page: Undergraduates’ Information Retrieval in Electronic and Print Books,” Journal of Academic Librarianship 36, no. 6 (2010): 518–25.
  2. Kendall Hobbs and Diane Klare, “Are We There Yet? A Longitudinal Look at E-Books through Students’ Eyes,” Journal of Electronic Resources Librarianship 28, no. 1 (2016): 9–24.
  3. Noorhidawati Abdullah and Forbes Gibb, “Students’ Attitudes towards E-Books in a Scottish Higher Education Institute: Part 2, Analysis of E-book Usage,” Library Review 57, no. 9 (2008): 676–89.
  4. Denise Shereff, “Electronic Books for Biomedical Information,” Journal of Electronic Resources in Medical Libraries 7, no. 2 (2010): 115–25.
  5. Michael Heyd, “Three E-Book Aggregators for Medical Libraries: NetLibrary, Rittenhouse R2 Digital Library, and STAT!Ref,” Journal of Electronic Resources in Medical Libraries 7, no. 1 (2010): 13–41.
  6. As explained by QMethod.org, “Fundamentally, Q Methodology provides a foundation for the systematic study of subjectivity . . . Q Methodology (Q) is a complete methodology which involves technique (sorting), method (factor analysis), philosophy, ontology, and epistemology. Q reveals and describes divergent views in a group as well as consensus.” Q Methodology for the Scientific Study of Human Subjectivity, https://qmethod.org/.
  7. Aaron K. Shrimplin et al., “Contradictions and Consensus—Clusters of Opinions on E-Books,” College & Research Libraries 72, no. 2 (2011): 181–90.
  8. Esta Tovstiadi and Gabrielle Wiersma, “Comparing Digital Apples and Oranges: A Comparative Analysis of E-books Across Multiple Platforms,” Serials Librarian 70 (2016): 175–83.
  9. The Academic Database Assessment Tool, originally created by JISC before being hosted by CRL, has since been discontinued, though the descriptive information has been migrated to eDesiderata, http://edesiderata.crl.edu/.
  10. Esta Tovstiadi and Gabrielle Wiersma, “Inconsistencies between Academic E-book Platforms: A Comparison of Metadata and Search Results,” portal: Libraries and the Academy 17, no. 3 (2017): 617–48.
  11. For more literature in this category, see also Gail Golderman and Bruce Connolly, “Infobase Ebooks,” Library Journal 139, no. 7 (2014): 107; Michael Gorrell, “The Ebook User Experience in an Integrated Research Platform,” Against the Grain 23, no. 5 (2011): 36–40; Gita Gunatilleke, “Ebook Library (EBL),” Charleston Advisor 8, no. 2 (2006): 16–19; Péter Jacsó, “CSA Illustrata, Gale Virtual Reference Library, and Cambridge Journals,” Online 31, no. 3 (2007): 53–56; Stacy Magedanz, “Ebrary Revisited,” Charleston Advisor 6, no. 2 (2004): 18–21; Sue Polanka, “E-Book Aggregators,” Booklist 104, no. 18 (2008): 68.
  12. Caroline Gale, “Champions and Ebooks: Using Student Library Champions to Inform E-book Purchasing Strategies,” Insights 29, no. 2 (2016): 181–85.
  13. Daniel G. Tracy, “Format Shift: Information Behavior and User Experience in the Academic E-book Environment,” Reference & User Services Quarterly 58, no. 1 (2018): 40–51, https://journals.ala.org/index.php/rusq/article/view/6839.
  14. Tracy, “Format Shift,” 42.
  15. Peter Hernon et al., “E-book Use by Students: Undergraduates in Economics, Literature, and Nursing,” Journal of Academic Librarianship 33, no. 1 (2007): 3–13.
  16. Noorhidawati Abdullah and Forbes Gibb, “Students’ Attitudes towards E-books in a Scottish Higher Education Institute: Part 2, Analysis of E-book Usage,” Library Review 57, no. 9 (2008): 676–89.
  17. Laura C. O’Neill, “A Usability Study of E-book Platforms” (master’s thesis, UNC-Chapel Hill, 2009), https://cdr.lib.unc.edu/record/uuid:9a109741-0d0e-4a11-b02b-21893c32f8b9.
  18. Michael Gorrell, “The eBook User Experience in an Integrated Research Platform,” Against the Grain 23, no. 5 (2011): 35–36, 38, 40.
  19. Tao Zhang, Xi Niu, and Marlen Promann, “Assessing the User Experience of Ebooks in Academic Libraries,” College & Research Libraries 78, no. 5 (2017): 578–601, https://doi.org/10.5860/crl.78.5.578.
  20. Zhang, Niu, and Promann, “Assessing the User Experience,” 585.
  21. Esta Tovstiadi, Gabby Wiersma, and Natalia Tingle, “User’s Choice: eBook Platforms” (PowerPoint presentation, Electronic Resources and Libraries Conference, Austin, Texas, April 5, 2017), http://scholar.colorado.edu/libr_facpapers/123. For additional discussion by the authors of this research, see also “Telling Us What We Don’t Know: Evaluating User Experience Across Different Ebook Platforms” (PowerPoint presentation, American Library Association Midwinter Meeting, Denver, Colorado, February 11, 2018), http://scholar.colorado.edu/libr_facpapers/122; “There are No Right or Wrong Answers: Improving the User Experience One Usability Test at a Time” (PowerPoint presentation, Electronic Resources and Libraries Conference, Austin, Texas, March 7, 2018), http://scholar.colorado.edu/libr_facpapers/124.
  22. Tovstiadi, Wiersma, and Tingle, “User’s Choice,” slides 16, 18, 18, 21, 25, and 28.
  23. Tovstiadi, Wiersma, and Tingle, “User’s Choice,” slide 19.
  24. Tovstiadi, Wiersma, and Tingle, “User’s Choice,” slide 25.
  25. Tovstiadi, Wiersma, and Tingle, “User’s Choice,” slide 16.
  26. See also the Methodology section regarding the issue with the Springer Link citation tool.
  27. Tovstiadi, Wiersma, and Tingle, “User’s Choice,” slide 18.

Table 1. Participant Platform Testing Assignments

Participant | Platforms Tested (In Order)
1 and 16 | EBSCO, EBL, GVRL, Safari
2 and 17 | EBSCO, CRCnetBase, EBL, Safari
3 and 18 | JSTOR, Safari, Oxford
4 and 19 | EBL, EBSCO, IGI, GVRL
5 and 20 | EBL, IGI, EBSCO, CRCnetBase
6 and 21 | JSTOR, Oxford, IGI
7 and 22 | GVRL, EBSCO, Springer, EBL
8 and 23 | CRCnetBase, Springer, EBSCO, EBL
9 and 24 | Oxford, Springer, JSTOR
10 and 25 | GVRL, EBL, CRCnetBase, EBSCO
11 and 26 | Oxford, EBSCO, CRCnetBase, EBL
12 and 27 | Oxford, GVRL, JSTOR
13 and 28 | Safari, EBSCO, IGI, EBL
14 and 29 | IGI, EBSCO, EBL, Springer
15 and 30 | Springer, JSTOR, Safari

Table 2. EBSCO eBooks Performance Compared to All-Platform Averages

Table 3. Ebook Central Performance Compared to All-Platform Averages

Table 4. GVRL Performance Compared to All-Platform Averages

Table 5. Oxford Reference Performance Compared to All-Platform Averages

Table 6. Safari Performance Compared to All-Platform Averages

Table 7. IGI Global Performance Compared to All-Platform Averages

Table 8. CRCnetBASE Performance Compared to All-Platform Averages

Table 9. Springer Link Performance Compared to All-Platform Averages

Table 10. JSTOR Performance Compared to All-Platform Averages

Table 11. Most and Least Preferred Platforms

Platform | Most Preferred (Votes) | Most Preferred (%)* | Ranking | Least Preferred (Votes) | Least Preferred (%)* | Ranking
Ebook Central | 11 | 55 | 1 | 1 | 5 | 1 (tie)
GVRL | 4 | 40 | 2 | 1 | 10 | 2 (tie)
EBSCO eBooks | 7 | 35 | 3 | 1 | 5 | 1 (tie)
Safari | 3 | 30 | 4 | 4 | 40 | 3 (tie)
Oxford Reference | 2 | 20 | 5 (tie) | 1 | 10 | 2 (tie)
IGI Global | 2 | 20 | 5 (tie) | 4 | 40 | 3 (tie)
JSTOR | 2 | 20 | 5 (tie) | 6 | 60 | 4
Springer Link | 1 | 10 | 6 | 4 | 40 | 3 (tie)
CRCnetBASE | 0 | 0 | 7 | 8 | 80 | 5

* Due to the variance in the total number of tests conducted for each platform, the researchers elected to rank platform preference by percentage of vote type, and not by total number of votes received for each category.
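To illustrate the arithmetic behind this footnote, the minimal Python sketch below (not the authors' code) ranks platforms by the percentage of favorite-platform votes rather than by raw counts. The vote counts come from table 11; the per-platform test counts are inferred from the rotation assignments in table 1 (EBSCO eBooks and Ebook Central appear in ten of the fifteen rotations, every other platform in five), an inference consistent with the percentages reported in table 11. Variable names are invented for demonstration.

  # Illustrative only: percentage-based preference ranking with unequal test counts.
  favorite_votes = {
      "Ebook Central": 11, "GVRL": 4, "EBSCO eBooks": 7, "Safari": 3,
      "Oxford Reference": 2, "IGI Global": 2, "JSTOR": 2,
      "Springer Link": 1, "CRCnetBASE": 0,
  }
  tests_run = {
      "Ebook Central": 20, "EBSCO eBooks": 20, "GVRL": 10, "Safari": 10,
      "Oxford Reference": 10, "IGI Global": 10, "JSTOR": 10,
      "Springer Link": 10, "CRCnetBASE": 10,
  }

  # Share of testing sessions in which each platform was named the favorite.
  pct = {p: 100 * v / tests_run[p] for p, v in favorite_votes.items()}

  # Ranking by raw votes would put EBSCO eBooks (7) ahead of GVRL (4);
  # ranking by percentage reverses them (35% vs. 40%), matching table 11.
  for platform, share in sorted(pct.items(), key=lambda kv: kv[1], reverse=True):
      print(f"{platform}: {share:.0f}%")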

Table 12. Shorthand Phrases to Represent Testing Tasks

Task | Shorthand
Task 1, Find specified e-book title | Find E-Book
Task 2, Navigate to Page 50 | Go To Page 50
Task 3, Go to the Next Page | Next Page
Task 4, Go to the Next Chapter | Next Chapter/Entry
Task 5, Search for provided term within the e-book | Search Term
Task 6, Find a citation for the e-book | Citation Tool
Task 7, Save a Note in this e-book | Note Tool
Task 8, Find another e-book title | Find Another E-Book
Task 9, Turnaway message meaning | Turnaway

Figure 1. Comparison of Turnaway Message (Task 9) Performance in EBSCO eBooks and Ebook Central

Figure 2. EBook Central Search Term (Task 5) results

Figure 3. CRCnetBASE display of e-book titles versus chapters (tab layout)

Figure 4. CRCnetBASE PDF Go-to-Page function

Figure 5. CRCnetBASE citation tool

Figure 6. Springer Link Citation Tool

Figure 7. Springer Link’s Book Metrics Feature

Figure 8. Detail of Bookmetrix Data in Springer Link

Figure 9. JSTOR e-book display of titles versus chapters

Figure 10. JSTOR Thumbnails Tab
