rusq: Vol. 52 Issue 2: p. 123
Playing Games to Improve the Quality of the Sources Students Cite in their Papers
Karen Markey, Chris Leeder, Charles L. Taylor

Karen Markey (ylime@umich.edu) is Professor, School of Information
Chris Leeder (cleeder@umich.edu) is a Doctoral Student, School of Information
Charles L. Taylor (chartay@umich.edu) is Lecturer in English, Comprehensive Studies Program, College of Literature, Sciences, and the Arts, University of Michigan, Ann Arbor, Michigan

Abstract

This study seeks to determine the effectiveness of the BiblioBouts information literacy game for improving the quality of the sources undergraduate students cite in their written papers. BiblioBouts was incorporated into a second-year English class of 45 students in which about half played the game from start to finish (i.e., players) and the other half failed to play all or part of the game (i.e., nonplayers). The authors hypothesized that the quality of the sources players cited in their papers would improve as a result of playing BiblioBouts and that players would cite more scholarly sources in their final-paper bibliographies than nonplayers. About 90 percent of the sources players cited in their in-game bibliographies were scholarly sources. When players transitioned to their final papers, the percentage of scholarly sources they cited dropped by half (to 44.6 percent); however, it surpassed the percentage (35.2 percent) of scholarly sources nonplayers cited in their final papers. The authors suggest that players put scholarly sources into play and cited them in their in-game bibliographies knowing that they would earn high scores for doing so. The authors also raise the question of whether the second-year students in this class, and underclassmen generally, understand scholarly sources well enough to integrate them into their papers. BiblioBouts players benefited in several other ways, including being exposed to many more sources than they would have found on their own, becoming familiar with the library portal and its many available databases, and mastering citation management software for saving online sources’ citations and full-texts.


When undergraduate students arrive at the academy, they are operating for the first time in the same rich, deep, diverse information environment that faculty use to teach the knowledge of the disciplines and to extend the frontiers of knowledge. Bereft of expert knowledge of the disciplines, many students need guidance about where to start and what expert research and discovery tools to use. As a result, students fall back on their habitual patterns: Google, Wikipedia, and the web.1 When they have exhausted this comfort zone, they do not know what to do next. This point of need is precisely when students are most receptive to information literacy instruction.

Our approach is to meet students online where most library research now takes place and put an online tool into their hands that teaches them how to conduct library research while they go about the business of completing their assignments. This tool is the BiblioBouts information literacy game.

BiblioBouts is unique. It exposes many students to expert research and discovery tools for the first time and requires them to use these tools repeatedly. The game is institution-neutral, discipline-neutral, and class-rank-neutral (suitable for college freshmen and up). BiblioBouts is best suited to courses in which students complete research-and-writing assignments. Game-like features such as a leader board, levels, badges, and a scoring log motivate students to continue playing, giving them opportunities to gain valuable practice performing a wide variety of information literacy tasks and to increase their understanding of the underlying information literacy concepts. Put students into a game situation in which they perform a technical reading of a source to assess its scholarly nature or judge the completeness of a cited source, and they will perform the task repeatedly because they want to watch their score increase, their name climb the leader board, and their trophy case fill with badges.

Our information literacy research has embraced games because good games have principles of learning built into them.2 For example, games allow players to follow hunches, get results by trial and error, and engage in self-discovery. Games reward players who exceed minimum-level expectations, stimulate players’ competitive spirits, and publicly acknowledge the skillful actions of game leaders. Games also have the potential to scale from one student to thousands. Gaming’s strengths make it an intriguing approach to test as a method of developing students’ information literacy skills and knowledge.


RESEARCH QUESTIONS

This study seeks to determine the effectiveness of the BiblioBouts information literacy game for improving the quality of the sources undergraduate students cite in their written papers. It answers these specific research questions:

  1. Is the quality of the sources undergraduate students choose for their in-game bibliographies better than the quality of the sources they originally contributed to the game?
  2. Is the quality of the sources students cite in their final papers better than the quality of the sources they choose for their in-game bibliographies?
  3. Do BiblioBouts players cite more scholarly sources in their final-paper bibliographies than nonplayers?
  4. Are students’ source credibility assessments comparable to the quality ratings of expert coders?

Answers to these research questions will help determine the effectiveness of BiblioBouts for teaching students information literacy skills and concepts and guide the future design and development of the BiblioBouts game.


DEVELOPING BIBLIOBOUTS

BiblioBouts was conceived as a web-based game with social networking features that gives students guidance while they conduct library research to complete a written paper. BiblioBouts’ design, development, and evaluation have been supported with funds from the Institute of Museum and Library Services (IMLS). This paper’s first author is the principal investigator of the IMLS-sponsored research project, and although she is ultimately responsible for all aspects of BiblioBouts’ design, development, and evaluation, she is assisted by project staff charged with graphic design, programming, information literacy training, user support, and data collection and analysis. Librarians at four participating institutions also assist, some recruiting instructors at their universities to incorporate BiblioBouts into their academic courses and others deploying BiblioBouts in their library’s information literacy instruction program. This paper’s second and third authors are project staff and an instructor, respectively, whose class played BiblioBouts in winter 2011. From September 2010 through December 2013, BiblioBouts is available free of charge to instructors at colleges and universities in North America and abroad; however, evaluation activities are restricted to participating institutions where librarians assist project staff with human subjects review board business.


PLAYING BIBLIOBOUTS

BiblioBouts is an online tournament made up of a series of mini-games, or bouts, each of which introduces students to a specific subset of information literacy skills within the overall research process. Instructors choose a broad-based topic for students to research, then schedule the game’s four bouts over a three-week period. Table 1 describes the game’s bouts and their suggested durations and summarizes the information literacy skills, concepts, and tools students encounter when playing each bout.

In the initial Donor bout, students search the web and library databases for sources and save them using the Zotero citation management tool. In the Closer bout, they choose their best sources to “do battle” in the game. In the Tagging & Rating bout, students evaluate the content and quality of their opponents’ sources, including rating their credibility and relevance on a 100-point scale. Lastly, students specify a research topic and choose the best sources from the pool of everyone’s sources for a best bibliography on this topic. The objective of BiblioBouts is to donate the very best sources, ones that one’s opponents rate highly and choose for their papers’ best bibliographies.


LITERATURE REVIEW

The evolution of academic librarianship is now being driven by a paradigm shift occurring in society as a whole as the result of the rapid development of new information technologies.3 One outcome of this paradigm change has been the creation of the “blended librarianship” model and its vision of integrating library services and practices into the teaching and learning process.4 Blended librarians embrace educational technology and design thinking in creating innovative approaches to their educational role.5 The Association of College and Research Libraries’ Information Literacy Competency Standards for Higher Education have been mapped to progressively advancing levels of cognitive development.6 In response, new methods of delivering information literacy instruction are being explored in which information technology is utilized to help support student learning.7

Educational games are well suited to scaffold students’ acquisition of information literacy skills. The use of games for learning has been widely researched, and the literature on “serious” or educational games is extensive, covering many educational disciplines and game genres. Several authors have discussed the elements of traditional games that incorporate principles of good learning, such as developing organizational and problem-solving skills and fostering engagement through goals, feedback, interaction, and achievements.8 In his foundational work What Video Games Have to Teach Us about Learning and Literacy, Gee presents a list of thirty-six learning principles embodied by video games, including active critical learning, meta-level thinking, experimental probing, and the on-demand and just-in-time presentation of needed information.9 Games enable students to learn by doing, undertake purposeful and meaningful tasks, reflect on their experiences, and work with others to achieve learning goals, while also helping to create engagement.10 Ultimately, effective game design can create personalized learning experiences that motivate players to learn new skills without realizing they are in the midst of the learning process.11

Educational games can incorporate these principles of good learning into information literacy instruction as a structure for learning and practicing the information-gathering process.12 As students search for and evaluate information, they have the opportunity for repeated practice and reinforcement, which can be particularly effective for skills-based learning such as information literacy, where logical thinking and problem solving are crucial components.13

To measure the effect of the BiblioBouts game on students’ learning of information literacy skills, an instrument was developed to evaluate the quality of sources chosen during the game. Citation analysis of student bibliographies is frequently used to measure students’ information literacy knowledge generally or the effectiveness of information literacy instruction specifically. In fact, a meta-analysis of 91 case studies of information literacy assessment found that citation analysis of bibliographies was used in 17 percent of studies, the second-most common technique behind multiple-choice questionnaires, which were used in 34 percent of the studies.14 Several early studies set the groundwork for this method.15 Kirk scored bibliographies according to the variety, relevance, and scholarliness of cited sources, then compared the resulting scores from two types of bibliographic instruction.16 Dykeman and King expanded the evaluation criteria to writing, organization, and content, and compared the performance of a treatment group whose members received bibliographic instruction with a control group that did not.17

Measuring the quality of students’ cited sources has continued in more recent studies.18 In many of these studies, quality is measured by scoring the students’ use of scholarly sources against a standard rubric. Long and Shrikhande used an “information literacy grading scale,”19 while Tuñón and Brydges’s rubric scored a citation on a scale of 1 to 4 for five criteria (breadth of resources, depth of understanding, level of scholarliness or “quality,” currency, and relevance).20 Middleton developed a “scholarly index” (SI), a rating based on the proportion of scholarly versus nonscholarly sources in a student’s bibliography.21 A limitation of such systems is that the binary choice of scholarly versus nonscholarly leaves no room for more fine-grained analysis of the level of scholarliness of different types of sources, especially web-based sources that do not always fit conventional criteria.
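To make the proportion idea concrete, here is a minimal sketch of a proportion-based scholarly index in Python; reading SI as the share of scholarly citations in a bibliography is our assumption, not Middleton’s published formula:

    # Minimal sketch of a proportion-based scholarly index (SI).
    # Assumption: SI is the share of scholarly citations in a bibliography;
    # Middleton's published formula may differ in its details.
    def scholarly_index(citations):
        """citations: list of (title, is_scholarly) pairs."""
        if not citations:
            return 0.0
        scholarly = sum(1 for _, is_scholarly in citations if is_scholarly)
        return scholarly / len(citations)

    bibliography = [
        ("Peer-reviewed journal article", True),
        ("Newspaper story", False),
        ("Conference paper", True),
    ]
    print(f"SI = {scholarly_index(bibliography):.2f}")  # SI = 0.67

The boolean flag in this sketch is exactly the limitation noted above: every source falls into one of two bins, with no room for degrees of scholarliness.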

To score students’ bibliographies in this study, the authors reviewed other researchers’ scoring methods but ultimately chose not to adopt them because they uniformly assigned low scores to websites as a genre. When students play BiblioBouts, all the sources in the game come from the web; thus a scoring method was needed that could distinguish sources that come from credible versus non-credible websites, rewarding the former with high scores and penalizing the latter with low scores. The researchers developed a format-neutral taxonomy that can be applied to both online and offline sources. Its design was inspired partly by the quantitative rating scale of Middleton’s SI,22 and partly by Crowston and Kwasnik’s faceted classification system for categorizing online genres.23 The taxonomy comprises five facets: Information Format, Literary Content, Author Identity, Editorial Process, and Publisher. Each facet is subdivided into categories that describe attributes of sources. Academic faculty who teach undergraduate students participated in the development of the scheme’s scoring system, assigning numerical values to each category based on the desirability of the attribute in a source. Because the taxonomy is faceted, scoring is multidimensional, considering several factors at once. Tests of the taxonomy reveal that quality scores are consistently higher for scholarly, academic, and peer-reviewed sources, and consistently lower for anonymous, self-published, and nonreviewed sources, thus demonstrating the taxonomy’s usefulness as a measurement tool for citation analysis.24 The taxonomy is used in this study to answer its research questions about the quality of students’ sources.


DATA COLLECTION AND ANALYSIS

The researchers sought a medium-size undergraduate course at a participating institution where students played BiblioBouts while they completed a research-and-writing assignment. Fitting these specifications was a highly recommended but not required second-year English course named “Academic Argumentation” in which forty-five students were enrolled in winter 2011. The course is an immediate follow-up to the institution’s first-year composition class. It assists students in advancing from basic writing competence to engagement with more challenging paper topics and more refined argument techniques. The course represents many students’ first experience with a requirement to incorporate properly cited supporting materials into their written arguments. The instructor assigned students an essay on the topic “How climate change has brought about a particular adaptation in a particular human population.” Essays were expected to advance a causal argument, showing how a particular cause brought about a particular effect, supported by cited references to research. The syllabus gave these instructions regarding cited sources:

  • Use and cite at least 5 outside sources.
  • No more than 2 of the 5 sources can come from the open web.
  • Citations to Wikipedia are not acceptable.

The game’s Tagging & Rating (T&R) bout randomly selected an opponent’s source and displayed it to at least eight different players, who tagged its Information Format (IF) and Publisher (PUB), rated its credibility, and commented on the credibility ratings they gave it. The display included a full citation, a link to the source’s full-text, and, when available, an abstract. To assign IFs, players selected one from a pulldown menu that grouped IFs by their purpose. Here are the IFs grouped according to purpose:

  1. to inform and/or facilitate learning: Consumer Magazine, Consumer Newspaper, Trade Magazine, Trade Newspaper, Research Report, Conference Proceedings, Course Material, Encyclopedia, Scholarly Journal, Dissertation or Thesis, Public Affairs Information, Book
  2. to promote or persuade: Blog, Promotional Material, Policy Statement, Public Sharing Site, Informational Video
  3. to catalog or list: Database, Directory, Online Repository

Next, players tagged the source’s Publisher (PUB), choosing from these options in a pulldown menu: Individual Person, Commercial Business, Nonprofit Organization, K–12 Education Institution, Government Organization, and Higher Education Institution.
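For reference, the two tagging vocabularies just described can be summarized as simple data structures; this rendering is hypothetical and is not BiblioBouts’ internal representation:

    # The T&R bout's two controlled vocabularies, as described above.
    # A hypothetical rendering, not BiblioBouts' internal representation.
    INFORMATION_FORMATS = {
        "to inform and/or facilitate learning": [
            "Consumer Magazine", "Consumer Newspaper", "Trade Magazine",
            "Trade Newspaper", "Research Report", "Conference Proceedings",
            "Course Material", "Encyclopedia", "Scholarly Journal",
            "Dissertation or Thesis", "Public Affairs Information", "Book",
        ],
        "to promote or persuade": [
            "Blog", "Promotional Material", "Policy Statement",
            "Public Sharing Site", "Informational Video",
        ],
        "to catalog or list": ["Database", "Directory", "Online Repository"],
    }
    PUBLISHERS = [
        "Individual Person", "Commercial Business", "Nonprofit Organization",
        "K-12 Education Institution", "Government Organization",
        "Higher Education Institution",
    ]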

Next, players answered three questions about the source’s credibility. Students could display a tool tip that gave more explanation about the task by hovering over the underlined word (which was also highlighted in red). The questions and tips were the following:

  • To what extent do you believe that this source is written by an expert? (Tool tip: The source provides evidence that its author has expert knowledge, skills, and competence in the subject.)
  • To what extent do you believe that this source is trustworthy? (Tool tip: The source provides evidence that the information is truthful, fair, and reliable.)
  • To what extent do you believe that this source is scholarly? (Tool tip: The source is a product of serious academic study, and, possibly, original research.)

To register each credibility rating, students pulled a slider ranging from 0 percent (Not at all) to 100 percent (To a great deal). When players’ credibility ratings are discussed in the sections that follow, the reported figure is the average of the three separate ratings above.
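A minimal sketch of one logged T&R evaluation and of the averaging just described; the field names are illustrative, not BiblioBouts’ actual log schema:

    # One player's evaluation of an opponent's source, as logged by the
    # T&R bout. Field names are illustrative, not the actual log schema.
    evaluation = {
        "information_format": "Scholarly Journal",    # from the IF pulldown
        "publisher": "Higher Education Institution",  # from the PUB pulldown
        "ratings": {                                  # 0-100 slider values
            "expert": 80,
            "trustworthy": 75,
            "scholarly": 85,
        },
    }

    # The credibility figures reported below average the three ratings.
    ratings = evaluation["ratings"].values()
    average_credibility = sum(ratings) / len(ratings)
    print(f"average credibility = {average_credibility:.1f}%")  # 80.0%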

BiblioBouts logged students’ game play data. Logged data included citations and full-texts for the sources students contributed to the game in the Donor bout, their IFs, PUBs, credibility ratings, and comments for opponents’ sources in the T&R bout, and the sources they chose for their in-game bibliographies in the Best Bibliography bout.

The researchers anonymized students’ final papers, extracted cited sources from them, and gave the sources to coders. Coders were second-year master’s students in the University of Michigan’s School of Information, trained by the researchers to apply the taxonomy. Coders then retrieved the full-texts of both BiblioBouts sources and cited sources from students’ final papers and applied the taxonomy, scoring sources along these five attributes: (1) Information Format (IF), (2) Literary Content, (3) Author Identity, (4) Editorial Process, and (5) Publisher (PUB).

For each attribute, coders chose one element that best described the cited source. For IF and PUB attributes, taxonomy elements were the same as the ones players used to tag sources during game play. Thus, we can compare players’ tags with coders’ assigned elements to determine whether players’ ratings and coders’ taxonomy scores co-vary.

The taxonomy associated a numerical value from 1 (low) to 4 (high) with each element, representing how useful and appropriate the element would be to an undergraduate student researching sources for a class assignment. These values were determined by consensus of the researchers who developed the taxonomy and instructors whose classes played BiblioBouts.25 Table 2 gives examples of two sources, one scholarly and one nonscholarly, and shows how coders would score them using the taxonomy.

When coders assigned scored elements to a state-of-the-art literature review published in a Scholarly Journal, the source’s taxonomy score was close to the maximum of 20.0 points. A blog written by a college professor on an academic topic scored substantially lower at 12.9. Note that the values for all five elements describing the literature review were high, whereas the values for the blog were a combination of high and low scores.
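The arithmetic behind these totals is a straight sum of the five facet values; here is a sketch using the element values shown in table 2:

    # Taxonomy score = sum of the values (1.0 low to 4.0 high) assigned to
    # the element chosen for each of the five facets (maximum 20.0).
    def taxonomy_score(facet_values):
        return sum(facet_values.values())

    literature_review = {           # element values from table 2
        "Information Format": 4.0,  # Scholarly Journal
        "Literary Content": 3.8,    # Article/Synthesis
        "Author Identity": 4.0,     # Academic Professional
        "Editorial Process": 4.0,   # Peer-Reviewed
        "Publisher": 4.0,           # Higher Education
    }
    professor_blog = {
        "Information Format": 1.3,  # Blog
        "Literary Content": 2.3,    # Editorial
        "Author Identity": 4.0,     # Academic Professional
        "Editorial Process": 1.3,   # Self-Published
        "Publisher": 4.0,           # Higher Education
    }
    print(f"{taxonomy_score(literature_review):.1f}")  # 19.8
    print(f"{taxonomy_score(professor_blog):.1f}")     # 12.9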

An intercoder reliability study resulted in overall mean agreement of 0.80, or 80 percent agreement between coders. Given the large number of possible elements for each of the five facets, the researchers felt confident in the overall level of agreement between the coders.26
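The cited study details the reliability procedure; one plausible reading of a mean agreement figure is per-facet percent agreement averaged across the five facets, sketched below with illustrative codings:

    # Sketch of simple intercoder agreement: for each facet, the share of
    # sources on which two coders chose the same element, averaged across
    # facets. Assumption: the cited study's exact statistic may differ.
    FACETS = ["Information Format", "Literary Content", "Author Identity",
              "Editorial Process", "Publisher"]

    def mean_agreement(coder_a, coder_b):
        per_facet = []
        for facet in FACETS:
            matches = sum(a[facet] == b[facet]
                          for a, b in zip(coder_a, coder_b))
            per_facet.append(matches / len(coder_a))
        return sum(per_facet) / len(per_facet)

    # Illustrative: the coders agree on four of five facets for one source.
    a = [{"Information Format": "Blog", "Literary Content": "Editorial",
          "Author Identity": "Academic Professional",
          "Editorial Process": "Self-Published",
          "Publisher": "Higher Education"}]
    b = [dict(a[0], **{"Editorial Process": "Peer-Reviewed"})]
    print(mean_agreement(a, b))  # 0.8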

At the end of the game, a dozen players volunteered to participate in a focus group interview. Project staff recruited students by sending them email messages and offering a $25 VISA gift card for their participation. The interview was held over the lunch hour, and participants were treated to pizza and soda pop. Researchers also conducted a post-game personal interview with the instructor by phone after papers were graded and returned to students; the instructor was given the questions in advance to prepare answers. Thus the researchers enlisted several data collection methods to evaluate BiblioBouts’ effectiveness.


DATA ANALYSIS
BiblioBouts’ Players and Nonplayers

The instructor invited the 45 students enrolled in the class to play the BiblioBouts game. The instructor also registered and played BiblioBouts. Of the 45 students, 41 played one or more of BiblioBouts’ four bouts. Of the four students who did not play BiblioBouts at all, three forfeited the points that game play would have earned toward their final grade in the class and one dropped the course at mid-term. For this paper’s analyses, the authors divided students into two groups: (1) the 22 players who met each bout’s caps or quotas and (2) the 23 nonplayers who failed to play one or more bouts, failed to meet all bouts’ caps or quotas, or did not play BiblioBouts at all.

Quality of Students’ Sources

The researchers hypothesized that the quality of students’ chosen sources should increase as students progressed from bout to bout, culminating with writing and citing sources in their final papers. To test this hypothesis, the researchers analyzed the sources 22 players chose in the Closer and Best Bibliography bouts (this paper calls them BiblioBouts sources) and the sources the 21 players and 19 nonplayers cited in their final papers (this paper calls them final-paper sources).

Information Formats (IFs) of BiblioBouts and Final-Paper Sources

Table 3 lists IFs of players’ BiblioBouts sources. In the Closer bout, players submitted sources that addressed the broad-based topic that the instructor chose for their game. In the Best Bibliography bout, players chose sources for their in-game best bibliographies on a topic of their own choosing but within the purview of the original broad-based topic.

The number of scholarly journals dwarfed all other IFs, accounting for almost three-quarters of players’ BiblioBouts sources. A distant second was encyclopedias, accounting for only 7.0 percent and 7.6 percent of players’ closed and best-bibliography sources, respectively. All cited encyclopedia articles came from published online encyclopedias available through the library’s database portal, not from Wikipedia. Only 10.0 percent and 6.7 percent of players’ BiblioBouts sources were nonscholarly. Except for scholarly journals and encyclopedias, table 3 averages were based on very small numbers.

Players’ final-paper sources were roughly split between scholarly (44.6 percent) and nonscholarly (55.4 percent) sources. In contrast, scholarly sources (90.0 percent and 93.3 percent) characterized players’ BiblioBouts sources almost exclusively (table 3). Nonplayers relied even more than players did on nonscholarly sources: almost two-thirds (64.8 percent) of their final-paper sources were nonscholarly (table 4). Statistical tests were not performed because of small frequencies (fewer than 5) in the majority of cells.

With respect to cited sources in final papers, both players and nonplayers cited a wider variety of IFs than were represented in their BiblioBouts sources. Among scholarly IFs, scholarly journals (22.8 percent) and research reports (10.9 percent) were typical in players’ papers; only the former (19.1 percent) was typical in nonplayers’ papers. Among nonscholarly IFs, players cited consumer newspapers (11.4 percent), promotional material (10.3 percent), consumer magazines (8.2 percent), and policy statements (8.1 percent), and nonplayers cited consumer newspapers (19.1 percent), public affairs information (17.7 percent), promotional material (13.2 percent), and blogs (7.4 percent). Overall, players cited a wider array of IFs than did nonplayers.

The Credibility of Students’ Cited Sources

Table 5 compares the credibility ratings players gave to sources with coders’ taxonomy scores. Taxonomy scores for the cited sources in nonplayers’ papers are based on a 40 percent sample of their final-paper sources.

With respect to BiblioBouts sources, average coder taxonomy scores for scholarly sources were high at 19.0 in the Closer bout and 18.9 in the Best Bibliography bout, about 1 point less than the maximum score of 20.0. Most BiblioBouts sources were scholarly journals, which earned the highest taxonomy scores of 19.6 in the Closer bout and 19.5 in the Best Bibliography bout. Also with respect to BiblioBouts sources, coders’ taxonomy scores for nonscholarly sources were significantly lower than those for scholarly sources, with scores of 13.6 in the Closer bout (t(140) = 13.686, p < .001) and 13.7 in the Best Bibliography bout (t(103) = 9.84, p < .001).

Taxonomy scores for the scholarly sources in both players’ and nonplayers’ final papers averaged 17.4 points. These scores were not as high as the scores for BiblioBouts sources (19.0 and 18.9) because scholarly journals, the IF to which coders gave the highest taxonomy scores, did not dominate students’ final papers as much, making way for other scholarly IFs that did not score as high as scholarly journals. Coders scored nonscholarly sources in players’ and nonplayers’ final papers at an average of 12.5 and 12.8 points, respectively. These scores were significantly lower than the scores for the scholarly sources in the same papers (players’ final-paper sources: t(182) = 14.990, p < .001; nonplayers’ final-paper sources: t(66) = 7.378, p < .001). Across the board, scholarly sources earned higher taxonomy scores than nonscholarly sources.
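The comparisons above are independent-samples t-tests on coders’ taxonomy scores; the reported degrees of freedom match the group sizes (e.g., 82 scholarly plus 102 nonscholarly final-paper sources gives df = 182). A minimal sketch with made-up score lists, not the study’s data:

    # Independent-samples t-test comparing coders' taxonomy scores for
    # scholarly vs. nonscholarly sources. Score lists are illustrative.
    from scipy.stats import ttest_ind

    scholarly = [19.6, 19.5, 19.0, 18.9, 18.7, 19.2]
    nonscholarly = [13.6, 13.7, 12.5, 12.8, 14.0, 12.1]

    result = ttest_ind(scholarly, nonscholarly)  # equal-variance t-test
    # df = len(scholarly) + len(nonscholarly) - 2 = 10 for these lists
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4g}")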

With respect to credibility ratings, players hardly distinguished between scholarly and nonscholarly sources. Their scholarly sources averaged credibility ratings between 66 percent and 68 percent, and their nonscholarly sources averaged credibility ratings between 63 percent and 66 percent. Differences between players’ credibility scores for scholarly and nonscholarly sources were not significant. At first glance, players’ high credibility scores for some nonscholarly sources (e.g., consumer magazines, trade magazines, directories) were troubling, but because these scores were based on so few sources (five or fewer), they may not be indicative of how players would rate the credibility of a much larger collection of nonscholarly sources.

Players rated large numbers of scholarly journals and respectable numbers of encyclopedias and monographs. They gave scholarly journals and monographs high credibility ratings in the 60s and 70s and encyclopedias low ratings in the low to mid-40s. Although these encyclopedias came from library databases, players might have been hesitant to rate encyclopedias highly because of the course syllabus’s ban on Wikipedia and repeated admonishments about Wikipedia from instructors and librarians. With respect to nonscholarly sources, players rated a handful of consumer newspapers in the low 50s. These results indicate that players were able to sense distinctions between some scholarly and nonscholarly IFs; however, additional research is needed to state with certainty that coders’ taxonomy ratings and players’ credibility ratings co-vary.

The researchers hypothesized that players would cite increasingly more scholarly sources as they progressed in their research from start to end. In fact, the opposite occurred. Players occupied themselves with scholarly sources almost exclusively during game play. When it came to their final papers, the majority (55.4 percent) of their cited sources were nonscholarly (102 of 184 sources). The percentage of scholarly sources in players’ final-paper bibliographies (44.6 percent) exceeded the percentage of scholarly sources in non-players’ final-paper bibliographies (35.2 percent) but it was nowhere near the scholarly source percentages (90.0 percent and 93.3 percent) that characterized players’ BiblioBouts sources.

Final-paper sources were not subject to credibility ratings because students wrote their papers after the game ended. Thus table 5 enumerates no player-assigned credibility ratings for final-paper sources; however, coders applied the taxonomy to assess the quality of final-paper sources, rating players’ and nonplayers’ scholarly sources an average of 17.4 and their nonscholarly sources averages of 12.5 and 12.8, respectively. In students’ final papers, scholarly journals, conference proceedings, and monographs earned the highest taxonomy scores, and blogs, promotional material, and directories the lowest.

Cited Sources in Students’ Final Papers

Players’ final papers averaged 9.0 cited sources, and nonplayers’ final papers averaged 7.6 cited sources. Thus players’ final papers contained more scholarly sources and more sources overall than nonplayers’ final papers, but the difference was not significant.

Players’ final papers cited only 17 (12.0 percent) of the 142 sources players closed in BiblioBouts. Their IFs mirrored BiblioBouts sources generally: 9 were scholarly journals, 2 were encyclopedias, 2 were monographs, and 1 each was promotional material, a consumer newspaper, a research report, and conference proceedings. The average taxonomy score of these BiblioBouts sources in players’ final papers was 17.2, comparable to the average taxonomy scores of scholarly sources in players’ and nonplayers’ final papers (see table 5). Players gave these 17 sources an average credibility rating of 73.1 percent, higher than the average credibility ratings they gave to BiblioBouts scholarly sources (see table 5).

Seeking a reason why so few BiblioBouts sources were cited in players’ final papers, the researchers examined how players proceeded from the broad-based topic that the instructor assigned to students to their final-paper topics. The broad-based topic was “Human Adaptation to Climate Change,” and the instructor expected students to specialize, discussing a particular population’s adaptation to global warming. Table 6 traces the evolution of players’ topics starting with the broad-based topic, continuing with their in-game best bibliographies, and ending with their final written papers. In this table, paper topics are examples, not a complete list.

During the Best Bibliography bout, 9 players (40.9 percent) reiterated the broad-based topic that the instructor set for the game, but their final-paper topics were narrower than the broad-based topic. The top third of table 6 gives examples of their best bibliography and final-paper topics. The remaining 13 players (59.1 percent) specified best bibliography topics that were more specific than the broad-based topic. About half of these players’ final-paper topics were the same as their best bibliography topics (see the middle third of table 6 for examples). The other half wrote papers on topics that were still narrower than their specific best bibliography topics (see the bottom third of table 6 for examples).

All players progressed from the instructor’s broad-based topic to a narrower topic. Some did so before or during the Best Bibliography bout, and others did so after this bout. When players specialized, they examined specific countries (e.g., Ireland, Cuba, China), considered adaptive strategies (e.g., reducing greenhouse gas emissions, cycling, energy-efficient appliances), or investigated the broad-based topic from a different perspective (e.g., political parties, celebrities, diseases). Most likely, such specialization required them to gather additional sources that addressed their specific interests because there were few or no BiblioBouts sources on their final-paper topics.

Publishers (PUBs) of Sources

Providing a different perspective on students’ sources was the Publisher (PUB) facet. This facet described who was responsible for the source’s publication. Table 7 reveals the PUBs of players’ closed sources. The far-left column specifies the number of points that were added to a source’s taxonomy score for particular PUBs. The two right-hand columns enumerate coders’ taxonomy scores and players’ credibility ratings.

Over three-quarters of players’ closed sources came from higher education. Many of these were scholarly journals and trade journals published by commercial publishers, but because their bylines identified authors’ academic institutions, research labs, and think tanks, players classified them as higher education in BiblioBouts and coders followed suit. Sources from the government and commercial sectors accounted for 7.0 percent to 8.5 percent of sources. Few or no sources came from nonprofits or K–12 institutions or were issued by an individual person. Coders’ scores were highest for higher education (19.3), as were players’ credibility ratings (71.7 percent). Players’ credibility scores were comparatively high for all PUBs except nonprofits. Coders’ taxonomy scores were comparatively low for all PUBs except higher education. Omitted from this discussion is a table for publishers of players’ best bibliography sources because it would resemble table 7 so closely.

Table 8 documents PUBs of players’ and nonplayers’ final paper sources. Its non-player statistics are a 40 percent sample of their final-paper sources.

PUBs of players’ final-paper sources were quite different from PUBs of their closed sources. No longer did higher education dominate. Instead, the percentages of higher education, commercial, and government publisher types were comparable at about 25.0 percent each, with nonprofits close behind at 18.5 percent. The percentages of PUBs of players’ and nonplayers’ final-paper sources were nearly identical.

Most commercial types came from news media outlets such as BBC, NBC, CBS, CNN, New York Times, The Guardian, and Washington Post. Both players and non-players contributed a handful of sources from K–12 education and individual people. Examples of the latter are

  • an authorless website named “Global Warming is a Farce”;
  • a personal essay on the origins of the global warming issue dated 1999;
  • facts about the Lake Mead NRA compiled by a named officer employed by the NRA (US) and published on a website by a family-run company;
  • a list of fifty things people can do to stop global warming published by a freelance web promotion professional; and
  • a blog written by a man whose brief bio claims he has written on global warming since 2003.


DISCUSSION

When playing BiblioBouts, players played it safe, knowing that they would score high for the scholarly sources they put into play and chose for their best bibliographies. Thus they limited their BiblioBouts sources almost exclusively to scholarly sources. That players’ credibility ratings for scholarly and nonscholarly sources were roughly equivalent is troubling; however, the credibility ratings for the latter were based on so few sources that it is difficult to generalize based on data from this one analysis. Subsequent analyses of BiblioBouts game-play data will enable the researchers to thoroughly investigate students’ source credibility assessments to determine whether they are comparable to the quality ratings of expert coders.

Writing their final papers, players went farther afield, citing more nonscholarly than scholarly sources and hardly using the sources they and others found while playing BiblioBouts. That players’ final-paper sources were comparable to nonplayers’ final-paper sources in terms of IFs, PUBs, and taxonomy scores (see tables 5 and 8) is evidence of players’ reversion to their habitual patterns of source selection. Especially troublesome was that this reversion occurred despite their experience playing the game, with its focus on source evaluation, and despite the availability of the BiblioBouts Post-Game Library of mostly scholarly sources on the broad-based topic, which players built as a result of game play.

It has been suggested that topic specification played a role, forcing players to seek entirely different sources because their topics evolved during game play and afterward while writing their papers (see table 6). The broad topic in play—how climate change has brought about a particular adaptation in a particular human population—also contributed. For example, table 4’s nonscholarly IFs such as policy statements, public sharing sites, blogs, and promotional material, which communicated the opinions, viewpoints, and policies of governments, NGOs (nongovernmental organizations), nonprofits, and individual people about climate change, were likely to have relevant content for the arguments players made in their final papers.

The researchers will monitor future classes whose instructors choose different broad-based topics to determine whether large proportions of BiblioBouts sources make up the sources of players’ final papers. If and when they do, we will scrutinize the characteristics of these broad-based topics to advise subsequent instructors about topic selection so that their students benefit from BiblioBouts’ Post-Game Library.

Scholarly sources, especially scholarly journals, dominated BiblioBouts sources; however, players chose few (12.0 percent) BiblioBouts sources for their final papers. This suggests that undergraduate students may not understand scholarly sources well enough to integrate them into their papers. When playing BiblioBouts, players were probably able to recognize, retrieve, and contribute scholarly sources on the broad-based topic in play; however, integrating them into their final papers to support their claims might have been too much of a stretch. As a result, players as well as nonplayers turned to nonscholarly sources that discussed global warming in terms that the average layperson could understand. It may take several years of coursework in a discipline for students to develop competence and facility with its scholarly literature. Our speculation in this regard is echoed by Rebecca Jackson, who, as a result of mapping information literacy competencies to cognitive development models, asserts that successfully meeting information literacy standards that include using information to generate new concepts can only be accomplished by graduates of higher education institutions.27

Players’ failure to cite BiblioBouts sources in their papers came to the researchers’ attention after the class ended, so they had no opportunity to query these students about it. The researchers instead sought explanations from students and instructors in classes that played BiblioBouts after the one described in this study. Interviewees agreed that game play occurred too early in the research process for them to have settled on their papers’ topics and cited sources; however, playing BiblioBouts exposed them to information that had an impact on their final papers. Here is what students from other classes said in this regard:

  • “When I did BiblioBouts, I didn’t have my thesis all the way constructed… . After I did BiblioBouts, I needed a lot of sources that were similar to the topic that we used but they weren’t exactly the same thing. Like I had to do a lot of other research after the fact to find information that was similar to [but not the same as] the [broad-based] topic.”
  • “Because I feel like we just kind of hopped into research without really formulating our thesis completely.”
  • “Because it’s a research paper and so you have to look at what sources you find and then make some kind of thesis out of that.”

Here is an instructor who lauds BiblioBouts for its ability to expose students to more information than they would have found on their own. Because of this exposure, students were inspired to think in new and different ways about their topics and about the need to find more information after the game ended to support their ideas.

  • “A lot of [students] found it was very useful to have references that they all pooled together and found and rated and that was good. But … the students … said, ‘Initially I started with an idea of what I wanted to write on this theme. But after all this reading and so on, my ideas have formed in slightly different ways from what I had initially thought of and that was too simple. So can I not use any of those [BiblioBouts] sources because I have formed my idea about what I want and I need to look for other sources? I might use one [BiblioBouts source] but I want to use something else as well.’ I let them do that.”

This paper’s hypothesis that the quality of players’ chosen sources would increase as they progressed from bout to bout, culminating with writing their final papers, was not supported by analyses involving IFs, PUBs, and taxonomy scores; however, the instructor who graded students’ final papers observed better-quality arguments and sources. Here is what he said in this regard:

  • “Their understanding of how sources applied to their argument went up as a result of looking at a number of different sources all talking about the same thing. Even if they looked at the sources only briefly, they were able to see how the sources that they did choose compared to other available sources. That forced some of them to have a greater understanding of how the information that you’re citing impacts the information that you’re putting into the paper as part of the argument. The quality of argument went up and the quality of sources overall went up.”

Focus group interviews revealed other ways in which students benefited from game play. Before the game’s start, a librarian visited the class to demonstrate the university library’s database portal and relevant databases. The instructor also profiled BiblioBouts with a list of relevant databases. Some students acknowledged the game as being their first exposure to these databases.

  • “For me it was the first time I have ever used the university library system and so it was a really interesting experience for me just because like I have searched Google, Google Scholar, and other things … but I have never used databases before.”

Students added that playing BiblioBouts emphasized the importance of evaluating the sources they find.

  • “The biggest thing for me is after playing the game is getting used to finding sources, legitimate sources, rather than just typing something into Google and bringing up whatever websites came. [Playing the game] definitely helped me reevaluate what sources I want to use and at a university level rather than just what I’ve been doing thus far.”
  • “[Playing BiblioBouts] did change my mentality on what a good source is and what isn’t and a lot of it was the repetition of constant tagging and looking at different sources and being able to evaluate them in those terms.”

Another benefit the instructor cited was keeping sources on hand after a course ended to use at a later time.

  • “Creating a database that they might use in the future is something that they gained from this. In the past, sources were disposable. You … used them for a paper or project and then discarded them… . In the future, you keep a running database … especially within your major. Some of them have already told me that they are going to do exactly that… . They would use Zotero to do it… . They were very fond of Zotero.”

Not long after the game ended, students described how they were already using Zotero for assignments in other courses. Game play also increased their use of the university library.

  • “Now I use Zotero for all of my projects and all of my papers when I need to find different articles.”
  • “I actually used the university library and services a lot more, which I like never had and like they were really helpful.”

Because of playing BiblioBouts, students made a habit of collecting more sources than their instructors required, picking and choosing the best ones to make their argument rather than limiting themselves to the first ones retrieved on the topic.

  • “Whenever I did research I used to just find however many citations needed or however many the professor had required for the particular assignment. [Since playing] the game, I now [collect] a huge amount of [sources], then I narrow it down, much in the same way that we did in the game.”

Students also acknowledged that playing a game motivated them because it was an open competition with their fellow students.

  • “The leader board, to know who was on top and who was behind and how far you were from the top five, I liked that because it pushed you and it gave you like the motivation to be, ‘Oh, I’ll do more.’”
  • “I think the game is an effective way [to learn about library research], especially if it’s with others because a lot of people are competitive.”
  • “[Because the game] involves others … with each other trying to reach the same goal, it’s able to motivate you more to do what you need to do and so I think a game is a good way to learn.”

Playing BiblioBouts did not immediately produce the desired behavior, that is, students citing increasingly more scholarly sources in their final papers. It did, however, expose them to more sources than they would have found on their own, to the importance of evaluating the sources they find, and to a new software tool for organizing the sources they find online, including using this tool to keep sources on hand just in case they need them in their future coursework. Their instructor also observed that the benefits of playing BiblioBouts went beyond simply allowing the students to support their arguments more effectively; it enhanced their understanding of the relationship between researched sources and argument planning, as well. They began to see that engaging with higher-quality sources early in the preparatory stages for their papers helped them to recognize avenues of investigation that might not otherwise have occurred to them, resulting in more thoughtful theses and stronger subtopics.


CONCLUSION

Students from a second-year English course at a research university played the BiblioBouts information literacy game. To determine whether game play improved the quality of the sources students cited in their final papers, several methods were used—analyses of game logs, cited sources in players’ and nonplayers’ final papers, a personal interview with the instructor, and focus group interviews with students.

As a result of playing the game, the players in this one particular class did not progress from lower-quality to higher-quality sources; however, scholarly sources dominated the sources they submitted to BiblioBouts, demonstrating that players were able to recognize quality sources for the purpose of playing the game, and they gave their highest credibility ratings to sources from scholarly journals. Although players used few BiblioBouts sources for their final papers, they now know that they can find scholarly sources by searching relevant databases available through the university library’s database portal and can use Zotero to automatically generate citations and save digital full-texts. Of the several factors contributing to players’ failure to cite the game’s scholarly sources in their papers, the most important were the evolutionary nature of their final-paper topics, which became more specialized than the original broad-based topic that was the impetus for the sources they submitted to the game, and the game’s exposure of players to many more sources than they would have found on their own. By the time the game ended, some players were already using Zotero to organize their sources for assignments in other classes. BiblioBouts players were partial to learning how to conduct library research by playing a game because the game situated them in an open competition with fellow students, stimulating them, through the immediate rewards of game play, to go above and beyond what they normally would have done.

Because this paper’s analysis was limited in scope to a single class of second-year students playing the BiblioBouts information literacy game, results may not be statistically valid or generalizable. However, the results are important because they expose the “black box” events of research-and-writing assignments, such as the sources students and their classmates find and prefer, the topics they have in mind for their papers, and the best sources for their topics, and thus provide researchers with opportunities to analyze the decisions that students make in the process of achieving their objectives. This paper provides evidence that despite encountering scholarly sources several times during the research process, students depended on nonscholarly sources for their final written papers. The authors speculate that underclassmen are not intellectually ready to synthesize the scholarly sources they find. Follow-up studies are needed to confirm this speculation. A number of different scenarios are possible, including comparing students’ game-play performances under different conditions: (1) incentives, for example, requiring students to play the game versus playing it for extra credit; (2) class level, for example, underclassmen versus upperclassmen, master’s students, and/or doctoral students; and (3) level of instructor involvement, for example, no instructor involvement versus instructors who play the game and give their students previews of the information literacy skills and concepts needed to play each bout effectively and efficiently.


Acknowledgments

Support for the BiblioBouts Project is provided by the Institute of Museum and Library Services (IMLS) through its National Leadership Grant Program (LG-06-08-0076-08). Special thanks goes to the BiblioBouts team of expert coders: Caitlin Campbell, Adrienne Matteson, and Emily Thompson. Thanks also to Library Liaisons: Catherine Johnson at the University of Baltimore and Alyssa Martin at Troy University and to the BiblioBouts Project team at the University of Michigan: Fritz Swanson, Gregory R. Peters, Jr., Brian Jennings, Michele Wong, Victor Rosenberg, Soo Young Rieh, and Andrew Calvetti.


References and Notes
1. Karl V. Fast and D. Grant Campbell, “‘I Still Like Google’: University Student Perceptions of Searching OPACs and the Web,” in Proceedings of the ASIS Annual Meeting 2004 (Medford, NJ: Information Today, 2004), 138–46; Alison J. Head, “How Do Students Conduct Academic Research?” First Monday 12, no. 8 (2007), accessed February 10, 2012, http://firstmonday.org/issues/issue12_8/head/index.html; Jeffrey Knapp, “Google and Wikipedia: Friends or Foes?” in Teaching Generation M, ed. Vibiana Bowman Cvetkovic and Robert J. Lackie (New York: Neal-Schuman, 2009), 157–78; Alison J. Head and Michael B. Eisenberg, “How Today’s College Students Use Wikipedia for Course-related Research,” First Monday 15, no. 3 (2010), accessed February 10, 2012, http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2830/2476; Caspar Grathwohl, “Wikipedia Comes of Age,” Chronicle of Higher Education (January 7, 2011).
2. Steven Johnson, Everything Bad is Good for You: How Today’s Culture is Actually Making Us Smarter (New York: Riverhead, 2006); James Paul Gee, What Video Games Have to Teach Us about Learning and Literacy (New York: Palgrave Macmillan, 2007); Marc Prensky, Digital Game-based Learning (St. Paul, Minn.: Paragon House, 2007).
3. John D. Shank and Steven Bell, “Blended Librarianship: [Re]Envisioning the Role of Librarian as Educator in the Digital Information Age,” Reference & User Services Quarterly 51, no. 2 (2011): 105.
4. Steven J. Bell and John D. Shank, “The Blended Librarian: A Blueprint for Redefining the Teaching and Learning Role of Academic Librarians,” College & Research Libraries News 65, no. 7 (2004): 373.
5. Shank and Bell, “Blended Librarianship,” 106.
6. Association of College and Research Libraries, “Information Literacy Competency Standards for Higher Education” (2001), accessed October 17, 2012, www.ala.org/acrl/standards/informationliteracycompetency.
7. Rebecca Jackson, “Information Literacy and Its Relationship to Cognitive Development and Reflective Judgment,” New Directions for Teaching and Learning, no. 114 (2008): 48.
8. Johnson, Everything Bad is Good for You, 41; Prensky, Digital Game-Based Learning, 106.
9. Gee, What Video Games Have to Teach Us, 207–12.
10. Nicola Whitton, Learning with Digital Games: A Practical Guide to Engaging Students in Higher Education (New York: Routledge, 2010), 52.
11. Kurt Squire, Video Games and Learning: Teaching and Participatory Culture in the Digital Age (New York: Teachers College Press, 2011), 219; Justine Martin and Robin Ewing, “Power Up! Using Digital Gaming Techniques to Enhance Library Instruction,” Internet Reference Services Quarterly 13, no. 2/3 (2008): 223.
12. Ibid., 213.
13. John Kirriemuir, “Teaching Information Literacy through Digital Games,” in Information Literacy Meets Library 2.0, ed. Peter Godwin (London: Facet, 2008).
14. Andrew Walsh, “Information Literacy Assessment: Where Do We Start?” Journal of Librarianship & Information Science 41, no. 1 (2009): 19–28.
15. Thomas Kirk, “A Comparison of Two Methods of Library Instruction for Students in Introductory Biology,” College & Research Libraries 32, no. 6 (1971): 465–74; Amy Dykeman and Barbara King, “Term Paper Analysis: A Proposal for Evaluating Bibliographic Instruction,” Research Strategies 1, no. 1 (1983): 14–21; Bonnie Gratch, “Toward a Methodology for Evaluating Research Paper Bibliographies,” Research Strategies 3, no. 4 (1985): 170–77; David F. Kohl and Lizbeth A. Wilson, “Effectiveness of Course-integrated Bibliographic Instruction in Improving Coursework,” RQ 27, no. 2 (1986): 206–11; V. E. Young and L. G. Ackerson, “Evaluation of Student Research Paper Bibliographies: Refining Evaluation Criteria,” Research Strategies 13, no. 2 (1995): 80–93.
16. Kirk, “A Comparison of Two Methods, ” 470.
17. Dykeman and King, “Term Paper Analysis, ” 16.
18. Philip M. Davis and Suzanne A. Cohen, “The Effect of the Web on Undergraduate Citation Behavior 1996–1999, ” Journal of the American Society for Information Science and Technology 52, no. 4 (2001) 309–14; Penny Beile, David N. Boote, and Elizabeth Killingsworth, “Characteristics of Educational Doctoral Dissertation References: An Inter-institutional Analysis of Review of Literature Citations, (paper presented at the Annual Meeting of the American Educational Research Association, Chicago, Ill., 2003), http://eprints.rclis.org/handle/10760/15870#.TzWV7yNaNV4 (accessed on February 10, 2012); Lisa Janicke Hinchliffe et al., “What Students Really Cite: Findings from a Content Analysis of First-year Student Bibliographies, ” Library Orientation Series 34 (2003): 36–74; Andrew M. Robinson and Karen Schlegel, “Student Bibliographies Improve When Professors Provide Enforceable Guidelines for Citations, ” portal: Libraries and the Academy 4, no. 2(2004), 275–90; Casey M. Long and Milind M. Shrikhande, “Improving Information-Seeking Behavior Among Business Majors, ” Research Strategies 20, no. 4 (2005): 357–69; Anne Middleton, “An Attempt to Quantify the Quality of Student Bibliographies, Performance Measurement and Metrics 6, no. 1 (2005): 7–18; Beth Mohler, “Citation Analysis as an Assessment Tool, Science & Technology Libraries 25, no. 4 (2005): 57–64; Johanna Tuñón and Bruce Brydges, “A Study on Using Rubrics and Citation Analysis to Measure the Quality of Doctoral Dissertation Reference Lists from Traditional and Nontraditional Institutions, ” Journal of Library Administration 45, no. 3/4 (2006): 459-481; N. N. Edzan, “Analyzing the References of Final Year Project Reports, ” Journal of Educational Media & Library Sciences, 46, no. 2 (2008): 211–31.
19. Long and Shrikhande, “Improving Information-Seeking Behavior,” 361.
20. Tuñón and Brydges, “A Study on Using Rubrics,” 467.
21. Middleton, “An Attempt to Quantify,” 13.
22. Ibid.
23. Kevin Crowston and Barbara H. Kwasnik, “A Framework for Creating a Faceted Classification for Genres: Addressing Issues of Multidimensionality” (paper presented at the 37th Hawaii International Conference on System Sciences, Kona, Hawaii, 2004), doi:10.1109/HICSS.2004.1265268.
24. Chris Leeder, Karen Markey, and Elizabeth Yakel, “A Faceted Taxonomy for Rating Student Bibliographies in an Online Information Literacy Game,” College & Research Libraries 73, no. 2 (March 2012): 115–33.
25. Ibid., 121.
26. Ibid., 122.
27. Jackson, “Information Literacy,” 60.

Tables
Table 1

The Bouts of BiblioBouts


| Bout | Suggested Duration | Description | Pedagogical Goals |
|---|---|---|---|
| Donor | 7 days | Students search the web and library databases for sources and save them using the Zotero citation management tool, which passes them to BiblioBouts | Students become experienced users of professional research and discovery tools for finding and managing relevant information on an academic topic |
| Closer | 3 days | Students choose their best sources to “do battle” in the game | Students become proficient at distinguishing between digital citations and full texts and at assessing the relevance of sources on an academic topic |
| Tagging & Rating (T&R) | 7 days | Students evaluate the content and quality of their classmates’ sources | Students develop proficiency in evaluating sources based on indicators of their quality and their relevance to the broad-based topic |
| Best Bibliography | 3 days | Students specify a research topic and select the best sources that they will use to write their paper | Students practice formulating their research paper’s topic and choosing the best sources for writing the paper |

Table 2

Taxonomy Scores for a Sample Scholarly Source and a Sample Nonscholarly Source


| Source Description | Attribute | Element | Taxonomy Score |
|---|---|---|---|
| A state-of-the-art literature review published in a scholarly journal | Information Format | Scholarly Journal | 4.0 |
| | Literary Content | Article/Synthesis | 3.8 |
| | Author Identity | Academic Professional | 4.0 |
| | Editorial Process | Peer-Reviewed | 4.0 |
| | Publisher | Higher Education | 4.0 |
| | Total for the scholarly source | | 19.8 |
| A college professor’s blog about the academic subject matter he teaches | Information Format | Blog | 1.3 |
| | Literary Content | Editorial | 2.3 |
| | Author Identity | Academic Professional | 4.0 |
| | Editorial Process | Self-Published | 1.3 |
| | Publisher | Higher Education | 4.0 |
| | Total for the nonscholarly source | | 12.9 |
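
The totals in Table 2 are straightforward sums of the five attribute scores. The short Python sketch below makes that arithmetic explicit; it is purely illustrative (the names ELEMENT_POINTS and taxonomy_score are ours, not part of BiblioBouts), and it encodes only the element point values visible in Table 2, whereas the full faceted taxonomy (see note 24) defines many more elements per attribute.

```python
# Illustrative sketch of Table 2's scoring: a source's taxonomy score is
# the sum of the point values of its five attribute elements.
# Only the elements shown in Table 2 are encoded here (assumption: the
# full taxonomy in note 24 has many more).

ELEMENT_POINTS = {
    "Information Format": {"Scholarly Journal": 4.0, "Blog": 1.3},
    "Literary Content": {"Article/Synthesis": 3.8, "Editorial": 2.3},
    "Author Identity": {"Academic Professional": 4.0},
    "Editorial Process": {"Peer-Reviewed": 4.0, "Self-Published": 1.3},
    "Publisher": {"Higher Education": 4.0},
}

def taxonomy_score(elements: dict[str, str]) -> float:
    """Sum the point value of the chosen element for each attribute."""
    return round(sum(ELEMENT_POINTS[attr][elem] for attr, elem in elements.items()), 1)

# The literature review from Table 2 scores 19.8 ...
review = {
    "Information Format": "Scholarly Journal",
    "Literary Content": "Article/Synthesis",
    "Author Identity": "Academic Professional",
    "Editorial Process": "Peer-Reviewed",
    "Publisher": "Higher Education",
}
# ... and the professor's blog scores 12.9, despite sharing the
# Academic Professional author and Higher Education publisher elements.
blog = {
    "Information Format": "Blog",
    "Literary Content": "Editorial",
    "Author Identity": "Academic Professional",
    "Editorial Process": "Self-Published",
    "Publisher": "Higher Education",
}

assert taxonomy_score(review) == 19.8
assert taxonomy_score(blog) == 12.9
```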

Table 3

IFs of Players’ BiblioBouts Sources


| Information Format (IF) | Closer Bout Sources: No. | Closer Bout Sources: % | Best Bibliography Bout Sources: No. | Best Bibliography Bout Sources: % |
|---|---|---|---|---|
| Scholarly Formats | | | | |
| Conference proceedings | 2 | 1.4 | 0 | 0.0 |
| Encyclopedia | 10 | 7.0 | 8 | 7.6 |
| Monograph | 3 | 2.1 | 3 | 2.9 |
| Research report | 7 | 4.9 | 4 | 3.8 |
| Scholarly journal | 100 | 70.4 | 77 | 73.3 |
| Trade journal | 6 | 4.2 | 6 | 5.7 |
| Scholarly subtotal | 128 | 90.0 | 98 | 93.3 |
| Nonscholarly Formats | | | | |
| Consumer magazine | 2 | 1.4 | 2 | 1.9 |
| Consumer newspaper | 5 | 3.5 | 4 | 3.8 |
| Digital repository | 1 | 0.7 | 0 | 0.0 |
| Directory | 3 | 2.1 | 1 | 1.0 |
| Promotional material | 2 | 1.4 | 0 | 0.0 |
| Trade magazine | 1 | 0.7 | 0 | 0.0 |
| Nonscholarly subtotal | 14 | 10.0 | 7 | 6.7 |
| Total | 142 | 100.0 | 105 | 100.0 |

Table 4

IFs of Players’ and Nonplayers’ Final-Paper Sources


| Information Format (IF) | Players’ Final-Paper Sources: No. | Players’ Final-Paper Sources: % | Nonplayers’ Final-Paper Sources: No. | Nonplayers’ Final-Paper Sources: % |
|---|---|---|---|---|
| Scholarly Formats | | | | |
| Conference proceedings | 1 | 0.6 | 2 | 2.9 |
| Encyclopedia | 14 | 7.6 | 4 | 5.9 |
| Monograph | 3 | 1.6 | 1 | 1.5 |
| Research report | 20 | 10.9 | 2 | 2.9 |
| Scholarly journal | 42 | 22.8 | 13 | 19.1 |
| Trade journal | 2 | 1.1 | 2 | 2.9 |
| Scholarly subtotal | 82 | 44.6 | 24 | 35.2 |
| Nonscholarly Formats | | | | |
| Blog | 8 | 4.3 | 5 | 7.4 |
| Consumer magazine | 15 | 8.2 | 3 | 4.4 |
| Consumer newspaper | 21 | 11.4 | 13 | 19.1 |
| Course material | 1 | 0.6 | 1 | 1.5 |
| Database | 2 | 1.1 | 0 | 0.0 |
| Directory | 5 | 2.7 | 1 | 1.5 |
| Policy statement | 15 | 8.1 | 0 | 0.0 |
| Promotional material | 19 | 10.3 | 9 | 13.2 |
| Public affairs information | 0 | 0.0 | 12 | 17.7 |
| Public sharing site | 10 | 5.4 | 0 | 0.0 |
| Trade magazine | 4 | 2.2 | 0 | 0.0 |
| Trade newspaper | 2 | 1.1 | 0 | 0.0 |
| Nonscholarly subtotal | 102 | 55.4 | 44 | 64.8 |
| Total | 184 | 100.0 | 68 | 100.0 |

Table 5

Players’ Credibility Ratings and Coders’ Taxonomy Scores


| Information Format (IF) | Closer Bout Sources: Taxon. | Closer Bout Sources: Cred. % | Best Bibliography Bout Sources: Taxon. | Best Bibliography Bout Sources: Cred. % | Players’ Final-Paper Sources: Taxon. | Nonplayers’ Final-Paper Sources: Taxon. |
|---|---|---|---|---|---|---|
| Scholarly Formats | n = 128 | | n = 98 | | n = 82 | n = 24 |
| Conference proceedings | 18.7 | 60.6 | NA | NA | 18.8 | 18.0 |
| Encyclopedia | 15.8 | 44.8 | 15.9 | 40.7 | 13.4 | 13.3 |
| Monograph | 17.3 | 75.4 | 17.3 | 75.4 | 18.4 | NA |
| Research report | 16.9 | 57.0 | 16.6 | 65.8 | 15.7 | 16.3 |
| Scholarly journal | 19.6 | 68.8 | 19.5 | 70.6 | 19.5 | 18.9 |
| Trade journal | 17.7 | 64.9 | 17.7 | 64.9 | 17.8 | 16.4 |
| Scholarly average | 19.0 | 66.1 | 18.9 | 67.7 | 17.4 | 17.4 |
| Nonscholarly Formats | n = 14 | | n = 7 | | n = 102 | n = 44 |
| Blog | NA | NA | NA | NA | 8.4 | 8.0 |
| Consumer magazine | 14.0 | 82.1 | 14.0 | 78.4 | 12.7 | 13.5 |
| Consumer newspaper | 13.5 | 50.1 | 13.5 | 50.1 | 13.8 | 13.5 |
| Course material | NA | NA | NA | NA | 17.1 | 12.6 |
| Database | NA | NA | NA | NA | 15.6 | NA |
| Digital repository | 18.1 | 64.5 | NA | NA | NA | NA |
| Directory | 13.3 | 64.4 | 14.6 | 69.8 | 11.8 | 13.3 |
| Policy statement | NA | NA | NA | NA | 12.4 | NA |
| Promotional material | 11.7 | 67.5 | NA | NA | 11.9 | 12.1 |
| Public affairs information | NA | NA | NA | NA | NA | 14.4 |
| Public sharing site | NA | NA | NA | NA | 12.2 | NA |
| Trade magazine | 14.6 | 73.1 | NA | NA | 15.0 | NA |
| Trade newspaper | NA | NA | NA | NA | 13.2 | NA |
| Nonscholarly average | 13.7 | 63.9 | 13.6 | 65.2 | 12.5 | 12.8 |
| Total | 18.4 | 65.9 | 18.6 | 65.9 | 14.7 | 14.4 |
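
The format-level averages in Table 5 behave like count-weighted means. As a worked check, assuming the Closer bout scholarly average weights each format’s mean taxonomy score by its source count from Table 3, the reported value of 19.0 is reproduced:

\[
\frac{2(18.7) + 10(15.8) + 3(17.3) + 7(16.9) + 100(19.6) + 6(17.7)}{128} = \frac{2431.8}{128} \approx 19.0
\]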

Table 6

Players’ Best Bibliography and Final Paper Topics


| Best Bibliography Topic | Final Paper Topic |
|---|---|
| Best Bibliography topic equivalent to the broad-based topic, which became a narrower final paper topic (n = 9) | |
| [Same as the broad-based topic] | Hollywood celebrities and global warming |
| [Same as the broad-based topic] | Cuba: Adapting to climate change |
| [Same as the broad-based topic] | Ireland as a cycling nation |
| [Same as the broad-based topic] | Climate change and USA political parties |
| Same specific Best Bibliography topic and final paper topic (n = 7) | |
| Rising sea levels and Tuvalu | [Same as the Best Bibliography topic] |
| Malaria in Africa | [Same as the Best Bibliography topic] |
| Effect of climate change on the Inuit way of life | [Same as the Best Bibliography topic] |
| Effects of global warming on the St’at’mic people of British Columbia, Canada | [Same as the Best Bibliography topic] |
| A specific Best Bibliography topic and narrower final paper topic (n = 6) | |
| Effect of climate change on bodies of water | Effect of climate change on Lake Mead, NV |
| Climate change effect on health | Getting to work green and healthy in the USA |
| Households go green | Los Angeles: How households can stop global warming |
| Environmental policy in urbanizing, industrializing countries | Mitigation of greenhouse gas emissions in China’s industrial sector |

Table 7

Publishers of Players’ Closer Sources


| Publisher Quality Points | Publisher Type | No. | % | Taxon. Av. (Coders) | Cred. Av. (Players) |
|---|---|---|---|---|---|
| 4.0 | Higher education | 113 | 79.6 | 19.3 | 71.7 |
| 3.5 | Government | 12 | 8.5 | 15.0 | 66.0 |
| 2.0 | Commercial | 10 | 7.0 | 14.4 | 68.6 |
| 3.3 | K–12 education | 4 | 2.8 | 15.6 | 70.6 |
| 3.3 | Non-profit | 3 | 2.1 | 15.4 | 58.6 |
| 1.3 | Individual person | 0 | 0.0 | NA | NA |
| NA | Total | 142 | 100.0 | 18.7 | 70.7 |

Table 8

Publishers of Players’ and Nonplayers’ Final Paper Sources


| Publisher Type | Players: No. | Players: % | Players: Taxon. (Coders) | Nonplayers: No. | Nonplayers: % | Nonplayers: Taxon. (Coders) |
|---|---|---|---|---|---|---|
| Higher education | 48 | 26.1 | 19.1 | 17 | 25.0 | 18.1 |
| Commercial | 47 | 25.5 | 13.0 | 15 | 22.1 | 12.8 |
| Government | 45 | 24.5 | 14.1 | 14 | 20.6 | 14.4 |
| Non-profit | 34 | 18.5 | 12.8 | 13 | 19.1 | 13.8 |
| K–12 education | 5 | 2.7 | 13.6 | 5 | 7.4 | 13.6 |
| Individual person | 5 | 2.7 | 6.8 | 4 | 5.8 | 7.1 |
| Total | 184 | 100.0 | 14.4 | 68 | 100.0 | 14.7 |

