
“I’ve Already Googled It, and I Can’t Understand It”: Users’ Perceptions of Virtual Reference and Social Question-Answering Sites

Vanessa L. Kitzie, Assistant Professor, School of Information Science, University of South Carolina; email: kitzie@mailbox.sc.edu. Lynn Silipigni Connaway, Director of Trends and Library User Research, OCLC; email: connawal@oclc.org. Marie L. Radford, Professor and Chair, School of Communication and Information, Rutgers University; email: mradford@rutgers.edu.

For librarians to continually demonstrate superior and high-quality service, they must meet the needs of current and potential users. One way that librarians have met the needs of users is by expanding their service offerings online via virtual reference services (VRS). This expansion is particularly critical in the current time of COVID-19. To provide high-quality VRS, librarians can learn from social question-answering (SQA) sites, whose popularity reflects changing user expectations, motivations, use, and assessment of information. Informed by interviews with 51 users and potential users of both platforms, this research examines how strengths from SQA can be leveraged in VRS, and what can be learned from SQA practices to reach potential library users. This study represents one of the few comparisons between VRS and SQA that exist in the literature.

Informed by user demand, librarians have expanded their service offerings online. One significant and now longstanding offering is virtual reference services (VRS). These services are increasingly relevant in the current time of COVID-19, as librarians have scrambled to meet their users’ needs virtually. Librarians can implement high-quality VRS by learning from VRS research, which has developed methods and empirical evidence to assess and improve service quality. However, current research is limited, focusing on pre-existing users rather than potential ones, including the many individuals relying on social question-answering (SQA) sites to ask and answer questions online. SQA sites have similar objectives to VRS, but their differences reflect changing user expectations, motivations, use, and assessment of information.

For librarians to continually demonstrate superior and high-quality service, they must meet the needs of both current and potential users.1 Improving VRS services is no different. Informed by interviews with 51 users and potential users, this project examined how strengths from SQA can be leveraged in VRS, and what can be learned from SQA practices to reach potential library users. This study represents one of the few comparisons between VRS and SQA that exist in the literature. Findings provide context and offer practical suggestions for translating reference services to virtual environments in ways that continually meet user expectations and needs.

Literature Review

VRS, including live chat and email, have become practical alternatives to face-to-face (FtF) and telephone communication with librarians.2 The majority of public and academic libraries now offer VRS,3 addressing a growing demand among users for 24/7 access to library resources and services,4 and the changing ways individuals seek, share, and use information online. These offerings have expanded further as librarians rapidly moved their services online during the COVID-19 global pandemic. Preliminary results of research that includes a national survey and in-depth interviews with managers/directors of live chat services in academic libraries reveal an increase in virtual services, driven by the COVID-19-related closure of nearly all physical library buildings for an extended period beginning in March 2020.5 Anecdotal evidence, including VRS how-to’s and case studies, has also emerged in practitioner literature, and services such as Springshare have experienced a 267% increase in total chats when comparing February to August 2020 across the US, Canada, Europe, and Australia.6

People’s use of social media as an information source has also grown. In 2017, 67 percent of Americans reported getting news on social media sites.7 People also use social media to gather health information and information about crises and social movements.8 One type of social media service paralleling VRS is SQA, exemplified by sites such as Quora, Yahoo! Answers, and WikiAnswers. While both SQA and VRS aim to provide “high-quality information that satisfies the information seekers’ needs,”9 these services differ in how they deliver information and whom they enlist for delivery.

SQA services are collaborative. They rely on information from multiple individuals in a community, rather than a single expert, and thus provide a one-to-many service model of information delivery.10 These services are inexpensive, asynchronous, and useful for building social capital within an online community. Unlike SQA, VRS platforms such as Springshare’s involve a one-to-one interaction between a user and a librarian, who has expertise in searching and may also have expertise in the user’s subject area. These interactions occur asynchronously or synchronously.11 While these services are free, unlike some SQA sites that require payment, individuals may not be able to access them if they do not belong to the library in question or are unaware of their existence.

Because of these differences, studies of VRS and SQA are often conducted separately. The following literature review is divided into two sections, one on each service, and concludes with a section reviewing studies that have directly compared them.

VRS Research

Within libraries, VRS has become a “user-preferred medium for knowledge exchange.”12 Since VRS provide a new environment from which to engage in a reference encounter, studies have focused on how the mediated context affects the quality of answers received when using these services.13 Prior research has found timeliness to influence VRS quality. Reference interviews are rarely conducted in chat and email-based transactions due to the librarian’s desire to provide a satisfactory answer quickly.14 The absence of these reference interviews, which are valuable for gaining insight into a user’s information needs, likely contributed to shifting the volume of user queries from subject searches to procedural questions.15 Question type also varies by VRS platform. Rourke and Lupien found that users of library-based reference services were more likely to use IM-based services for less formal, ready-reference content and live chat services, with features like co-browsing, for formal, in-depth searches.16 Mawhinney and Kochkina had similar findings, noting that question complexity was lower for text-based VRS as compared to chat and email.17 McKewan and Richmond also found an increase in question complexity when longitudinally comparing transcripts from a VRS live chat service.18 Therefore, librarians must be aware of users’ motivations and expectations when they choose a particular service, since these will impact what types of questions elicit high-quality answers.

Effectiveness and efficiency, or being given a relevant answer in a reasonable amount of time, are two additional measures impacting VRS quality and user loyalty.19 Similar to timeliness, these measures are context-dependent and based on user motivations and expectations. For instance, while in many cases users prefer to ask simple questions that can be answered quickly,20 others use the service for complex questions involving research assistance and instruction, as well as technology-based help such as website navigation.21 Other studies have focused on the importance of experts in providing quality and satisfactory answers to end users, suggesting that relational features during the reference transaction influence effectiveness.22 The presence of relational features, such as empathetic expressions and high levels of engagement, also has been shown to increase the accessibility of VRS.23 Additional work has examined VRS within specific contexts, such as information literacy instruction,24 with findings offering implications for improving practice. Finally, Radford et al. and others have demonstrated that collaboration is vital for the provision of quality service and has the potential to be fostered between VRS librarians and those providing SQA services.25

SQA Research

Unlike VRS, which has a one-to-one model, the one-to-many model of SQA has resulted in studies of norms governing assessment, identity formation, and motivation unique to its multidimensional and collaborative platform.26 Several elements of the SQA service model have been addressed by conceptual frameworks, such as value assessment,27 network interaction between community members,28 community evolution over time,29 and intermediation.30

Past SQA research can be categorized as either user-based or content-based.31 User-based studies focus on classifying the types of people that use the service and how users vary concerning motivation and satisfaction when using the service.32 Classification of users within SQA services varies based on service type. For example, Shah, Oh, and Oh found that consumers, or those who ask questions, greatly outnumbered contributors, or those who answer questions, within the now-defunct Google Answers, while there was more of a balance between these user types in the now-defunct Yahoo! Answers.33 Various roles within SQA sites can also affect the kind of content exchanged, and whether it is exchanged at all. In his examination of Answerbag, Gazan typified users as seekers, who interact with the community when posting questions, as opposed to sloths, who post a question, often homework-related, verbatim and have no further interactions. He found that community members were more likely to assist seekers in fulfilling their information needs, and attempted to educate sloths regarding the ethics and values of the community, which shares values parallel to those of reference providers.34 Most recently, Roy et al. distinguished between reputation collectors, who contribute low-quality content to gain reputation points, and caretakers, who are motivated to provide high-quality content.35

The social values of SQA sites also influence reported motivations for use. Studies indicate that SQA answerers are generally motivated to provide answers to collect social capital, enforce site norms, monitor answer quality, and attain personal satisfaction via altruistic behavior,36 while askers are motivated to fulfill cognitive, social, and emotional needs.37 These latter motivations vary by platform. For example, Choi and Shah found that while users of Yahoo! Answers and WikiAnswers both reported fulfilling cognitive needs as their primary motivation for service use, in WikiAnswers these needs were based around fact-finding questions, while in Yahoo! Answers they reflected questions soliciting advice or opinions.38 These findings parallel prior VRS research suggesting that content exchanged varies based on the service model.

The other primary type of SQA study, content-based, examines factors that contribute to quality and satisfactory answers through predominantly quantitative approaches.39 Models have been developed to predict asker satisfaction using proxies and experts as evaluators.40 However, these models have not established a comprehensive set of criteria, leaving some variability in answer quality unexplained.41 Thus, qualitative methods of evaluation have emerged. For example, through content analysis, Kim and Oh identified characteristics such as answerer politeness as the most critical factors influencing asker satisfaction.42 Similar to VRS research findings, recent SQA research also suggests that users rely on the characteristics of answerers,43 especially when seeking health information.44 Emerging content-based approaches have investigated why specific questions are more likely to get answered than others, with implications for expert question routing and design, such as the identification of similar or complementary questions.45

Comparing VRS to SQA

Few direct comparisons exist in the literature between VRS and SQA. Existing comparisons find that SQA and VRS users, experts, and designers view them as complementary services, rather than in competition. This complementarity is due to the varying motivations and expectations for each service. Users prefer SQA for relational questions and prioritize the end product over the process. For these reasons, SQA users highlight its high content volume and speed, as well as its social and network-based aspects as critical strengths of the service. Users prefer VRS for fact-based questions and, conversely to SQA users, prioritize the process over the product. By understanding the process by which the answer was derived, these users can feel confident in the quality, relevance, accuracy, authoritativeness, and completeness of the answer.46 Findings suggest the existence of service and design synergies for SQA and VRS services.47

Research Questions

Based on findings from the literature review, this research addresses several gaps. First, most studies engage in content analysis of preexisting VRS and SQA content as opposed to interviewing users directly. This methodological choice represents a missed opportunity to uncover user perceptions of these services, which can directly inform improvements to service models.

Further, the majority of VRS and SQA studies focus on service quality. While service quality is essential, other key relational and motivational factors also have been found to influence service use and evaluation, and, therefore, should be considered in research of online question-answering (Q&A) services, inclusive of both VRS and SQA. Finally, few studies directly compare VRS and SQA, despite their potential design synergies.

Informed by these research gaps, this study addresses the following research questions:

  • RQ1. What are user motivations for using online Q&A services? How, if at all, do these motivations vary by service type?
  • RQ2. Where do users go when they have questions that require subject expertise? Why do they go there?
  • RQ3. How do users evaluate online Q&A services? How, if at all, does this evaluation vary by service type?
  • RQ4. How can the strengths of VRS and SQA be leveraged against each other’s weaknesses?

Methods

Recruitment

Data collection and analysis for this project occurred between 2012 and 2014. To address the research questions, the research team interviewed VRS and SQA users. The team divided recruitment into two rounds by service, VRS or SQA. To participate in the study, individuals had to have used either VRS or SQA services at least once in the previous six months.

Participants were recruited using snowball sampling techniques, which consisted of study investigators emailing recruitment scripts for VRS and/or SQA users to personal contacts, who were asked to consider participating in the study or forwarding the script(s) to others who might participate and/or to their university listservs. Also, for Round 1, the team posted a pop-up message to Maryland AskUsNow! VRS, and for Round 2, the team asked contacts at several universities and public libraries to post flyers in their spaces promoting the study. These recruitment efforts yielded a final participant list of 54 users of VRS and SQA services. Since three participants indicated prior experience working in a library, their responses were not analyzed, resulting in a total of 51 participants. Each participant received a $30 honorarium.

Data Collection

Data were collected for this project via telephone interviews. Interviews provide rich insight into human behavior, which was the primary goal of this study.48 While telephone interviews were used to address geographical and time barriers between researchers and participants, this modality can pose a limitation due to its lack of FtF context.

Interview questions developed for this study were based on the analysis of VRS and SQA transcripts from OCLC’s QuestionPoint VRS and the SQA site Yahoo! Answers. Specifically, areas identified in the transcripts that needed to be more fully understood and probed were included as interview questions. These areas included how users of online Q&A services access the services, their motivations for use, and whether their experiences were successful or unsuccessful. The critical incident technique (CIT) was used when asking questions to determine the success of the interaction. CIT questions were developed using Flanagan’s original technique, as well as Dervin’s notion of critical incidents as moving through space-time, which asks participants what changes they would enact in a specific context given a magic wand.49

After an initial set of questions was developed, it was pre-tested on three individuals. Based on their comments regarding the clarity, relevance, and scope of the questions for an online Q&A experience, the interview questions were revised accordingly.

The finalized interview schedule consisted of closed-ended questions regarding categorical demographic information and the use of online SQA services, as well as open-ended questions regarding the use of VRS and/or SQA. Interviews were conducted via telephone and lasted from 7 minutes and 20 seconds to 1 hour and 38 minutes. The mean interview time was 28 minutes and the median was 23 minutes.

Data Analysis

Transcripts of the open-ended interview questions were coded by two coders using the grounded theory method to establish general thematic concepts.50 As a preliminary step to coding, coders divided the transcripts by interview question and met in pairs to annotate five interview transcripts for each question that they were assigned. Annotations consisted of brief notes that summarized the main concepts expressed by the participants in the transcripts. Once coders agreed on how to code the emergent concepts, they divided transcripts by question and developed coding schemes for their assigned questions.

Coding was divided into two phases: initial and focused. In the initial phase, the assigned coder studied a transcript line by line and categorized the data with a name that described what was occurring within that line or lines. As coding progressed, the coder engaged in constant comparisons between the data to define analytic distinctions between codes. Following this initial phase and informed by constant comparative methods, the coder revisited these initial codes and engaged in focused coding, in which salient codes that frequently occurred within the data were selected and organized into higher-level concepts.51 The final themes established during focused coding were then used to develop a formal codebook. This codebook consisted of each theme, a brief description of the theme, and quotes from the transcripts that exemplified the theme.

Coders established inter-coder reliability (ICR) in pairs. An initial round consisted of coding ten transcripts for each question and revisiting codes that did not have an acceptable level of agreement (> 0.85). Based on a discussion between the two coders regarding inconsistencies, the resulting codebooks were revised, and an additional five transcripts were re-coded and inter-coder reliability re-calculated, for an overall kappa level of 0.95. Coders then worked separately to code the rest of the transcript data within their assigned questions. The results of this coding, including the coding schemes, are discussed below.
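
As a rough, illustrative sketch of the agreement statistic reported above, the Python snippet below computes Cohen’s kappa for two coders’ category labels. The code labels and example data are hypothetical stand-ins, not the study’s actual codebook or transcripts; the 0.85 threshold mentioned in the comment mirrors the agreement level described above.

```python
# Minimal sketch of Cohen's kappa for two coders' categorical labels.
# The labels and data are hypothetical, not the study's codebook.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Return Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(coder_a) == len(coder_b) and coder_a, "need two equal, nonempty lists"
    n = len(coder_a)
    # Observed agreement: proportion of segments both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels assigned by two coders to six transcript segments.
coder_a = ["quality", "satisfaction", "quality", "variety", "quality", "satisfaction"]
coder_b = ["quality", "satisfaction", "quality", "quality", "quality", "satisfaction"]

kappa = cohens_kappa(coder_a, coder_b)
print(f"kappa = {kappa:.2f}")  # values below the agreed threshold (0.85) would trigger code revision
```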

Findings

Demographics

The majority (58% total) of respondents identified as students (students, 25%, n = 13; undergraduate students, 25%, n = 13; graduate students, 8%, n = 4). Other respondents identified as holding various occupations, including managerial roles, sales roles, an attorney, an adjunct professor, and homemakers. The majority of respondents ranged in age from 19 to 25 (57%, n = 31), followed by those aged 26-34 (22%, n = 12), 35-44 (10%, n = 5), and 12-18 (8%, n = 3).

Respondents reported searching the web frequently, with the largest group reporting more than 10 web searches per day (39%, n = 21), followed by those reporting 4-6 searches per day (26%, n = 14). No one reported searching the internet only occasionally (1-3 searches per day). Along with searching frequently, respondents also felt that they were very experienced searchers (43%, n = 23), followed by those who reported being experienced searchers (37%, n = 20). Respondents also indicated satisfaction with using web searches to find what they were looking for very often (59%, n = 32) or often (37%, n = 20).

The majority of respondents reported using SQA services (94%, n = 47), while a smaller proportion reported using VRS services (39%, n = 20). Of the individuals who used SQA services, 43% (n = 20) visited SQA sites 1-3 times per week, followed by those visiting more than 3 times per week (30%, n = 14) and those visiting occasionally (28%, n = 13). Participants reported that they posted questions (43%, n = 20) more than they answered them (37%, n = 17), although the majority of respondents did not report either asking (58%, n = 27) or answering (64%, n = 30) questions. Most individuals who used VRS services visited VRS sites only occasionally (74%, n = 14), with a much smaller proportion reporting more frequent use of either 1-3 times per week (22%, n = 4) or more than 3 times per week (6%, n = 1).

Key Themes

Based on the analysis of responses, findings were divided into four major themes: motivations for use, sources consulted, evaluation of service, and magic wand.

Motivations for Use

Motivation is defined as an individual’s internal need that guides subsequent behavior.52 In the context of this research, motivation represents how users connect their information needs to the use of a specific service, whether SQA or VRS. Users can be motivated either to use or not to use a service based on a series of intervening factors.

For VRS, the main factors affecting users’ motivations to use or not use the service were quality (n = 39), satisfaction (n = 26), and variety of services (n = 25). For quality, users indicated that VRS services gave them information that was of good (n = 8) to high (n = 5) quality. As VS41 stated: “I normally use [VRS] for research projects. I’m very satisfied with the service, it’s high quality. I like the instant messaging feature they have.” However, as indicated in this response, users’ motivations to use VRS for high-quality information depend on the type of question (n = 4), with users seeking out VRS to answer more complicated questions that rely on subject expertise (n = 4). Yet users reported that sometimes the VRS librarian may lack subject expertise or contextual knowledge, which negatively affects the quality of the reference service: “You have to put a question in, and they are supposed to give you a librarian close to you, but sometimes they don’t understand what you are asking, and I think that’s poor quality” (VS55). As indicated by user VS55, the VRS platform may deliver a librarian not geographically co-located with the user, which may limit the librarian’s ability to answer collection-specific questions.

After quality, the second factor most frequently mentioned by VRS users as affecting their motivations for use was satisfaction (n = 26). Satisfaction comprises two elements: material satisfaction with an information system’s performance and emotional satisfaction, which hinges on a user’s expectations, goals, and specific tasks to perform (Bruce, 1998). The majority of users ranged from being satisfied (n = 5) to very satisfied (n = 7) with VRS services. Factors contributing to satisfaction were both material and emotional. One factor contributing to these high levels was a system-level (material) feature: instant messaging. Of the few users who indicated dissatisfaction with VRS (n = 2), one reason, reported by user VS52, related to emotional satisfaction: “I wasn’t satisfied with the hours the librarians are available. I wish it were 24 hours.” In this example, VS52 had expectations of 24-hour availability for VRS that were not met by the service.

On almost equal footing with satisfaction, variety of services (n = 25) was the third key factor that impacted user-reported motivations for VRS use. This factor denotes the flexibility of VRS in providing services relevant to a variety of information-seeking contexts. As user VS42 reports: “I use [VRS] when I was taking sociology, psychology, accounting, online classes; they were very useful especially when it came to finding resources for term papers or just writing a paper in general.” For this user, VRS was successful in addressing a variety of academic subjects. Other VRS contexts reported by users included help finding resources (n = 11), help with user accounts (n = 1), and answering nonacademic questions, including personal (n = 1) and marketing/advertising (n = 1) ones.

Other factors mentioned by users as motivations for VRS use were ease of use (n = 14), convenience (n = 6), use as a secondary option after a failed web search (n = 6), accessibility (n = 4), and facilitation of “one on one” communication (n = 3). In some instances, users reported a lack of awareness of VRS (n = 4). One user noted that VRS was a good concept in theory, but not in practice: “I’ve already Googled it and can’t understand it, so I need someone to explain it to me in a different way. It’s good, but it didn’t help very much. I think it’s a good concept; however, they show you the page you need, but they don’t really explain it that much” (VS45).

For SQA, the main factors influencing user motivations were quality (n = 25), satisfaction (n = 14), and information relevant to a specific subject (n = 10). The first two factors, quality and satisfaction, parallel those mentioned by VRS users. Where they differ is that SQA users more often reported receiving answers of variable quality (n = 10) than of good quality (n = 5). Further, users found information from SQA to be lacking in reliability (n = 5) and credibility (n = 5). One reason why individuals still may be motivated to use SQA despite this variable-to-low quality of answers is that when they do receive high-quality answers, they come from a subject expert: “someone responding has expertise that is relevant to what you are asking” (VS42). Subject expertise is also reported as a motivation influencing satisfaction with the service (n = 3). Other factors impacting satisfaction with SQA are ease of access and use (n = 6), timeliness (n = 2), a variety of opinions and experiences (n = 2), and detailed information (n = 1).

Another factor impacting users’ motivations for using SQA was that this service provided information relevant to a specific subject (n = 10). Users appeared to value SQA for its ability to connect them with information about specific, sometimes esoteric topic areas: “I build a lot of model airplanes and Yahoo Answers! are good for specific questions like what the tarmac was in WWII that are hard to find by just searching” (VS3). Another user indicated using SQA to address “a specific question that other information sources may not specifically address” (VS31).

A final key difference between reported motivations for SQA use as compared to VRS is that the former has more affective elements, including facilitating human interaction (n = 7), the elicitation of personal feedback (n = 5), altruism (n = 3), and in one case, having fun (n = 1). These affective elements were mirrored in users who reported their motivations for using SQA to answer questions, citing altruism (n = 9) and belongingness (n = 5) as critical factors. As user VS57 recounts the decision to answer a question on SQA: “I just thought, ‘This is so awful! This poor girl!’ and I thought just maybe she’d listen to my answer reassuring her.” Other elements motivating users to answer others’ questions had to do with their perceived subject expertise (n = 5), serendipity in stumbling onto a question they knew how to answer (n = 4), and the gamification elements of the service (n = 3).

Sources Consulted

Both VRS and SQA users were asked what sources they would consult when looking for information outside of their area of expertise. The top sources named were social search (n = 19) and Google (n = 17). Social search entails online information seeking in which an individual consults social resources such as friends, subject experts, or unknown people online. Interpersonal sources identified included peers (n = 7), professors/teachers (n = 4), experts (n = 4), librarians (n = 3), and colleagues (n = 1).

When users felt the need to contact subject experts, their choice of communication medium varied based primarily on their relationship with the expert (n = 10), followed by what would give them the highest-quality information (n = 3). Whom users identified as subject experts hinged on their personal networks (n = 12) and confidence that the expert would know the answer (n = 11). In some cases, believing that the person would be able to find the answer (n = 5), trusting their answer (n = 5), or knowing that they would understand the user’s query (n = 6) was enough for the user to frame that person as an expert.

Following social search was Google, which seemed to be the next relevant option if a trusted interpersonal source was not available: “I guess usually Google unless I specifically know a person that I think that person would know the answer” (VS57). Many users mentioned either using or thinking about using VRS (n = 39). Making a move from thinking about VRS to actually using it appears to depend on the information need (n = 9): “I’ll use ask-a-librarian if it’s the night before my project and the library’s closed. When my other options fail, basically” (VS45). Most often, users reported using VRS if their information need was educational or research-based (n = 6).

Although more users reported at least considering VRS when they had a question outside their area of expertise, and also designated the high levels of quality and satisfaction found within VRS as motivating their use, overall, as reported in the Demographics section, there were more regular users of SQA than of VRS. One reason for this discrepancy may be that many users reported using Google and other search engines as an information source; another may be that it is challenging to identify VRS users because of privacy restrictions implemented by VRS providers and librarians. Through these searches, VRS users often indicated (n = 17) being pushed to SQA sites indirectly. According to user VS35: “I basically looked up the question on Google, and the first thing that usually comes up is Yahoo! Answers, for my types of questions at least.” Even when users reported directly visiting SQA sites for information (n = 9), the majority (n = 6) searched these sites for prior questions and answers relevant to theirs: “I went to Yahoo! and I typed in the main words of my question, and it’s usually the second or third thing to pop up and clicked that” (VS18).

Evaluation of Service

Users were asked to evaluate VRS and SQA. For VRS, users mentioned a variety of factors they identified as necessary for evaluation. These factors were accessibility (n = 7), rapidity of information delivery (n = 6), reliability (n = 5), personal connection with a subject expert (n = 5), variety of sources (n = 3), knowledge and expertise of the librarian (n = 3), additional assistance (n = 2), and the simple fact that the service is “easier” (n = 1). When compared to the few key factors motivating VRS use (quality, satisfaction, variety of services), the more diverse factors impacting users’ evaluation of VRS suggest that there may be a communication gap between what VRS can deliver for users and users’ perceptions of the service.

For SQA, users evaluated the service based on its delivery of varied opinions (n = 13); its trustworthiness (n = 9), with users split on whether to trust (n = 5) or not trust (n = 4) results; its relational characteristics (n = 9); and the similarity of SQA content to users’ information needs. These results parallel users’ identified motivations for SQA use related to its affective components. Users’ identification of the variable quality and satisfaction of SQA as affecting their motivation to use the service appears to translate into whether they consider the service trustworthy. Underlying this variability in trustworthiness is the perceived absence of traditional subject experts: “It’s serious but not something you can reference because it’s a free service and not recognized by anything except Yahoo!” (VS19). The emphasis on finding content relevant to their information needs may parallel how users often access SQA services, whether indirectly or directly, without asking a question and instead by searching archived content. For instance, user VS24 notes that they often assess relevancy by looking through archived SQA content and “see[ing] other answers that are similar to what I’m looking for” (VS24). Perhaps not surprisingly, SQA users report using information from these services to inform further searching (n = 24) more often than for direct decision-making (n = 16). This use is likely influenced by their variable trust in the information received.

For VRS, the CIT was used to elicit past experiences of successful and unsuccessful interactions utilizing this service. VRS users who had successful experiences noted they were for search help (n = 11), ranging from basic (n = 8) to advanced (n = 3), and to find articles (n = 5). Several other experiences noted by participants tended to coalesce under a broader umbrella of library-specific reference services, such as help with formatting (n = 1) or locating a book (n = 2). Less often mentioned were successful experiences that hinged on librarian subject expertise (n = 2) or credibility (n = 1). Reasons identified by users as contributing to a successful experience include fast delivery of the needed information (n = 6), provision of good answers (n = 6), and the ability to deliver the information (n = 4). Other reasons less mentioned also clustered around elements of service excellence, such as providing help until the user learned (n = 2) and providing enough information so that the user did not have to follow up (n = 2).

While many respondents did not have any unsuccessful VRS interactions to speak of (n = 10), those who did identified experiences with accessing library resources (n = 3), searching the library website (n = 1), formatting (n = 1), and asking an IT-related question (n = 1). Some of the reasons these interactions were considered unsuccessful had to do with the irrelevance of the answer to the user’s initial question (n = 6); wait time (n = 3) and time pressure (n = 3); issues with systems (n = 2) or lacking collections (n = 1); interpersonal dynamics, such as the librarian being dismissive (n = 1) or blaming the user for the failed search (n = 1); and the librarian’s lack of subject expertise (n = 1).

As subject expertise did not appear to be a significant factor addressed by users when evaluating VRS, it also varied in level of importance when users were asked about it directly. Specifically, half of the users (n = 10) said it was very important that a VRS librarian had subject expertise, while the other half was divided between subject expertise being fairly important (n = 6) and not important at all (n = 4). In fact, user VS42 “didn’t know that librarians specialized in subject areas,” and said that what’s most important is high-quality service. The librarian must be able to “direct me where to go” (VS42). This perspective is also reflected in more users reporting that they have never asked for a subject specialist (n = 8) than reporting that they have (n = 4). Reasons for wanting to speak to a subject specialist varied. Some wanted a subject specialist all the time (n = 3), but most others wanted a specialist for specific situations, such as when they have limited knowledge or expertise (n = 2), for a high-stakes situation (n = 1), or to clarify their question (n = 1). Connecting these findings to the CIT questions about successful and unsuccessful VRS services, it appears that most users tend to evaluate VRS based on service quality more often than subject expertise, and therefore do not prioritize the latter in their evaluations. When users were asked how they would evaluate a librarian with subject expertise, they addressed factors like the ability to address the information need (n = 13) and trust (n = 11) as important to consider. User VS44 notes the unique need for subject expertise: “Usually the general public don’t have too specific of questions, but if you’re working with a special institution, I would want someone who has knowledge of the topic I’m looking for because it usually means they have more experience looking for the answers.”

Magic Wand

Participants were asked a magic wand question, which asked them to describe the perfect site for all of their information-seeking needs. Their replies were divided by answerer expertise, site interface and display, communication between askers and answerers, cost, and reward and recognition for answerers. The majority of users stated they wanted experts to address their questions. As user VS65 says: “I would probably want someone who has some sort of expertise in that subject, not just some random guy who thinks he’s right.” Most users preferred that this expertise come from formal education (n = 20); however, a subset (n = 10) believed that people “who have definite real-world experience” (VS31) could be considered experts, even if this expertise did not come from formal training.

When discussing the site interface and display, users often compared their proposed site to existing ones (n = 22), such as Google (n = 8). Desired site features included information organized into facets, or categories (n = 13) and a search bar (n = 8). Less prevalent, but also discussed were display features, such as the presence of colors that “appeal to people’s eyesight” (VS62) and the use of avatars (n = 2).

Preferences for how users would communicate were divided between asynchronous (n = 5) and synchronous options, with named synchronous options including Skype (n = 3) and IM/chat (n = 2). Users reported the need for communications to be convenient (n = 10), facilitated by the site design: “It’d be very, very user-friendly and simple to work with . . .” (VS68).

Users were less concerned with cost and with rewards and recognition for answerers. For cost, users were split among a free system (n = 3), a paid plan (n = 2), and a tiered plan (n = 1). Two users mentioned that the site should have a reward or recognition mechanism similar to the gamification elements included on many SQA sites: “People post their questions on there, and they get points for it or rewards if you post the answer” (VS29).

Discussion

Based on these findings, what can VRS and SQA learn from each other concerning user motivations, expectations, and use of these services? Informed by participant accounts, VRS functions well in addressing a variety of information needs within academic and institutional information-seeking situations. Participants reported high levels of both quality and satisfaction with the service. These findings echo those from other research studies, which position VRS as addressing fact-based, often in-depth questions that require subject expertise and as prioritizing the process behind delivering these answers.53

Despite a high number of users reporting at least considering VRS, few said they actually used the service; most were only thinking about using it. These findings suggest that a gap exists between what users expect from VRS, when they are motivated to use it, and how they actually use it. While users understand the strengths of VRS in theory, in practice VRS simply is not the first resource that comes to mind when addressing their information needs. This finding parallels Zhang and Deng’s finding that the majority of surveyed online Q&A users were not aware of VRS. Even those who were aware identified barriers to use of VRS, namely unfamiliarity with the service and the perceived difficulties of using VRS.54

Further, those who did use VRS reported not asking for librarians with the necessary subject expertise despite stating that they prioritized expertise when answering the magic wand question. This finding highlights the importance of VRS librarians fostering a “culture of willingness” to collaborate with other subject specialists both outside of and within their subject areas to ensure high-quality, exhaustive answers rather than presuming that the user will necessarily ask for a subject expert.55 Further, since users prioritize service quality over expertise, it is essential that VRS training continues to focus on customer service skills.56

One reason why this gap between user consideration of VRS and their actual use of the service may exist is that VRS does not align with how users typically look for information. Confirming findings from past research, users overwhelmingly identified Google as their first resource when seeking any kind of information.57 Often, Google would indirectly lead them to SQA sites. SQA research has responded to this and similar findings by investigating methods to best match a user’s question with archived SQA content.58 VRS, on the other hand, does not have similar archived question-answer pairs, meaning that users are less likely to stumble upon these services when engaged in typical information-seeking situations. Based on this finding, it is perhaps not surprising that when individuals were asked what their ideal site for fulfilling their information-seeking needs would look like, they said that the site would have aesthetics similar to those of the sites they frequently use, including Google.

VRS also lacks some of the affective elements that users reported valuing when seeking information. Users reported consulting known interpersonal sources for information, deciding on the communication medium based on their relationship with the source. This observation reflects the importance of a person’s social network to their information-seeking behaviors.59 Further, a stated motivation for the use of SQA services was their affective elements, such as serving as a resource for advice or entertainment, or demonstrating altruism when answering others’ questions, confirming previous research findings.60 These elements are missing in VRS interactions, particularly when the librarian is unknown to the user.

Users indicated that despite its named advantages, SQA had significant disadvantages. The results are of variable quality and satisfaction; users also report that trustworthiness is not a significant factor in their decision to use SQA. Perhaps, as a result, users said that they employed information from SQA to inform future searches more often than to make direct decisions.

Based on the connections made between VRS and SQA and informed by prior research,61 the research team can make several design recommendations for VRS services. These recommendations are:

  • Archive VRS transcripts for Search Engine Optimization (SEO). If people are using Google to look for information and getting an SQA site as their first hit because it uses natural language, perhaps more library websites should publicly list and archive their questions and answers so that they may also be retrieved by search engines (a minimal markup sketch follows this list).
  • Emphasize subject experts in service delivery. In prior research asking academic librarians to compare SQA to VRS, Shah and Kitzie found that librarians would limit referrals to subject experts due to perceived time constraints.62 However, as indicated by this research, users envision timeliness and convenience as two separate factors. Users are willing to wait if this wait signifies delivery of a relevant answer that they may not have been able to glean from a search engine or SQA site. Therefore, VRS librarians should collaborate via consortia to connect individuals with subject experts at the beginning of the reference interview; further, librarians need to revise their VRS scripts and site design to make the user aware that this option is available. For instance, users could have access to a drop-down menu to select a librarian with related subject expertise when asking a question. Or VRS could push pop-up chats to users at times of need, such as when a user’s search retrieves no results from a library website search or when a user spends a certain number of seconds on the library website with no activity.
  • Importance of VRS interface and display design. As indicated by user responses for the magic wand question, aesthetics are important. In fact, aesthetics may be more important than mode of communication—as users seemed to prefer asynchronous over synchronous resources. Therefore, when designing VRS resources, librarians must think beyond emulating a chat window to creating other asynchronous resources, such as a Q&A archive, which looks like sites and tools users are already familiar with.
  • Integrate VRS and SQA. Some aspects of SQA, particularly its provision of varied opinions and its relational elements, may not be fully replicable in VRS. But VRS can include options for more information on a specific subject from a variety of sources, as well as point users to SQA sites when questions may require affective factors beyond what a librarian could reasonably provide. VRS can also offer additional resources beyond Q&A, such as online support groups. In an academic context, for example, VRS could offer support groups for first-year undergraduates, for students within particular academic disciplines, and for graduate students. Further, VRS should work with SQA so that the latter would push requests that require high-quality, trustworthy information to a librarian.
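
As a minimal sketch of the first recommendation above (archiving VRS transcripts for SEO), the snippet below renders an anonymized question-answer pair as schema.org QAPage markup in JSON-LD, a vocabulary search engines use to recognize Q&A content. The example question, answer, URL, and helper function are invented for illustration and assume the transcript has already been anonymized.

```python
# Minimal sketch: render an anonymized VRS question-answer pair as schema.org
# QAPage markup (JSON-LD) for inclusion in an archived transcript page, so that
# search engines can recognize it as Q&A content. The question, answer, and URL
# below are hypothetical examples, not actual transcript data.
import json

def qa_page_jsonld(question_title, question_text, answer_text, page_url):
    data = {
        "@context": "https://schema.org",
        "@type": "QAPage",
        "mainEntity": {
            "@type": "Question",
            "name": question_title,
            "text": question_text,
            "answerCount": 1,
            "acceptedAnswer": {
                "@type": "Answer",
                "text": answer_text,
                "url": page_url,
            },
        },
    }
    # Wrap in a script tag so it can be embedded in the archived page's HTML.
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

print(qa_page_jsonld(
    "How do I find peer-reviewed articles on housing policy?",
    "I've already Googled it and can't find scholarly sources on housing policy.",
    "Try the library's subscription databases and limit results to peer-reviewed journals.",
    "https://library.example.edu/answers/1234",
))
```

Embedding markup of this kind in publicly archived, anonymized transcript pages would let the question-answer pairs surface in the same web searches that currently lead users to SQA sites.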

Conclusion

Informed by fifty-one in-depth user interviews, this study investigated the motivations, expectations, assessment, and use of online Q&A services. Online Q&A services are a fruitful context for investigation given the continuing rise of people’s social search in digital environments. It is one of the few studies to make a direct comparison between VRS and SQA services. Making this comparison is vital since prior research has indicated that both services are not in competition, but instead are complementary.

This study is not without limitations, offering a snapshot of user perceptions during the study’s data collection period. The telephone interview medium limited the contextual richness of interviews, since the team missed additional FtF information such as facial expressions. Further, the sample was nonrandom, meaning that the results are not generalizable to all VRS or SQA users. Additionally, the sample likely underrepresented VRS users due to the privacy restrictions of libraries in protecting user identity. Despite these limitations, these findings deepen our understanding of an issue explored here qualitatively, one that requires additional research to further test our emergent codebook.

Research findings suggest that online Q&A users do not necessarily take advantage of the observed complementarity between SQA and VRS. Instead, most users reported using SQA services even when these did not adequately meet their expectations for quality and satisfaction. A key reason for this heightened use of SQA services as compared to VRS can be attributed to the integration of SQA into the way most users reported seeking information. Search engines like Google, as well as social search sites based on users’ networks, often connected them to SQA sites indirectly. In this way, users sometimes satisficed by using SQA services when requiring subject expertise, sacrificing the desired quality and satisfaction in doing so.63 This finding suggests that there are other contextual factors at play beyond a user being aware of the information need and the sources available to meet this need. Instead, our findings suggest the need for VRS to change how they are presented and to increase their integration with other digital sources to better match how individuals commonly look for information online. This change is critically important now, given the impact of the COVID-19 pandemic and its aftermath on library reference services. Individuals have had to face the complete absence of FtF services and the necessity of relying more fully on libraries’ virtual presence. Virtual services in the ideal strive to be reassuring, enduring, and effective. This research pushes us to be more collaborative, open, and available.

Acknowledgements

This research was funded by the Institute of Museum and Library Services (IMLS) as part of a larger project entitled Cyber Synergy: Seeking Sustainability through Collaboration between Virtual Reference and Social Q&A Sites. The project was funded by IMLS for the period of 10/01/11 to 9/30/14, in the amount of $250,000. The co-PIs on the project were Marie L. Radford, Lynn S. Connaway, and Chirag Shah. For more information, the project website can be accessed via the following link: http://www.oclc.org/research/activities/synergy/default.htm.

References

  1. Lynn Silipigni Connaway, The Library in the Life of the User: Engaging with People Where They Live and Learn (Dublin, Ohio: OCLC Research, 2015); Lorcan Dempsey, “From Infrastructure to Engagement: Thinking about the Library in the Life of the User” (keynote presented at Minitex 24th Annual Interlibrary Loan Conference, St. Paul, MN, May 12, 2015).
  2. Firouzeh F. Logan and Krystal Lewis, “Quality Control: A Necessary Good for Improving Service,” The Reference Librarian 52, no. 3 (2011): 218–30.
  3. Tai Phan, Laura Hardesty, and Jamie Hug, Academic Libraries: 2012, report issued by First Look NCES 2014-038 (Washington, DC: National Center for Education Statistics, 2014).
  4. Deborah L. Meert and Lisa M. Given, “Measuring Quality in Chat Reference Consortia: A Comparative Analysis of Responses to Users’ Queries,” College & Research Libraries 70, no. 1 (2009): 71–84.
  5. M. L. Radford, L. Costello, and K. Montague, “Surging Virtual Reference Services: COVID-19 a Game Changer,” College & Research Libraries News 82, no. 3 (2021): 106–7, 113, https://doi.org/10.5860/crln.82.3.106.
  6. Bettina Askew et al., “Georgia Libraries Respond to COVID-19 Pandemic,” Georgia Library Quarterly 57, no. 3 (2020); Dipti Mehta and Xiaocan Wang, “COVID-19 and Digital Library Services—A Case Study of a University’s Library,” Digital Library Perspectives (2020); Ginelle Baskin, “Navigating Virtual Reference amidst COVID-19,” Tennessee Libraries 70, no. 2 (2020); Springshare, International Chat Use Data, 2020, distributed by Springshare.
  7. Elisa Shearer and Jeffrey Gottfried, News Use Across Social Media Platforms 2017 (Washington, DC: Pew Research Center, 2017).
  8. David Westerman, Patric R. Spence, and Brandon Van Der Heide, “Social Media as Information Source: Recency of Updates and Credibility of Information,” Journal of Computer-Mediated Communication 19, no. 2 (2014): 171–83; Jonathan M. Cox, “The Source of a Movement: Making the Case for Social Media as an Informational Source Using Black Lives Matter,” Ethnic and Racial Studies 40, no. 11 (2017): 1847–54.
  9. Chirag Shah and Vanessa Kitzie, “Social Q&A and Virtual Reference—Comparing Apples and Oranges with the Help of Experts and Users,” Journal of the American Society for Information Science and Technology 63, no. 10 (2012): 2020.
  10. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36.
  11. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36.
  12. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2022.
  13. Logan and Lewis, “Quality Control,” 218–30.
  14. Logan and Lewis, “Quality Control,” 218–30.
  15. Marie L. Radford and Lynn Silipigni Connaway, “Not Dead Yet! A Longitudinal Study of Query Type and Ready Reference Accuracy in Live Chat and IM Reference,” Library & Information Science Research 35, no. 1 (2013): 2–13.
  16. Lorna Rourke and Pascal Lupien, “Learning from Chatting: How our Virtual Reference Questions are Giving us Answers,” Evidence Based Library and Information Practice 5, no. 2 (2010): 63–74.
  17. Tara Mawhinney and Svetlana Kochkina, “Is the Medium the Message? Examining Transactions Conducted via Text in Comparison with Traditional Virtual Reference Methods,” Journal of Library & Information Services in Distance Learning 13, no. 1–2 (2019): 56–73.
  18. Jaclyn McKewan and Scott S. Richmond, “Needs and Results in Virtual Reference Transactions: A Longitudinal Study,” Reference Librarian 58, no. 3 (2017): 179–89.
  19. Lynn Connaway and Marie L. Radford, Seeking Synchronicity: Revelations and Recommendations for Virtual Reference (Dublin, Ohio: OCLC Research, 2011); Radford and Connaway, “Not Dead Yet!,” 2–13; Pnina Shachaf, “Social Reference and Library Reference Service,” in Proceedings of the 2009 International Federation of Library Associations and Institutions (IFLA) Satellite Meeting on Emerging Trends in Technology: Libraries between Web 2.0, Semantic Web and Search Technology 20 (2009): 1–4; M. Elena Gómez-Cruz, “Electronic Reference Services: A Quality and Satisfaction Evaluation,” Reference Services Review 47, no. 2 (2020): 118–33.
  20. Sharon Naylor, Bruce Stoffel, and Sharon Van Der Laan, “Why Isn’t Our Chat Reference Used More? Findings of Focus Group Discussions with Undergraduate Students,” Reference & User Services Quarterly 47, no. 4 (2008): 342–54; Jeffrey Pomerantz, Lili Luo, and Charles R. McClure, “Peer Review of Chat Reference Transcripts: Approaches and Strategies,” Library & Information Science Research 28, no. 1 (2006): 24–48.
  21. Sarah Maximiek, Erin Rushton, and Elizabeth Brown, “Coding into the Great Unknown: Analyzing Instant Messaging Session Transcripts to Identify User Behaviors and Measure Quality of Service,” College & Research Libraries 71, no. 4 (2010): 361–74.
  22. Pomerantz, Luo, and McClure, “Peer Review of Chat Reference Transcripts,” 24–48; Kate Fuller and Nancy H. Dryden, “Chat Reference Analysis to Determine Accuracy and Staffing Needs at one Academic Library,” Internet Reference Services Quarterly 20, no. 3–4 (2015): 163–81.
  23. Ann Agee, “Language Style Matching as a Measure of Librarian/Patron Engagement in Email Reference Transactions,” Journal of Academic Librarianship 45, no. 6 (2019): 102069; Teagan Eastman et al., “Chatting Without Borders: Assessment as the First Step in Cultivating an Accessible Chat Reference Service,” Journal of Library & Information Services in Distance Learning 13, no. 3 (2019): 262–82.
  24. Julie Hunter et al., “Chat Reference: Evaluating Customer Service and IL Instruction,” Reference Services Review 47, no. 2 (2019): 134–50; Rebecca Hill Renirie, “Instruction through Virtual Reference: Mapping the ACRL Framework,” Reference Services Review 48, no. 2 (2020): 243–57.
  25. Marie L. Radford et al., “Shared Values, New Vision: Collaboration and Communities of Practice in Virtual Reference and SQA,” Journal of the Association for Information Science and Technology 68, no. 2 (2017): 438–49; Jessica A. Koos et al., “A Partnership between Academic and Public Librarians: ‘What the Health’ Workshop Series,” Journal of the Medical Library Association: JMLA 107, no. 2 (2019): 232.
  26. Pnina Shachaf, “Social Reference: Toward a Unifying Theory,” Library & Information Science Research 32, no. 1 (2010): 66–76.
  27. Daphne R. Raban, “User-centered Evaluation of Information: A Research Challenge,” Internet Research 17, no. 3 (2007): 306–22.
  28. Howard Rosenbaum and Pnina Shachaf, “A Structuration Approach to Online Communities of Practice: The Case of Q&A Communities,” Journal of the American Society for Information Science and Technology 61, no. 9 (2010): 1933–44; Pnina Shachaf and Howard Rosenbaum, “Online Social Reference: A Research Agenda through a STIN Framework,” in iConference 2009 Proceedings (2009); Jun Zhang, Mark S. Ackerman, and Lada Adamic, “Expertise Networks in Online Communities: Structure and Algorithms,” in Proceedings of the 16th International Conference on World Wide Web (2007): 221–30.
  29. Zhongfeng Zhang, Qiudan Li, Daniel Zeng, and Heng Gao, “Extracting Evolutionary Communities in Community Question Answering,” Journal of the Association for Information Science and Technology 65, no. 6 (2014): 1170–86.
  30. Christopher P. Lueg, “Querying Information Systems or Interacting with Intermediaries? Towards Understanding the Informational Capacity of Online Communities,” in Proceedings of the American Society for Information Science and Technology 44, no. 1 (2007): 1–6.
  31. Chirag Shah, Jung Sun Oh, and Sanghee Oh, “Exploring Characteristics and Effects of User Participation in Online Social Q&A Sites,” First Monday 13, no. 9 (2008).
  32. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36.
  33. Shah, Oh, and Oh, “Exploring Characteristics and Effects of User Participation in Online Social Q&A Sites.”
  34. Rich Gazan, “Seekers, Sloths and Social Reference: Homework Questions Submitted to a Question-Answering Community,” New Review of Hypermedia and Multimedia 13, no. 2 (2007): 239–48.
  35. Pradeep K. Roy et al., “Identifying Reputation Collectors in Community Question Answering (CQA) Sites: Exploring the Dark Side of Social Media,” International Journal of Information Management 42 (2018): 25–35.
  36. Sheizaf Rafaeli, Daphne Ruth Raban, and Gilad Ravid, “How Social Motivation Enhances Economic Activity and Incentives in the Google Answers Knowledge Sharing Market,” International Journal of Knowledge and Learning 3, no. 1 (2007): 1–11; Kevin Kyung Nam, Mark S. Ackerman, and Lada A. Adamic, “Questions In, Knowledge In?: A Study of Naver’s Question Answering Community,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2009): 779–88; Soojung Kim, “Questioners’ Credibility Judgments of Answers in a Social Question and Answer Site,” Information Research 15, no. 2 (2010); Rich Gazan, “Social Q&A,” Journal of the American Society for Information Science and Technology 62, no. 12 (2011): 2301–12.
  37. Yan Zhang, “Contextualizing Consumer Health Information Searching: An Analysis of Questions in a Social Q&A Community,” in Proceedings of the 1st ACM International Health Informatics Symposium (2010): 210–19; Erik Choi and Chirag Shah, “User Motivations for Asking Questions in Online Q&A Services,” Journal of the Association for Information Science and Technology 67, no. 5 (2016): 1182–97; Erik Choi and Chirag Shah, “Asking for More than an Answer: What do Askers Expect in Online Q&A Services?,” Journal of Information Science 43, no. 3 (2017): 424–35.
  38. Choi and Shah, “User Motivations for Asking Questions in Online Q&A Services,” 1182–97.
  39. Lada A. Adamic et al., “Knowledge Sharing and Yahoo Answers: Everyone Knows Something,” in Proceedings of the 17th International Conference on World Wide Web (2008): 665–74; F. Maxwell Harper et al., “Predictors of Answer Quality in Online Q&A Sites,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2008): 865–74; Jiwoon Jeon et al., “A Framework to Predict the Quality of Answers with Non-Textual Features,” in Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2006): 228–35; Chorng-Shyong Ong, Min-Yuh Day, and Wen-Lian Hsu, “The Measurement of User Satisfaction with Question Answering Systems,” Information & Management 46, no. 7 (2009): 397–403.
  40. Yandong Liu, Jiang Bian, and Eugene Agichtein, “Predicting Information Seeker Satisfaction in Community Question Answering,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2008): 483–90.
  41. Chirag Shah and Jeffrey Pomerantz, “Evaluating and Predicting Answer Quality in Community QA,” in Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (2010): 411–18.
  42. Soojung Kim and Sanghee Oh, “Users’ Relevance Criteria for Evaluating Answers in a Social Q&A Site,” Journal of the American Society for Information Science and Technology 60, no. 4 (2009): 716–27.
  43. Andrew W. Vargo and Shigeo Matsubara, “Identity and Performance in Technical Q&A,” Behaviour & Information Technology 37, no. 7 (2018): 658–74.
  44. Soojung Kim, Sanghee Oh, and Jung Sun Oh, “Evaluating Health Answers in a Social Q&A Site,” in Proceedings of the American Society for Information Science and Technology 45, no. 1 (2008): 1–6; Jin Zhang and Yiming Zhao, “A User Term Visualization Analysis Based on a Social Question and Answer Log,” Information Processing & Management 49, no. 5 (2013): 1019–48.
  45. Alton YK Chua and Snehasish Banerjee, “Answers or no Answers: Studying Question Answerability in Stack Overflow,” Journal of Information Science 41, no. 5 (2015): 720–31; Duen‐Ren Liu et al., “Complementary QA Network Analysis for QA Retrieval in Social Question‐Answering Websites,” Journal of the Association for Information Science and Technology 66, no. 1 (2015): 99–116; Hamid Naderi et al., “Similarity of Medical Concepts in Question and Answering of Health Communities,” Health Informatics Journal 26, no. 2 (2020): 1443–54.
  46. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36; Yin Zhang and Shengli Deng, “Social Question and Answer Services Versus Library Virtual Reference: Evaluation and Comparison from the Users’ Perspective,” Information Research: An International Electronic Journal 19, no. 4 (2014): n4.
  47. Chirag Shah, Marie L. Radford, and Lynn Silipigni Connaway, “Collaboration and Synergy in Hybrid Q&A: Participatory Design Method and Results,” Library & Information Science Research 37, no. 2 (2015): 92–99.
  48. Lynn S. Connaway and Marie L. Radford, Research Methods in Library and Information Science (Santa Barbara, CA: ABC-CLIO, 2017).
  49. John C. Flanagan, “The Critical Incident Technique,” Psychological Bulletin 51, no. 4 (1954): 327–58; Brenda Dervin, “On Studying Information Seeking Methodologically: The Implications of Connecting Metatheory to Method,” Information Processing & Management 35, no. 6 (1999): 727–50; Lynn S. Connaway, Timothy J. Dickey, and Marie L. Radford, “‘If It Is Too Inconvenient I’m Not Going After It:’ Convenience as a Critical Factor in Information-Seeking Behaviors,” Library & Information Science Research 33, no. 3 (2011): 179–90; Lynn S. Connaway, David White, and Donna Lanclos, “Visitors and Residents: What Motivates Engagement with the Digital Information Environment?,” in Proceedings of the American Society for Information Science and Technology 48, no. 1 (2011): 1–7.
  50. Barney G. Glaser and Anselm L. Strauss, Discovery of Grounded Theory: Strategies for Qualitative Research (1968; repr., New York: Routledge, 2017); Kathy Charmaz, Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis (Thousand Oaks, CA: Sage Publications, 2014).
  51. Glaser and Strauss, Discovery of Grounded Theory; Charmaz, Constructing Grounded Theory.
  52. Martin L. Maehr, “Culture and Achievement Motivation,” American Psychologist 29, no. 12 (1974): 887–96; Terence R. Mitchell, “Motivation: New Directions for Theory, Research, and Practice,” Academy of Management Review 7, no. 1 (1982): 80–88.
  53. Lynn S. Connaway and Marie L. Radford, “Academic Library Assessment: Beyond the Basics” (workshop presented at Marquette University, Milwaukee, WI, July 18, 2013); Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36; Zhang and Deng, “Social Question and Answer Services Versus Library Virtual Reference,” n4.
  54. Zhang and Deng, “Social Question and Answer Services Versus Library Virtual Reference,” n4.
  55. Radford et al., “Shared Values, New Vision,” 446.
  56. Fuller and Dryden, “Chat Reference Analysis to Determine Accuracy and Staffing Needs at One Academic Library,” 163–81.
  57. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36.
  58. Liu et al., “Complementary QA Network Analysis for QA Retrieval in Social Question‐Answering Websites,” 99–116.
  59. Rosenbaum and Shachaf, “A Structuration Approach to Online Communities of Practice: The Case of Q&A Communities,” 1933–44; Shachaf and Rosenbaum, “Online Social Reference: A Research Agenda through a STIN Framework”; Zhang, Ackerman, and Adamic, “Expertise Networks in Online Communities: Structure and Algorithms,” 221–30; Zhang et al., “Extracting Evolutionary Communities in Community Question Answering,” 1170–86.
  60. Rafaeli, Raban, and Ravid, “How Social Motivation Enhances Economic Activity and Incentives in the Google Answers Knowledge Sharing Market,” 1–11; Nam, Ackerman, and Adamic, “Questions In, Knowledge In?: A Study of Naver’s Question Answering Community,” 779–88; Kim, “Questioners’ Credibility Judgments of Answers in a Social Question and Answer Site”; Gazan, “Social Q&A,” 2301–12; Choi and Shah, “User Motivations for Asking Questions in Online Q&A Services,” 1182–97; Choi and Shah, “Asking for More than an Answer: What do Askers Expect in Online Q&A Services?,” 424–35.
  61. Shah, Radford, and Connaway, “Collaboration and Synergy in Hybrid Q&A: Participatory Design Method and Results,” 92–99.
  62. Shah and Kitzie, “Social Q&A and Virtual Reference,” 2020–36.
  63. Brenda Dervin, Lois Foreman-Wernet, and Eric Lauterbach, Sense-making Methodology Reader: Selected Writings of Brenda Dervin (New York: Hampton Press, 2003).
