The Journal of Academic Librarianship 43 (2017) 469–478
Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library: A Standards-Based Assessment
Kelsey Keyes, Ellie Dworak⁎
Albertsons Library, Boise State University, USA
Keywords: Academic library; Reference; Virtual reference; Library staffing; Undergraduate students

Abstract
Academic libraries have long experimented with how to staff the reference desk. Recent trends at college and university libraries indicate a shift toward a tiered staffing model, relying on a mix of professional librarians, library paraprofessional staff and often graduate students when available. Fewer academic libraries employ undergraduate students to work at the reference desk. This paper examines the use of undergraduate library assistants specifically to staff chat reference services at an academic library. It analyzes chat transcripts for content and comparative quality between different types of answerers: professional librarians, paraprofessional staff, and undergraduate students. Our analysis of 451 chat reference transcripts determined that undergraduate students can indeed provide satisfactory chat reference services, comparable in quality and content to that of paraprofessional staff and professional librarians. The data suggests that having well-trained undergraduate students staff chat reference is a viable, and even desirable, option for academic libraries.
Introduction

Answerer: Hi, you're connected now. So you need juvenile recidivism statistics specifically from Idaho, correct?
Patron: Yeah if they can be found. This topic seems pretty difficult to find exact info.
Answerer: I'm wondering if maybe this information is protected since it involves juveniles.
Patron: It could be. I didn't think of that. That's probably why it is so difficult to find.
Answerer: I found information on Ada County's website, but I didn't see anything on recidivism. This may help you out though. https://adacounty.id.gov/Juvenile-Court/Annual-Reports
Patron: Answerer, Thank you for all your help and the links you sent me. I'll take a look and I may have to choose a different topic. I really appreciate you looking into it.
Answerer: Yeah no problem. I would suggest contacting the juvenile department at Ada County. Perhaps they have the statistics they could send you. You could also contact the library liaison featured on the criminal justice library guide for more help. Anything else I can help you with?
Patron: I think I'm ok for now. Thanks again.
Answerer: idjc.idaho.gov is the website for the Idaho Department of Juvenile Corrections. Thank you for using library chat and have a wonderful day! Feel free to utilize this service again with any other questions you might have.
The chat transcript above was received by our institution via chat reference. We rated this as a 4 on the READ (Reference Effort Assessment Data) Scale, a “six-point scale tool for recording… supplemental qualitative statistics… placing an emphasis on recording the effort, skills, knowledge, teaching moment, techniques, and tools utilized by the librarian during a reference transaction” (Gerlich, n.d.). We coded this chat reference interaction based on the following criteria: the answerer (the term used by our chat software to designate the individual who engages with a patron in a chat transaction) greeted the patron; the answerer's language was clear, courteous, and grammatically correct; the answerer performed a search for the patron, provided instruction along the way and provided links and sources when necessary; and, finally, the answerer encouraged the patron to come back with further questions, and signed off to end the chat. Overall, we determined that this was an excellent example of a successful chat reference interaction. What makes this notable is that the answerer was not a professional librarian, but an undergraduate student. Academic libraries have long experimented with how to staff the reference desk. Recent trends at college and university libraries indicate a shift toward a tiered staffing model. Many academic libraries rely on a mix of professional librarians, library paraprofessional staff and sometimes graduate students to staff their reference desks. Fewer academic libraries employ undergraduate students to work at the reference desk.
⁎ Corresponding author at: Albertsons Library, Boise State University, 1865 West Cesar Chavez Lane, Boise, ID 83725, USA. E-mail address: [email protected] (E. Dworak).
http://dx.doi.org/10.1016/j.acalib.2017.09.001 Received 21 July 2017; Received in revised form 6 September 2017; Accepted 15 September 2017 Available online 22 September 2017 0099-1333/ Published by Elsevier Inc.
This paper examines the use of undergraduate library assistants to staff chat reference services at an academic library. It analyzes chat transcripts for content and comparative quality, as defined by a set of answerer behaviors based on the Reference and User Services Association (RUSA) Behavioral Guidelines for Reference and Information Services (RUSA Guidelines), between different types of answerers: professional librarians, paraprofessional staff, and undergraduate students (RASD Ad Hoc Committee, 2013).

Literature review

Reference services staffing in the academic library

In recent years many college and university libraries have had to reconsider the role and staffing of academic reference services for a variety of reasons. Brenza, Kowalsky, and Brush (2015); Bracke, Brewer, Huff-Eibl, and Lee (2007); and Faix et al. (2010) noted the need for extended library hours accompanied by a lack of additional staffing to cover those hours. Seeholzer (2013) observed that library staff's roles and duties have grown and changed, even though the number of staff members hasn't increased (p. 216). Budgetary constraints are another commonly cited reason to consider staffing models. At times, staffing concerns have been driven by new reference service models such as chat reference. In 2006, Blonde surveyed Canadian academic libraries regarding chat reference staffing, and reported that chat reference was an added workload for most staffers who took on the activity (p. 83), which confirms an unsurprising hypothesis regarding the realities of reference staffing. In addition to budget and staffing concerns, discussions of reference staffing focus on the changing role of reference. Bracke et al. (2007) found that customers need less assistance from the reference and circulation desks, which is part of wider changes in library use patterns (p. 248). Similarly, King and Christensen-Lee (2014) found that visitors were asking more technical and directional questions, rather than ready reference and subject-specialty questions (p. 34). Faix et al. (2010) noted that because patrons could access and seek assistance for electronic and online resources, rather than visiting a reference desk for assistance, reference librarians' responsibilities were changing (p. 94). Bracke et al. (2007) reached a similar conclusion, citing statistics demonstrating a decline in both the number and the complexity of queries at all service points (p. 261). Librarians are increasingly tasked with high-level work, and Hendricks and Buchanan (2013) indicated that one benefit to staffing changes was that faculty librarians were "being relieved of the 'burden' of sitting at the reference desk" (p. 39). Whether or not all librarians agree that staffing chat services is a "burden," research clearly shows that librarians are expected to take on new duties. The most commonly mentioned shift is the expectation that professional librarians take on increased library instruction duties, leaving them less time to cover reference responsibilities. Faix (2014) found that "the reference librarians… were experiencing ever-increasing demands for their time to be spent at other places, in the classroom teaching library instruction sessions and all across campus in various committee meetings or at library outreach events" (p. 308).

Similarly, Seeholzer (2013) explained that "Librarians and staff working at these service points were… asked to assist with coordinating new public programing, teaching more library instruction sessions, and assisting with weeding projects. The additional projects often proved taxing on the service model set up at the circulation and reference desks" (p. 216). Other librarians mentioned additional duties such as increased publishing and grant writing (Bracke et al., 2007), digital content management (King & Christensen-Lee, 2014), and staffing new library locations (Faix et al., 2010). Blonde (2006), Bracke et al. (2007), Faix et al. (2010), Faix (2014), Stevens (2013), and others have examined the cost effectiveness of using professional librarians to staff reference services and found that it is more cost effective to use alternative staffing at public service desks. Ryan's (2008) reference transaction cost analysis made the argument that "librarians can leave answering most questions to others and can now concentrate on working on tasks that better utilize their training and experience" (p. 399). In many cases, studies have concluded that a tiered reference model, which usually involves some combination of professional librarians, library staff, graduate students (especially at those universities that have MLS graduate students) and undergraduate students, is an effective alternative (Brenza et al., 2015; Stevens, 2013). Brenza et al. (2015) found that this is a more appropriate use of staff in general and argued that academic libraries should "not [use] master's-level staff members to perform work that is appropriate for undergraduates" (p. 725).
Tiered reference services staffing with undergraduate students The use of tiered staffing models that engage undergraduate students in reference services is not new. Faix (2014) found that “As academic libraries in the late 1990s and 2000s began to build information commons and to merge separate help desk into single service points with multiple functions, employing undergraduate student workers to provide this basic reference assistance in a tiered system has become more and more common” (p. 306). Bodemer (2014) traced it back even further, pointing out that libraries have been trying since the 1970s to engage undergraduates in reference (p. 165). Much of the literature is highly positive regarding the use of students at the reference desk. Bodemer (2014) concluded that “This is an optimal time to employ undergraduates for reference” (p. 168) and that “The case can be made that trained undergraduates are optimal for providing peer reference” (p. 163). Faix et al. (2010) concurred, stating, “Undergraduate students are not only capable but perhaps optimal at providing high-quality reference service to their peers” (p. 90–1). Her research found that “The undergraduate RSAs proved to be ideal reference providers. Not only did they have the advantage of being familiar with the particular courses students were involved in… but they also demonstrated a strong desire to present a knowledgeable face to their peers. They took their positions, and the responsibilities they entailed, very seriously” (Faix, 2014, pp. 101–2). Stevens (2013) noted the benefits to those patrons whose needs were well-met by undergraduate student assistance, allowing increased service hours and “circumventing time and space limitations” (p. 211). Other researchers found that undergraduate students improved the public service capacity in perhaps unexpected ways. Brenza et al. (2015) posited that in addition to providing key library services, students were shaping users' perception of libraries (p. 724). Brenza et al. also proposed that academic libraries think of student assistants as “ambassadors” of the library, stating “The value of student assistant extends well beyond the fulfillment of their duties as library employees. Specifically, they can bring positive or negative attitudes about the library to their friends and classmates,” which would affect future library patronage (p. 722). “The employment of student workers may be a great way to introduce the library to its users, encouraging them to make contact and then referring them, if necessary, to other professionals who can assist them with their research needs” (p. 726). Furthermore, Faix (2014) found that student patrons often preferred getting help from student employees (i.e. peer assistance) (p. 306). Similarly, Bodemer (2014) argued that “Academic libraries would be remiss in not seeking to harness peer learning dynamics to enhance student learning and success” (p. 162). Bodemer posited that students could communicate with peers in ways that librarians could not (p. 176), while Faix (2014) concluded that library users wanted their needs met and weren't concerned about who helped them meet those needs (p. 309). Brenza et al. asserted that because student reference assistants often are the first impression that users have, they greatly affect users' overall experience (p. 723), which appears to validate Bodemer's idea that “Peer reference… providers can create contiguity between student 470
life as lived and library resources and service and can leverage cognitive and affective learning benefits by virtue of being peers” (Bodemer, 2014, p. 176). Several studies found that hiring undergraduate student assistants in the library benefits the library itself, librarians, and student employees alike. Brenza et al. (2015) found that employing students may increase library usage (p. 726). Faix et al. (2010) pointed out that employing students “improved patrons' relations both inside and outside the library. They became ambassadors between the library and their classmates by sharing their knowledge of library services” (p. 101). Of course, a tiered reference model can benefit librarians simply by giving them time to focus on other aspects of their work. Stevens (2013) concluded that their tiered program let librarians be more efficient and cost-effective in time management: “Rather than sitting and waiting at a desk where the majority of questions could be effectively handled by students, librarians are instead able to focus their time on projects that require their experience and expertise” (p. 211). In some cases, researchers have reported that reference librarians have expressed mixed feelings about relinquishing their time providing reference services. In answer to this, Faix (2014) concluded that a tiered reference structure frees up librarians to handle more technical or [specialized] assistance, rather than field all questions that come at the reference desk (p. 307). Bodemer (2014) determined that a librarian's role is to train and teach [student employees] to handle students' questions, not to field and answer every question students may have (p. 176). Faix (2014) recommended librarians to consider peer reference to be another of their instructional responsibilities, with [student workers] learning how to assist other students (pp. 307–8). There may even be a benefit to morale for librarians who work with undergraduate students in a reference capacity. Faix et al. (2010) noted that librarians experienced improved morale, because it was fun to work with students. Stevens (2013) also found that working with students allowed librarians to develop meaningful, long-lasting relationships with students, which suggests the possibility for improved engagement from librarians (p. 211). Other benefits of working with library student employees include the ability to get immediate student feedback and the opportunity to see things from a student point of view. Undergraduate student employees benefit from being included in reference services, as well. Faix et al. (2010) found that student employees at the library become better researchers (p. 100). Similarly, Brenza et al. (2015) found that the benefits to student reference assistants included: higher student academic performance, higher student retention and graduation rates (p. 724).
Assessing chat reference quality Several libraries have assessed chat reference based on a variety of criteria. In 2006, Zhuo, Love, Norwood, and Massia conducted a review of 100 chat reference transcripts using the RUSA Guidelines as a benchmark. Van Duinkerken, Stephens, and MacDonald (2009) analyzed chat transcripts to determine answerer adherence to RUSA Guidelines, with an emphasis on the reference interview. In addition to examining answerer behavior, the researchers looked for evidence of patron satisfaction in the transcripts. A 2010 study by Maximiek, Rushton, and Brown looked at 284 chat reference sessions, analyzing them for user demographics, traffic patterns, and quality, including answerer compliance with observable RUSA Guidelines. Logan and Lewis (2011) used the RUSA Guidelines to create a set of questions used to review 30 chat reference transcripts. Among the quality criteria were correct grammar, courtesy, provision of instruction, and referrals. This information was then used to develop a focused training and assessment program. Lux and Rich (2016) reviewed 300 chat reference transcripts to assess the performance of undergraduate student employees at their institution. Their study used guidelines similar to the RUSA Guidelines to evaluate chat sessions answered by librarians and students. These elements included greeting, closing, reference interview, instruction, building rapport, completeness, correctness, and referrals. Zhuo et al., found that 65% of transactions show evidence of instruction, and in 90% of cases, answerers were clear and concise (p. 86). Areas for improvement included initial response time, engaging a reference interview, and following up with patrons (p. 87–88). Van Duinkerken et al. identified that answerers were weak in terms of identifying patron goals, as well as in asking open- and closed-ended questions. Despite these limitations, the authors found evidence of patron satisfaction in 82% of transcripts. One conclusion of the study was that the RUSA Guidelines do not account for time as a limiting factor in chat reference, and that because of this, many of the standards are not applicable in many chat reference transactions (p. 117). Maximiek et al. found that 80% of the chat transcripts showed evidence for approachability, showing interest, and listening where necessary (p. 367). The authors concurred with Van Duinkerken et al. in regards to focusing on RUSA Guidelines that are applicable to a chat environment (p. 368). Logan and Lewis concluded that the individuals who staff a chat reference service should be included in the process of setting and evaluating the service standards (pp. 226–227). Lux and Rich found that librarians outperformed students in all areas, especially when identifying known items and providing referrals (p. 133). However, students did well in most areas examined, and the authors concluded that students were able to provide excellent chat reference service if trained appropriately (p. 134).
Tiered chat reference staffing with undergraduate students

If much has been written on the topic of staffing reference services with undergraduate students, the literature regarding chat reference staffing models is equally large, and much of it is outside the scope of this study. Thus far, however, little has focused specifically on the use of undergraduate students to staff chat reference services. Several studies tried to determine whether a percentage of queries might be answerable by undergraduate students. Bravender, Lyon, and Molaro (2011) categorized chat reference using the Reference Effort Assessment Data (READ) Scale to determine staffing needs, and concluded that approximately 34% of queries ranked at READ Scale 1 or 2, which the authors deemed answerable by student employees, with only 23% weighing in at a READ Scale 4 or 5, the level at which reference librarian expertise was considered necessary (p. 122). Maloney and Kemp (2015) cautioned against replacing librarians too quickly, noting an increase in question complexity at their institution after implementing a proactive chat reference service, with queries that required a librarian increasing from 15% of chat transactions to 27% (p. 970).

Less is known about whether undergraduate students are capable of providing high quality chat reference services. Bodemer's (2014) review of chat transcripts showed they were "knowledgeable and congenial" (p. 171). Langan (2012) reviewed undergraduate student chat reference transcripts at her institution for adherence to the RUSA Guidelines, and found that they fell short on the standards listed under 2.0 Interest (p. 30). Based on this information, a training program was developed to target these areas, which succeeded in developing student employees' ability to interact professionally. Lux and Rich (2016) reviewed a heuristic assessment of 300 chat reference transcripts, reporting on answerer behaviors in undergraduate students and librarians. The authors analyzed features including the presence or absence of a greeting, closing statements, reference interviews, referrals, and several other criteria, and they concluded that although librarians outperformed student employees on most of these criteria, they did not do so by a meaningful margin (p. 133).
Research question

This paper analyzes chat transcripts to assess and compare the quality of the interaction when chat questions were answered by professional librarians, staff, and students to answer the research question:
“How much and in what ways does the quality of the library's chat service vary by answerer type?” Quotes from transcripts used in this article have been modified for brevity, clarity, and/or to protect the privacy of patrons and library staff.
Background
Our institution is a metropolitan research university with nearly 24,000 enrolled students. Like many academic libraries, our Library uses chat reference (i.e. virtual reference) as one of its ways to provide support to its users. The Library first started providing chat reference in 2001. From 2001 to June 2014, the Library used QuestionPoint, a virtual reference management system, participating in its 24/7 Reference Cooperative. The Library staffed QuestionPoint chat reference entirely with professional reference librarians. Library patrons had access to the chat service 24 h per day, seven days per week, though would often be chatting with a librarian from another academic institution or one of the QuestionPoint staff librarians. In July of 2014, the Library changed platforms to Springshare's LibChat. This change did unfortunately mean that the Library could no longer provide 24/7 chat reference service, and it was decided that chat would be staffed during all Library hours, approximately 103 h per week at the time. The Library's ten Research & Instruction Services (IRS) unit professional librarians shared a combined commitment to providing reference service for 77 of these hours, and library administrators determined that adding increased coverage was not an efficient use of resources. For this reason, it was decided that staff from the Access Services Unit would be trained to offer chat services, with some queries being referred to IRS librarians as needed, especially for difficult reference questions. In addition, IRS librarians shifted from staffing dedicated chat reference shifts to staffing the service at the reference desk, with double staffing during peak hours, and with scheduled backups signed in from their offices. In the beginning of the fall 2015 semester, four undergraduate student employees who already worked for the Access Services Unit were trained to answer chat reference. This group met with the Reference Services Coordinator for a similar training curriculum as that offered to staff, but with a more extensive best practices component and an emphasis on proper referrals. This program has since been expanded to include undergraduate students working in Access Services or as computer lab assistants, expanding the pool of possible answerers further.
Methods

For this study, the authors drew on the literature related to using observable answerer behaviors based on the RUSA Guidelines to assess chat reference services (RASD Ad Hoc Committee, 2013). This choice was made because the RUSA Guidelines are used for best practice training at our institution in addition to being a commonly used set of reference standards for academic libraries nationally.
Preparation

Prior to beginning the project, the researchers submitted a proposal to the University Institutional Review Board for Human Subjects. Once the proposal was approved, automatically captured chat reference transcripts from May 2014 through September 2016 (3700 transcripts) were downloaded into an Excel spreadsheet, along with associated metadata. The data was then processed through a series of steps.

1. The data was formatted as a table such that metadata elements could be included, excluded, and sorted.
2. Many chat sessions were removed from the sample. Excluded transcripts included those that were answered by either of the two researchers; transcripts that were cut off due to a software error; and chat sessions that were initiated internally for training or testing purposes. After this initial screening, 3028 transcripts remained in the sample.
3. Metadata not relevant to the study were then removed from the spreadsheet. Remaining data fields included:
   • ID: a unique number identifying each chat event.
   • Initial Question: information typed into the chat widget by the patron prior to connecting with an answerer.
   • Answerer: the name of the library staff person answering the question. (This terminology was chosen because it is used by the chat software.)
   • Rating: patron ranking of the interaction on a scale of 1–4, if offered.
   • Comment: patron comments for the interaction, if offered.
4. An Answerer Type column was used to label each answerer by status, as shown in Table 1. The Answerer field was then removed in order to protect the privacy of individual library employees.
5. The transcripts were scrubbed of other personally identifying data for both patrons and library employees using a series of find and replace operations. Names of patrons were replaced with the word "patron;" names of library employees were replaced with the word "answerer."
6. A random number generator was used to select 15% of the transcripts for each answerer type, for a total of 454 transcripts to review (70 Staff; 33 +Staff; 231 LibR; 38 LibNR; and 81 Student). These transcripts were coded with the code RS (random set).
7. The table was filtered to hide excluded transcripts and those not coded for random sampling.
8. The Answerer Type, Rating, and Comment columns were hidden during the coding process in order to mitigate potential coder bias.
9. Remaining visible data was exported to a new spreadsheet that included only the data to be analyzed (excluding transcripts not in the random sets and hidden columns).
10. After the coding of transcripts was completed, the Answerer Type column was added to the second spreadsheet for purposes of analysis. The unique ID for each transcript was used to ensure data integrity.
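The steps above were carried out by hand in a spreadsheet. Purely as an illustration of the same workflow, the following sketch scripts a comparable preparation; the file names, column names, and name lists are assumptions, not the library's actual data or tooling.

```python
# Illustrative sketch only: the authors did this preparation manually in Excel.
import pandas as pd

transcripts = pd.read_excel("chat_transcripts.xlsx")  # assumed export with ID, Initial Question, Answerer Type, Transcript columns

# Step 5: scrub personally identifying names with find-and-replace.
# The name lists are placeholders for the real rosters of employees and patrons.
employee_names = ["Jane Librarian", "Joe Staffer"]
patron_names = ["Pat Patron"]
for name in employee_names:
    transcripts["Transcript"] = transcripts["Transcript"].str.replace(name, "answerer", regex=False)
for name in patron_names:
    transcripts["Transcript"] = transcripts["Transcript"].str.replace(name, "patron", regex=False)

# Step 6: draw a 15% random sample within each answerer type and flag it as the random set (RS).
random_set = (
    transcripts.groupby("Answerer Type", group_keys=False)
    .apply(lambda group: group.sample(frac=0.15, random_state=1))
)
transcripts["RS"] = transcripts["ID"].isin(random_set["ID"])

# Steps 8-9: export only the sampled rows and the columns the coders should see,
# leaving out Answerer Type, Rating, and Comment to reduce coder bias.
random_set[["ID", "Initial Question", "Transcript"]].to_excel("coding_set.xlsx", index=False)
```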
Table 1. Answerer Type codes.

Code | Definition | # Transcripts
LibR | Librarian working in the reference department | 1544
LibNR | Librarian from another library unit (many librarians from other library units help at a reference desk) | 251
Staff | Library staff without an MLS | 466
+Staff | Library staff with an MLS | 222
Student | Student employee | 545

Coding schema

Coding was based on four groups of criteria: difficulty of the query, answerer behaviors, problems with the transcript or answer, and a free-text notes field for coders to add comments. The READ Scale was used to rank transactions by level of difficulty. The RUSA Guidelines (RASD Ad Hoc Committee, 2013) were used as a starting point to identify observable answerer behaviors. Some standards, such as "3.1.1 Communicates in a receptive, cordial, and supportive manner," were broken into several behaviors, while others were not used for the purposes of this study. The coding schema and associated RUSA Guidelines are outlined in Table 2.

Our choice of behaviors to assess was additionally guided by the chat reference answerer behaviors most strongly correlated with patron satisfaction in a 2007 study by Kwon and Gregory (Table 3). A mapping of the behaviors chosen to Kwon and Gregory's (2007) study is also given in Table 3. One of the standards, Referrals, did not positively correlate with user satisfaction, perhaps because users generally prefer immediate answers. The standard was included in the current study because the authors believe it to be important for providing excellent reference service. For those criteria listed in Answerer behaviors, the UN code was used when the behavior was not required during the chat session, as well as when the transcript was cut off and did not include the information needed to code the criterion but enough information remained to code most of the other criteria.

Initially, a Closing category was included in addition to the Sign Off coding category. However, the authors combined the two, with the understanding that any form of cordial goodbye would suffice to make patrons feel welcome to return to the service. Transcripts flagged as problems were then sorted into categories for further analysis (see Table 4). These categories were developed via an inductive process wherein the coders met to discuss problem transcripts and identify the issues.
Table 2. Coding criteria and definitions.

Group | Category | Associated RUSA guideline | Definition | Codes
Difficulty | READ# | (none) | READ Scale rating | 0–5
Answerer behaviors | Answered? | 4.1.10 Asks the patron if additional information is needed after results are found. 5.1.1 Asks the patron if his/her question has been completely answered. | Asked the patron if you answered their question or if they have any other questions. | Yes / No / Unnecessary
Answerer behaviors | Clear | 3.1.6 Seeks to clarify confusing terminology and avoids jargon. | Defined terms and avoided jargon; answer is written at a level appropriate to the audience (e.g. not too much information for beginners). | Yes / No / Unnecessary
Answerer behaviors | Courtesy | 3.1.1 Communicates in a receptive, cordial, and supportive manner. 3.1.2 Uses a tone of voice and/or written language appropriate to the patron and nature of the transaction. | Polite and professional communication (e.g. showing interest in the patron's topic, using patron names). | Yes / No / Unnecessary
Answerer behaviors | Grammar | 3.1.1 Communicates in a receptive, cordial, and supportive manner. | Uses (mostly) correct grammar and spelling in answers. | Yes / No / Unnecessary
Answerer behaviors | Greeting | 1.1.3 Acknowledges patrons by using a friendly greeting to initiate conversation. | Opens the chat transaction by greeting the patron. | Yes / No / Unnecessary
Answerer behaviors | Instruction | 4.1.3 Explains the search strategy to the patron. 4.1.7 Explains how to use sources when appropriate. | Provides instructions for how to complete the tasks necessary for the query to be answered. | Yes / No / Unnecessary
Answerer behaviors | Referral | 4.1.9 Recognizes when to refer patrons for more help. 5.1.6 Refers the patron to other sources or institutions if the query has not been answered to the satisfaction of the patron. | Included a referral when necessary. | Yes / No / Unnecessary
Answerer behaviors | Searching | 4.2.1 Accompanies the patron in the search… unless the patron prefers to conduct the search him/herself. 4.3.1 Uses appropriate technology to help guide the patron through information resources, when possible. | Doing a search for or instructing a patron to do a search during the chat transaction. | Yes / No / Unnecessary
Answerer behaviors | Sign off | 3.1.1 Communicates in a receptive, cordial, and supportive manner. 5.1.1 Encourages the patron to return if he/she has further questions by making a statement such as "If you don't find what you are looking for, please come back and we'll try something else." | Acknowledge the close of the chat conversation. Ideally, tell the patron to return if they need further assistance. | Yes / No / Unnecessary
Answerer behaviors | Sources | 4.1.8 Offers pointers, detailed search paths, and names of resources used to find the answer… | Include a link when referring to an online resource; tell people where you found any information that you provide. | Yes / No / Unnecessary
Problems | Not viable | (none) | Transcripts that were cut off to the point that coding was not possible, and which were not identified during the initial selection process. | X to indicate
Problems | Transcript issue | (none) | Transcripts that were cut off, but partial coding was possible. | X to indicate
Problems | Problem | (none) | Especially problematic answers were flagged by the coder. | P to indicate

Table 3. Behaviors correlated with user satisfaction at p < 0.025 (Kwon & Gregory, 2007).

RUSA behavior area | Answerer behaviors, Kwon and Gregory (2007) | Answerer behaviors, current study
4. Searching | Offering pointers; Searching | Instruction; Sources; Searching
5. Follow-up | Answered?; Come back | Answered?; Sign off

Table 4. Problem codes.

Code | Definition
Effort | The answer was too quickly referred or would have benefitted from further follow-through by the answerer.
Incomplete | The answer was correct, but did not include important information. Answers were not generally checked for completeness; this code was applied when the answer contained a glaring omission.
Incorrect | The answer was partially or wholly incorrect. Answers were not generally checked for accuracy; this code was applied when the answer was obviously incorrect.
No reference interview | The patron query implied a deeper question, discoverable through a reference interview.
Should have asked for help | The answerer lacked the technical or subject-specific knowledge and should have referred the question or requested help.
Table 5. Percent agreement and Cohen's kappa value.

Variable | % Agreement | Cohen's Kappa
READ Scale | 93.20% | 0.84
Answered? | 95.30% | 0.94
Clear | 94.60% | 0.86
Courtesy | 95.40% | 0.90
Grammar | 87.50% | 0.84
Greeting | 98.30% | 0.96
Instruction | 98.50% | 0.97
Referrals | 96.30% | 0.90
Sign-off | 87.50% | 0.95
Searching | 97.30% | 0.97
Sources | 95.10% | 0.93
Table 6. Summary results. The percentages within each answerer group are given for each question. The p-value (p) given for each question is for the chi-square test of association between answerer group and rating (after eliminating those transcripts rated UN for the question). p-values < 0.05, marked with an asterisk, indicate statistically significant associations.

Question | Librarians: N (% yes) | Staff: N (% yes) | Students: N (% yes) | p
Answered? | 226 (31.4%) | 98 (34.7%) | 67 (23.9%) | 0.327
Courtesy | 242 (76.4%) | 101 (88.1%) | 70 (72.9%) | 0.031*
Greeting | 241 (61.0%) | 102 (70.6%) | 70 (72.9%) | 0.082
Referral | 60 (40.0%) | 28 (46.4%) | 21 (47.6%) | 0.765
Sign Off | 242 (58.7%) | 100 (57.0%) | 70 (44.3%) | 0.099
Clear | 241 (78.8%) | 99 (81.8%) | 68 (82.4%) | 0.726
Grammar | 245 (98.4%) | 101 (90.1%) | 70 (72.9%) | 0.001*
Instruction | 143 (50.3%) | 58 (53.4%) | 43 (45.5%) | 0.788
Searching | 160 (81.3%) | 67 (89.6%) | 44 (70.5%) | 0.040*
Sources | 160 (63.8%) | 64 (62.5%) | 48 (43.8%) | 0.041*
Coding

The two authors coded the data independently. A Cohen's kappa was generated for each variable in order to assess interrater reliability. The Landis and Koch interpretation of Cohen's kappa was used to determine the degree of agreement (Landis & Koch, 1977), with all variables demonstrating very good agreement (kappa > 0.8). Table 5 shows the percentage of agreement and kappa values for each variable. After coding, data for variables for which the coders disagreed were removed from the analysis. Data for other variables for the affected transcripts were still analyzed. Associations between categorical variables were analyzed using the chi-square test, or, if some of the expected counts were small, Fisher's exact test. The statistical significance level was chosen to be 0.05. Problem codes were discussed by the two researchers, and thus no test of interrater reliability was conducted for this variable.

At the close of a chat transaction, the chat software asks patrons to rate their experience as great, good, so-so, or bad. In addition to a rating, patrons are invited to leave a comment. Comments from the random sample were assigned categories based on content and tone. Content categories included Answer/Answerer, Technical, Chat Service, and General. Comments were additionally analyzed for emotional tone, with the options of positive, negative, or neutral. A comment could be assigned more than one content and/or tone category.
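As a rough illustration of the reliability and association tests described in this section (not the authors' actual analysis code), the sketch below uses scikit-learn and SciPy. The paired coder ratings are invented, and the Grammar counts are back-calculated, with rounding, from the N and percent-yes values in Table 6.

```python
# Illustrative sketch of the tests named above; coder lists are hypothetical.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency, fisher_exact

# Interrater reliability for one coded variable (hypothetical paired codes from the two coders).
coder_a = ["Yes", "Yes", "No", "Unnecessary", "Yes", "No", "Yes", "Yes"]
coder_b = ["Yes", "Yes", "No", "Unnecessary", "No", "No", "Yes", "Yes"]
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above 0.8 were read as very good agreement

# Chi-square test of association between answerer group and the Grammar rating.
# Counts are derived (and rounded) from Table 6: N and % yes per group.
grammar_counts = [
    [241, 4],   # librarians: yes, no
    [91, 10],   # staff: yes, no
    [51, 19],   # students: yes, no
]
chi2, p_value, dof, expected = chi2_contingency(grammar_counts)
print(f"Grammar: chi-square p = {p_value:.4f}")

# Fisher's exact test was used instead when expected counts were small (2x2 case shown).
odds_ratio, p_small = fisher_exact([[12, 3], [4, 9]])
print(f"Fisher's exact p = {p_small:.4f}")
```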
Results

Overview

Of the 454 transcripts reviewed, 68, or 15%, were deemed not viable due to the transcript being cut short, without leaving enough data for partial coding. The data was analyzed based on three answerer groups: librarians, staff, and students. Staff comparisons were included in the analysis in order to highlight answerer type differences, though this was not the focus of our study. Table 6 summarizes the results. A Pearson's chi-square test was used to analyze each variable and answerer grouping to check for significance. The UN ratings were removed from this analysis.
Answered?

Overall, 31% of the transcripts showed evidence of asking patrons if their query was answered or if they needed further help. Librarians and staff each exhibited this behavior in about a third of the transcripts, while students did so in just under one fourth of the transcripts coded. However, no significant association was found between answerer type and this behavior (p = 0.327).
Clarity

79% of the transcripts were deemed clear and jargon-free, with a similar rate of adherence for the three answerer types. No significant association was found between clarity and answerer type (p = 0.726).

Courtesy

81% of the coded transcripts were deemed courteous. A significant association was demonstrated between courtesy and answerer type (p = 0.031), with staff being the most courteous (88%), followed by librarians (76%), and finally students (73%).

Grammar

Overall, 92% of the transcripts were grammatically correct. There was a significant association between answerer type and grammatical correctness (p = 0.0001). Librarians were most likely to use good grammar, with 98% of the transcripts coded meeting the criteria. Staff also fared well, with 90% adherence. Students used good grammar in 73% of the transcripts coded.

Greeting

There was moderate but not significant evidence for an association (p = 0.082) between greeting patrons and answerer group. Students and staff were more likely to greet patrons (73% and 71% respectively), while librarians were least likely, with only 61% of the transcripts coded containing a greeting. Overall, 65% of the transcripts were deemed to include a greeting.

Instruction

Instruction was judged to be necessary for 244, or 59%, of the chat transcripts analyzed. Of those, answerers provided instruction in 50% of the transcripts. No significant association was found between the provision of instruction and answerer type (p = 0.788).
Referral
27% of all transcripts would have benefitted from a referral. Overall, 43% of the transcripts deemed as needing a referral actually included a referral. No significant association was found between the provision of a referral and answerer type (p = 0.765).
Searching

Searching for the patron was deemed necessary in 65% of the transcripts coded. Of those transcripts, evidence of searching was found in 82% of transcripts. There was moderate, but not significant, evidence for an association between searching for the patron and answerer group (p = 0.099), with staff being most likely to do so at 90%, followed by librarians at 81% and students at 71%.
Sign off
Moderately strong, but not significant, evidence was found for an association between answerer type and signing off (p = 0.099). Librarians were most likely to sign off (59%), followed by staff (57%) and students (48%). Overall, 56% of the transcripts included a sign off.
Fig. 1. Patron ratings: Great 79 (77%); Good 13 (13%); So-So 8 (8%); Bad 2 (2%).
Sources
In 67% of the transcripts coded, the patron's query required one or more sources. Of those, answerers offered a source for the information provided in 60% of the transcripts. Provision of sources was significantly associated with answerer group (p = 0.041), with librarians being most likely to provide sources (64%), followed closely by staff (63%). Students were the least likely to provide sources (44%).

Answers with problems
The researchers flagged transcripts wherein the answers contained serious problems not otherwise coded for. After coding was complete, the problem transcripts were analyzed and problem types were identified and categorized. Overall, 64 transcripts (15%) were flagged with problems, which fell into five categories: effort, incomplete, incorrect, no reference interview, and should have asked for help. Though the numbers are too low to identify any associations, problems spanned all answerer groups, appearing in 12% of librarian, 13% of staff, and 14% of student transcripts. 24 transcripts (6%) were flagged as containing incorrect information. 21 transcripts (5%) were coded as having problems with effort, which often took the form of referring patrons before making an effort to help the patron. 9 transcripts (2%) were incomplete. 6 transcripts (1%) were flagged for lack of a reference interview. For 2 transcripts (1%), the answerer should have asked for help from another staff person.
Fig. 2. READ Scale distributions.
Patron ratings

102, or 24%, of patrons provided a rating of their chat experience before closing the transaction. As shown in Fig. 1, 90% of rated transcripts were rated as great or good. Ratings were analyzed using a Fisher's exact test, with no significant association found between rating and answerer type (p = 0.579).
READ Scale distributions

The distribution of READ Scale rankings assigned to each transcript is shown in Fig. 2. The analysis revealed that half of the queries are level 0–2, the level that clearly can be addressed by undergraduate students. Nearly 40% of all queries were ranked at a level 3, while just 11% were levels 4 or 5. 83% of the chat transactions were initiated while the reference desk was staffed, with a fairly even distribution of READ Scale difficulty, as shown in Fig. 3.
Discussion

Kwon and Gregory's (2007) study reported a significant association between patron satisfaction and being asked if their question was answered. That only 31% of the transcripts coded included this component indicates room for improvement. Clarity is another area that could be improved across answerer types. For transcripts deemed unclear, the issue was often with vocabulary, such as using "database" in place of a more descriptive term such as "article database," or the acronym "ILL" in place of "Interlibrary Loan." Transcripts coded for lack of courtesy were abrupt, and did not demonstrate attentiveness to interpersonal connection.
Fig. 3. Reference desk staffing and READ Scale rankings.

Fig. 5. Referral comparison.
It may be that librarians lagged behind students and staff in measures of courtesy due to a tendency to be more focused on the task of answering the patron's query, and less on the socio-interpersonal content of their communication. For example, in the following partial transcript, the answerer provided factual information that the patron requested, but the tone is impersonal and robotic due to a lack of linguistic cues to replace verbal and nonverbal features of in-person communication.

Patron: I knew Joe Smith many years ago. I am looking for his e-mail address. Can you help?
Answerer (Librarian): Is Joe Smith connected with our University?
Patron: he wrote thesis on hawks.
Answerer (Librarian): That was in 1995. Mr. Smith is no longer connected with our University so we do not have a current email.
Patron: maybe an old e-mail address?
Answerer (Librarian): I would suggest you do an Internet search and see if he is currently employed.
Answerer: No, we don't have any sort of an email address in the system.
Patron: how does that work?
Answerer (Librarian): He graduated so long ago, that we've changed our email system twice in the meantime.
Patron: how do I check employment status?
Answerer (Librarian): I would do an internet search.
In comparison, the following partial transcript does use such linguistic cues:
Answerer (Librarian): No problem! You were probably on the UF100 guide which does list some databases.
Patron: Oh, I'm sorry - I did not mean too research UF courses specifically, but how to find out how to get back to the search engine we were shown how to use.
Answerer (Librarian): Unfortunately, I don't know exactly which one(s) you were shown in class. If you go back to the library's home page, you can click on the "Articles & Databases" tab and see all of our available databases. Many of our students start out with the Academic Search Premier database.
The exclamation of "No problem!" in this example serves to make the transaction less formal, while "Unfortunately" is a sympathetic word, and tempers the information provided in the clause that follows. Lux and Rich's (2016) study analyzed the behavior of "strived toward building rapport," which they defined as "the art of conveying friendliness in a textual environment," noting the difficulty of judging this criterion objectively (p. 127). Their findings for librarian answerers showed similar results, with answerers making a definite or minimal effort toward this end 78% of the time. In contrast, transcripts answered by student employees at our institution were judged courteous 83% of the time, while Lux and Rich found that only 62% of students made an effort to convey a friendly demeanor (p. 127). Logan and Lewis (2011) reported similar results to this study and to Lux and Rich (2016) for librarians, judging 83% of the chat transactions to be courteous in tone.

It is unsurprising that librarian and staff transcripts were coded higher for grammatical correctness than those of student employees, most of whom have less experience with the formality of professional communications. Where good grammar was lacking, it was frequently in the omission of capital letters and punctuation, errors commonly associated with text message and instant messaging lingo. This suggests that student employees may be engaging in less formal rhetoric due to the format of the interaction. In addition, student answerers may still be developing the subtle communication skills required to maintain a professional tone while engaging with peers (other students).
Fig. 4. Instruction comparison.
In contrast to our University, Lux and Rich (2016) found that librarians at their institution provided a greeting more frequently than students, 69% to 50%, respectively (p. 123). Conversely, librarians at our institution greeted the patron 60% of the time while students greeted patrons in 72% of transcripts. The rate for provision of instruction was low for all three answerer groups. It is interesting to note that when students offered instruction, they frequently did a very good job of breaking down processes into steps and explaining them, as shown in the transcript below.
Patron: What keywords can I use to narrow my search about managing a business in agriculture so that it excludes information about just management or just agriculture?
Answerer (Student): I recommend using AND between the terms to see if that will work. For example: management AND agriculture. If that doesn't work, you could also use quotation marks around the whole term you would like to use, i.e. "Agriculture Management" If you're looking for articles with specific words, sometimes those tricks help when you have specific terms you can use. Did you need further ideas on what terms you can use as well?
Surprisingly, librarians only provided instruction 50% of the time that the interaction required it. This could be because reference librarians are juggling more than one interaction, such as in-person reference desk assistance; due to a perception that instruction is not desired by chat reference patrons; due to the difficulty of providing instruction via chat; or, most likely, some combination of factors. Fig. 4 illustrates findings from other studies. Lux and Rich (2016) similarly found that 48% of librarians provided instruction. However, studies by Fuller and Dryden (2015) and Logan and Lewis (2011) found a higher rate of instruction among librarian answerers, with 95% and 60% respectively. Lux and Rich (2016) found a lower rate of instruction among student employees.

The low percentage of referrals provided when needed is another area for improvement. As Fig. 5 demonstrates, Lux and Rich (2016) and Fuller and Dryden (2015) found that over 80% of librarians provided a referral when one was needed. Lux and Rich (2016) found that 53% of student employees provided a referral when one was needed, similar to this study's finding of 50%.

Librarians and staff searched for the patron much of the time that it was necessary, while students did so less often. Despite the fact that this behavior was identified by Kwon and Gregory (2007) as significantly associated with patron satisfaction, the researchers believe that searching for patrons is only of benefit if the answerer has the requisite knowledge and skill, and so did not find the lower rate of doing so among student employees to be a deficit.
Conclusion

Our analysis of chat reference transcripts determined that undergraduate students can indeed provide capable chat reference services, comparable in quality and content to that of paraprofessional staff and professional librarians. Our data demonstrates a need for increased training for students regarding referrals, providing sources, and signing off. Our analysis also shows that students exceeded staff and librarians in greeting patrons, and librarians in maintaining a courteous tone, which indicates both that students are performing well and a need to reiterate chat reference best practices for librarians. Students also did better than anticipated at providing instruction. Given the changing role of the academic library and the resulting trend toward a tiered reference model, it is our conclusion that having well-trained undergraduate students staff chat reference is a viable, and even desirable, option.
"How much and in what ways does the quality of the Library's chat service vary by answerer type?" Librarians outperformed staff and students in grammar, signing off, and providing sources, though the margin between librarians and staff for signing off and providing sources was only 2%. Librarians also outperformed students in searching for the patron. Staff ranked above librarians and student employees for courtesy and searching, and above students in signing off and providing sources. Students provided greetings more frequently than librarians or staff, though they were lowest for signing off, searching, and providing sources.

Overall, the tiered staffing model works well. Patron ratings and comments indicate that users are largely happy with the service they received, and chat reference can be provided during hours that librarians are unavailable. However, chat transcript analysis demonstrates a need for increased training for all library employees who staff chat reference. 89% of chat reference queries ranked between 0 and 3 on the READ Scale. Focusing on the skills required to handle these questions will help to focus student training. Training students to recognize and make appropriate referrals for more difficult queries will help ensure good service for the remaining 11%.
Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
References

Blonde, J. (2006). Staffing for live electronic reference: Balancing service and sacrifice. In D. R. Lankes (Ed.), The virtual reference desk: Creating a reference future (pp. 75–87). New York: Neal-Schuman Publishers.
Bodemer, B. B. (2014). They CAN and they SHOULD: Undergraduates providing peer reference and instruction. College & Research Libraries, 75(2), 162.
Bracke, M. S., Brewer, M., Huff-Eibl, R., & Lee, D. R. (2007). Finding information in a new landscape: Developing new service and staffing models for mediated information services. College & Research Libraries, 68(3), 248–267. http://dx.doi.org/10.5860/crl.68.3.248
Bravender, P., Lyon, C., & Molaro, A. (2011). Should chat reference be staffed by librarians? An assessment of chat reference at an academic library using LibStats. Internet Reference Services Quarterly, 16(3), 111–127. http://dx.doi.org/10.1080/10875301.2011.595255
Brenza, A., Kowalsky, M., & Brush, D. (2015). Perceptions of students working as library reference assistants at a university library. Reference Services Review, 43(4), 722–736. http://dx.doi.org/10.1108/RSR-05-2015-0026
Faix, A. (2014). Peer reference revisited: Evolution of a peer-reference model. Reference Services Review, 42(2), 305–319. http://dx.doi.org/10.1108/RSR-07-2013-0039
Faix, A. I., Bates, M. H., Hartman, L. A., Hughes, J. H., Schacher, C. N., Elliot, B. J., & Woods, A. D. (2010). Peer reference redefined: New uses for undergraduate students. Reference Services Review, 38(1), 90–107. http://dx.doi.org/10.1108/00907321011020752
Fuller, K., & Dryden, N. H., II (2015). Chat reference analysis to determine accuracy and staffing needs at one academic library. Internet Reference Services Quarterly, 20(3), 163–181. http://dx.doi.org/10.1016/j.acalib.2008.03.006
Gerlich, B. K. (n.d.). The READ scale (reference effort assessment data). Retrieved from http://readscale.org
Hendricks, A., & Buchanan, S. (2013). From exhaustion to exhilaration: Assessing librarian job satisfaction with virtual reference. Library Hi Tech, 31(1), 42–63. http://dx.doi.org/10.1108/07378831311303921
King, V., & Christensen-Lee, S. (2014). Full-time reference with part-time librarians. Reference and User Services Quarterly, 54(1), 34–43. http://dx.doi.org/10.5860/rusq.54n1.34
Kwon, N., & Gregory, V. L. (2007). The effects of librarians' behavioral performance on user satisfaction in chat reference services. Reference and User Services Quarterly, 47(2), 137–148. http://dx.doi.org/10.5860/rusq.47n2.137
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
Langan, K. (2012). Training millennials: A practical and theoretical approach. Reference Services Review, 40(1), 24–48. http://dx.doi.org/10.1108/00907321211203612
Logan, F. F., & Lewis, K. (2011). Quality control: A necessary good for improving service. The Reference Librarian, 52(3), 218–230. http://dx.doi.org/10.1080/02763877.2011.557314
Lux, V. J., & Rich, L. (2016). Can student assistants effectively provide chat reference services? Student transcripts vs. librarian transcripts. Internet Reference Services Quarterly, 21(3–4), 115–139. http://dx.doi.org/10.1080/10875301.2016.1248585
Maloney, K., & Kemp, J. H. (2015). Changes in reference question complexity following the implementation of a proactive chat system: Implications for practice. College & Research Libraries, 76(7), 959–974. http://dx.doi.org/10.5860/crl.76.7.959
RASD Ad Hoc Committee on Behavioral Guidelines for Reference and Information Services (2013). Guidelines for behavioral performance of reference and information service providers. Retrieved from http://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
Ryan, S. M. (2008). Reference transactions analysis: The cost-effectiveness of staffing a traditional academic reference desk. The Journal of Academic Librarianship, 34(5), 389–399. http://dx.doi.org/10.1016/j.acalib.2008.06.002
Seeholzer, J. (2013). Making it their own: Creating meaningful opportunities for student employees in academic library services. College & Undergraduate Libraries, 20(2), 215–223. http://dx.doi.org/10.1080/10691316.2013.789690
Stevens, C. R. (2013). Reference reviewed and re-envisioned: Revamping librarian and desk-centric services with LibStARs and LibAnswers. Journal of Academic Librarianship, 39(2), 202–214. http://dx.doi.org/10.1016/j.acalib.2012.11.006
Van Duinkerken, W., Stephens, J., & MacDonald, K. I. (2009). The chat reference interview: Seeking evidence based on RUSA's guidelines. New Library World, 110(3/4), 107–121. http://dx.doi.org/10.1108/03074800910941310