A Survey of the Usability of Digital Reference Services on Academic Health Science Library Web Sites by Cheryl Dee and Maryellen Allen
Reference interactions with patrons in a digital library environment using digital reference services (DRS) have become widespread. However, such services in many libraries appear to be underutilized. A study surveying the ease and convenience of such services for patrons in over 100 academic health science library Web sites suggests that cumbersome access and difficulty of use may be a key restraining force.
INTRODUCTION
Cheryl Dee is Assistant Professor, School of Library and Information Science, University of South Florida, Tampa, FL 33620, USA; Maryellen Allen is Assistant Librarian, Tampa Library, University of South Florida, Tampa, FL 33620, USA
[email protected].
The proliferation of varying forms of interactive electronic reference media, commonly referred to as ''digital reference services'' (or DRS), has been the benchmark of quality service for most libraries for a number of years now. According to Bernie Sloan, Senior Library Information Systems Consultant with the University of Illinois Office for Planning and Budgeting, ''Digital reference services may be defined as the provision of reference services, involving collaboration between library user and librarian, in a computer-based medium. These services can utilize various media, including e-mail, Web forms, chat (including instant messaging services), video, Web customer call center software, Voice over Internet Protocol (VoIP), etc.''1 However, as the implementation of digital reference services reaches ubiquity, concerns over usability consistently top the list of issues discussed by developers and designers of digital information environments in academic libraries. The considerations over what kind of services to offer (i.e., chat, e-mail, or both) combined with additional uncertainties regarding their future create challenging decisions for librarians. Perhaps the source of much trepidation lies in the very real possibility for expansion of existing services and the implications involved with increasing the scope and outreach of digital reference services (DRS) as these services develop. In tandem with the ''coming of age'' of DRS, librarians have witnessed a parallel maturation of the general online environment of the virtual library characterized by a certain degree of understanding and expectation on behalf of users over the types and formats of online assistance services provided by their libraries. Indeed, most users of online library services expect a wide range of information resources, preferably in full-text, and the ability to contact someone (usually by telephone or e-mail) when they require help using those resources. Usability, that is, how easy-to-use and intuitive end-user interfaces are, becomes the developmental keystone of a successful online library presence, simply because if users cannot access the information easily, there is little point in expending the enormous amount of effort and expense it takes to provide digital reference. However, libraries are finding that
as accessible as their online resources appear to be, patrons still require substantial assistance in effectively utilizing these resources. The problem of providing effective assistance is often exacerbated by patrons who are increasingly located at a distance from the librarian (i.e., distance learners or remote users) in combination with the rapid increase in the number of discrete electronic proprietary (i.e., subscription-based) resources now at the patron's disposal both on campus and off. The dizzying array of potential information sources has driven users of library services to seek assistance in greater depth, and from a wide variety of locales. Thus, the rise of digital reference services.

Consider the area where concerns over usability intersect the library's efforts to implement an effective digital reference service. Factors such as link visibility, position, and the language used to label the service all contribute to the success or failure of a library's DRS. If one takes into account the rules of usability theory such as those defined by Jakob Nielsen2 and applies them strictly to DRS, suddenly an entirely new tier of usability concerns emerges, related to the greater sphere of Web site usability but with a few new facets. Indeed, most of the efforts involved with Web site usability in libraries focus upon the design and structure of primary and secondary informational pages, while very little attention is paid to these same aspects with respect to a library's DRS pages. As a result, the user may be able to locate the service but encounter great difficulty actually trying to use it. Features such as frames, browser incompatibilities, and the requirement for the user to download a special client before he or she can use the service still pose barriers to users of DRS. This may be due to the fact that many libraries employ a third-party software application to provide their services. In such cases, the library may have very limited control over the design and functionality of the system.

Most academic libraries seem to view their digital reference services as a valuable and rewarding, albeit somewhat challenging, endeavor. With the exception of a handful of institutions that have successfully marketed their services and are reporting high traffic levels,3 many academic libraries find that their chat-based digital reference services are failing to live up to expected use.4–7 And yet, while some libraries' chat statistics may not measure up to the number of e-mail transactions, the goal of providing assistance to users in all available formats precludes not offering this type of service. For a number of librarians, the low usage level of chat reference services is perhaps seen as a blessing in disguise due to the tremendous amount of staff time necessary, and the many drawbacks inherent in attempting to effectively instruct a remote patron. However, it is important to ask why these services are not experiencing the level of use that common sense and expressed user need would indicate they should.

Small to medium-sized academic medical libraries' use of digital reference chat is growing slowly. An exploration of 132 academic United States health science library Web sites found that 36 (27 percent) provided a combination of chat and e-mail reference services, as compared to 25 (21 percent) of 121 surveyed in a previous 2002 study, an increase of approximately 6 percentage points.
Ninety percent of the academic health science libraries surveyed offered at least e-mail digital reference services.8 A prior study, conducted in 2002, pointed out that medical libraries ''. . . were not offering
chat service for a variety of reasons, including (1) staffing shortages, (2) perceived lack of interest by library users, (3) chat software issues, and (4) use of other reference services.''9 This study proposes that a low level of usability for these services, combined with a lack of effective, prominent placement of the service on the library's home page, contributes to their diminished use, and thus to the slow implementation and user utilization of online chat reference services. For this study, 119 medical school library Web sites were examined to determine the type of digital reference services (DRS) offered (if any), how easily study participants were able to locate these services, and how quantitative data regarding user activity compare with user perceptions of site navigation.
LITERATURE REVIEW

While literature specifically concerning usability issues with online reference services is just beginning to emerge, the general literature concerning the usability of Web pages is extensive, ranging from ''how-to'' guides to case studies. Oulanov and Pajarillo10 reported on a usability study at a New York university focusing on the positive/negative emotional affect, efficiency, learnability, control, and overall helpfulness of the CUNY+ database. This study used an altered version of the de facto standard for software evaluation, the Software Usability Measurement Inventory (SUMI), and ultimately reported a mixed outcome, showing that users' ''feelings'' toward the system were more positive than their assessment of the technical aspects. Allen11 reported a similar finding on the subjective emotional aspects of usability testing and stated that ''Most participants rated the interface quite favorably for its navigability and visual aesthetics, yet the results show a large number of missed answers, needlessly long navigational paths, and fairly long intervals of time between mouse clicks, clearly indicating user difficulty.''
However, user reaction to the aesthetic and subjective qualities of a page may not be a reliable indication of the efficacy of the more quantifiable aspects of the interface, such as the number of navigational links required to successfully locate services. Indeed, Gullikson et al.12 report in their overview of usability testing of a university Web site that ''In essence, a site may be visually appealing, contain all the resources that meet the site's objectives, but still be humanly unusable.'' In further support of the dichotomy between aesthetics and functionality, Gullikson goes on to state that ''Overall, participants' performance in using the Web site to find answers to questions was poor - and these were not difficult questions.'' This finding is all the more surprising given that the university Web site was chosen for evaluation because it had received awards for its design. In order to overcome some of the obstacles of site navigation, effective online help is extremely important from a usability standpoint, yet the literature addressing this specific point is somewhat sparse. In her overview of a university Web site that had been redesigned, McMullen insightfully observes, ''With distance learning and the resultant proliferation of remote library use, we often do not ever see or meet the users that we are serving.''13 As a result of the removal of the patron to locations far distant from the library, it becomes increasingly imperative that the Web interface ''be clear and uncluttered, easy to maneuver, and provide built-in redundancy to
accommodate different learning styles.'' Moreover, McMullen observed ''users are not interested in reading a research guide prior to doing research. They only want help to be available when they need it and from the screen that they are using.'' This was an important observation as it represents the recurring thread in the literature lamenting the shortcomings of online help. McGillis and Toms14 include locating help as one of the tasks that subjects participating in a usability study of a university Web site were asked to complete. Interestingly, they reported that while the majority of participants were able to locate an online guide, many had difficulties with the language used, which ultimately impeded their efforts. The authors report, ''The terminology was not meaningful to participants. When asked what was the hardest part about completing the task, participants answered: 'not knowing what heading to look under' . . . and 'figuring out what the categories meant.' '' However, many view online help screens as an afterthought, or a tool of secondary importance. Battleson, Booth, and Weintrop,15 in their case study of the usability testing of a university Web site, state, '' 'Web Search' and 'Need Help' were viewed as secondary tools for supporting library research. However, the test revealed surprising data on the latter two links. From the main screen the 'Web Search' and 'Need Help' links both failed the 'easy to learn' criterion, creating enough confusion with users that they were rarely selected.''
All too often, the usability aspects of a site that pertain to online help are overlooked or not given the attention that is due. Given the amount of time and resources required to conduct full-scale usability testing, one can hardly blame testers for not concentrating on this facet. Dickstein and Mills16 point out that when conducting formal usability testing, ‘‘You can’t test everything. [You must] decide what are the most important tasks you want users to be able to perform on your site.’’ And yet seeking effective online help is important enough to warrant further study, even if its significance is not immediately apparent. Dickstein and Mills recognized this and included help features as a result of their comprehensive usability testing. Not only did the new interface include an impressive list of help topics (called ‘‘How to Find. . .’’), they also incorporated a help feature in the form of ‘‘Tips’’ sprinkled throughout the site at the point of need.16 Point of need assistance is vital to the effectiveness and success of any digital reference initiative. Schneider17 points out that ‘‘One of the cardinal and transformational rules of online reference is that the user isn’t remote; the librarian is.’’ Schneider goes on to quote Ann Lipow, who says ‘‘It’s the librarian’s job to meet the users where they are, to seek them out, to market in language intelligible and attractive to our target communities, and to customize services based on the users’ needs, preferences, and timetables. . .’’ Additionally, Trump and Tuttle18 echo this sentiment when they state ‘‘Autonomous Internet users have come to expect to find the information online, and they expect to find the same level of support for the information they find online as they would in the library.’’ However, even though the study outlined by Trump and Tuttle goes a long way in explaining why multiple links to online reference services must be present at the point of need, they fall short of actually commenting on the usability aspects of the reference service itself.
Perhaps in an effort to avoid a cluttered home page, or perhaps to keep site links orderly, many institutions that have implemented a digital reference service have placed the link to the service underneath a category heading, such as ''services'' or ''reference''. Commenting on this practice, Alice Kawakami19 writes, ''Burying the link is a sure way to sink a project. During UCLA's first pilot in spring 2001, the link was buried two clicks down on the pages of the Undergraduate Library and the Biomedical Library. We only had 14 callers during the first quarter. When the icon was placed on the main library homepage, the number of calls went up to 45. The following quarter it went up to 100. If you fear that your service will be overrun, limit it to primary clientele or to specific resources such as the catalog.''
But making links more obvious and meeting the patron at the point of need is winning only half the battle. In addition to positioning online reference services in the right places, the service's interface must also be usable. The literature concerning the usability aspects of digital reference services is just beginning to emerge, and studies principally concerning academic medical libraries are scarce. Perhaps the best overview of usability with respect to digital reference services comes from Johnson20 who contributed a crucial chapter that focuses almost exclusively on accessibility and appearance of digital, or 'e-reference,' pages in a book on the implementation of digital reference service. Although Johnson cites some of the inherent challenges of digital reference services as a possible reason why users have become slow to accept it, he suggests more strongly that ''significant problems lie in the presentation of the service itself.'' Johnson further asserts that ''Many [e-reference service] sites are difficult to find, and a few can only be located by wandering from page to page until one accidentally happens upon them. When they are located, some sites offer insufficient guidance as to what information to include, while others are extremely restrictive in what they allow.'' Johnson's study, a review of twenty-eight libraries in the Midwestern and Western United States, is one of the only multi-state studies currently focusing primarily on the usability of digital, or 'e-reference,' services and therefore may be viewed as a foundation for further inquiries. Johnson examined several key aspects of digital reference including the number of pages within each library's site that included a link to a digital reference service, as well as the language used to describe the service. In addition, he reviewed the overall ease of use of the service and the inclusion/exclusion of explanatory text, along with several other usability aspects. Johnson makes several recommendations for improving online reference services including the following:
• service should be linked from library home page;
• links should appear throughout the site;
• the name should describe the service; and
• links throughout the site should have the same name.20

Johnson's study, along with the peripheral body of literature concerning both usability and the implementation of digital
reference services, is especially important to the continued research activities and evaluation of online digital reference services. Unfortunately, a more extensive study specifically focused upon the usability aspects of digital reference services appears to be lacking in the literature. Johnson's work represents a significant start toward the in-depth examination of this area and will serve as the point of departure from which the authors proceed in attempting to broaden the knowledge base of the field.
METHODOLOGY

This study utilized a multi-tiered approach using a survey instrument, coordinating online worksheets, and a proxy server, intended to build upon the results of a preliminary case study conducted by Dee8, which is described briefly below.

Preliminary Study

In 2004, with the aid of graduate students in an advanced health science bibliography course, and graduate assistants in the school of library and information science, Dee8 updated a 2002 study21 on digital reference services, gathering updated data on current trends. Data were collected through exploration of over 100 academic health science library Web sites in search of links to digital reference services, by e-mail correspondence with a sampling of academic health science library personnel on the topics of chat (including IM) and e-mail reference services, and by chat sessions with health science libraries and consortia offering chat services. Students in the study8 recorded their impressions of the usability of the Web sites they explored, specifically in terms of the ease of location of digital reference services. The students explored the Web sites by locating and following hyperlinks labeled with words that they believed were likely to take them to digital reference services. Results indicated that digital reference services were not easily identified on some of the academic health science library Web sites. On over a third of the library Web sites surveyed, students disagreed or strongly disagreed that the site was organized/presented in a way that made it easy to find the link(s) leading to digital reference services, particularly with respect to the labeling of relevant links. Over a third of students also indicated that they disagreed or strongly disagreed that the buttons or hyperlinks were clearly labeled. This multi-tiered approach to the exploration of digital reference services was useful in that some Web sites that appeared to have active digital reference chat services had in fact discontinued them, and in several cases e-mail inquiries uncovered a chat location that had previously gone unnoticed. E-mails were also useful in determining whether a library currently without chat or IM services intended to implement something in the near future, or had recently discontinued the service. Comments were also solicited from librarians without such services to collect their reasons for not implementing chat or IM reference. Live chat with librarians provided valuable additional information. However, some academic health science libraries did not allow open access to their digital reference services beyond their primary users, and in those cases e-mail messages were sent to request information about the chat services offered. All data collected were recorded using the Flashlight Survey tool.
Prior to this study, students enrolled in a medical and consumer information sources class provided preliminary data from their surveys and interviews, and this class raised the questions about medical school library usability that triggered additional research.

Current Study

The current study used both qualitative and quantitative methodologies to gather additional data, building upon what was already gleaned from the preliminary study. Participants used in the study were volunteers enrolled in a graduate-level library science course dealing with special libraries with an emphasis on the health sciences. The class contained twenty-seven students, fourteen of whom opted to participate in the study. In all, the study population comprised participants ranging in age from twenty-six to forty-five years old, approximately 50 percent male and 50 percent female. All participants were enrolled in graduate-level library science curricula, taking advanced courses, and all had completed their required library and information science core courses (including one pertaining to reference sources). When asked to rate their information searching skills, participants reported that their skill levels ranged from low–intermediate to advanced. Fourteen students responded to questions from the same survey instrument used in the preliminary study, but they were not asked to contact a librarian at the library being surveyed. Students were placed in a controlled environment for these Web explorations for a specified period of time. Each student was provided with a computer in a lab on which to complete their library Web site explorations.

A list of medical school libraries located within the United States and Canada was gleaned from two Web-based sources. The first source was the National Library of Medicine's online membership directory (http://nnlm.gov/members/adv.html),22 which allows users to locate member libraries around the country within certain parameters. For the purposes of the study, full-member academic libraries in all states and regions were selected for retrieval. As a result of these parameters, 420 libraries were included on the original list. Upon examination of the list, it was determined that certain entries, such as small departmental or very specialized libraries, libraries with very limited holdings, and those with no associated Web pages, were not good candidates for the study and were eliminated from the participant list. As a secondary resource, the listing of ''Medical/Health Science Libraries on the Web''23 provided by the Hardin Library at the University of Iowa (http://www.lib.uiowa.edu/hardin/hslibs.html) provided the investigators with a list of health science libraries in Canada. Large academic health science libraries were selected from this list and combined with those taken from the NLM online directory. Ultimately, 119 medical libraries were selected for evaluation for the study. Schools were assigned alphabetically to participants. The first seven participants were assigned nine schools, while the last seven participants were each assigned eight schools.

A paper copy of the survey instrument (see Appendix A) was placed next to each computer for the student to follow. Each participant was provided with an online worksheet listing their group of school Web sites to review and instructed to fill in the answers from the survey in the spaces provided in the online
worksheet. Each worksheet was assigned a unique name based upon the IP address of each computer and saved to the investigator's network drive. At the end of the exercise, the student was instructed to keep the worksheet open on the computer's desktop while the investigator went around to each machine and saved each participant's answers. While the students were working on the survey, all network activity for each computer located in the computer lab was routed through a proxy server. The investigators used this proxy server to record quantitative data that could be extracted from Web logs. Each log detailed the navigational activities for each student, including the time/date stamp and the number of hyperlinks a student traversed to locate specific information on a medical school library Web site. These data, after analysis, were compared to student-reported perceptions of time spent on site and number of clicks to reach digital reference services resources.

Data Collection

Formal methods of data collection using qualitative and quantitative instruments were developed to rigorously investigate students' perceived usability of medical school library Web sites, for comparison with student feedback from the earlier studies indicating that the digital reference services were difficult to find. The authors developed four data-gathering instruments to test the usability of the medical school Web sites from the perspective of a student accessing the site for the first time. Instruments:
• Online worksheet with hyperlinks to access the medical school Web sites.
• Survey instrument.
• Online response sheet tailored to each participant to accompany the survey instrument.
• Online session using a proxy server to record students' navigational activities.
Each data collection instrument is described in more detail below.

Online Worksheet With Hypertext Links

Graduate students in the first and second rounds of the study each accessed and reviewed a prescribed group of medical school library Web sites to assess the usability of the site in terms of ease of locating the library's digital reference services (DRS). The intent of the data collection was to simulate the perspective of a student's search for assistance with medical information from a location remote from the medical library. To provide easy access to the Web sites, a Web-based document containing a list of the libraries with an active link to each library's home page was distributed to each student. Each student was assigned a group of schools to evaluate, and each student provided information for their assigned schools' library Web sites based upon questions provided by the survey instrument.

Survey Instrument

Students also completed a twenty-question survey (see Appendix A) relying on information gathered at each medical
school library Web site. The survey was developed in Word, and copies of the survey were printed out and supplied to each participant. The survey questions contained a combination of Yes/No responses, multiple choice, Likert scale, and open-ended questions.

Online Response Sheet

Students evaluated the perceived usability of the digital reference services for each school, counting and recording the number of links they followed to get to the libraries' digital reference services. While evaluating the services, students also answered a series of questions concerning the usability of the Web sites from the point of view of a student. Students reported their observations on an online response sheet tailored to match the survey instrument. Each survey question was copied into an Excel spreadsheet to assist participants in recording the information gleaned from the medical school libraries about their digital reference services and to gather data on the students' impressions of the usability of the medical school Web sites.

Proxy Server

The second part of the data collection involved the Web logs generated by a proxy server through which all computers in the IP range of the computer lab were routed during the investigational session. Students met in two sessions at the University of South Florida Tampa Library computer laboratory, networked through a proxy server. The resulting log files from the proxy server were used to determine the actual number of navigational levels (or clicks) a student had explored in order to find digital reference services on each of the 119 medical school library Web sites. The proxy server recorded usable, quantifiable data by providing a navigational trail from which the investigators could glean the number of times a student clicked the mouse on a specific university library's Web site to locate DRS. To extract relevant data from over one thousand pages of log files produced by the proxy server, character strings representing HTML images and selected JavaScript applets had to be deleted. It should be noted that sorting the log files and removing irrelevant data is both a challenging and tedious process and introduces a small amount of error into the data set.
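To make the log-cleaning step concrete, the short Python sketch below shows one way image and script requests could be stripped and the remaining page requests counted per student and per library site. It is illustrative only: the log file name ("proxy.log"), the field positions, and the filtering pattern are assumptions about a generic proxy log layout, not the actual format or code used in this study.

```python
import re
from collections import defaultdict

# Requests for page decoration rather than navigation: images, scripts,
# and style sheets. These are the kinds of entries stripped from the logs.
NOISE = re.compile(r"\.(gif|jpe?g|png|ico|js|css)(\?|$)", re.IGNORECASE)

def count_page_clicks(log_path):
    """Return {(client_ip, site_host): click_count} from a proxy log.

    Assumes each line holds whitespace-separated fields with the client IP
    in field 0 and the requested URL in field 3 (a hypothetical layout;
    adjust the indexes to match the real log format).
    """
    clicks = defaultdict(int)
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip malformed or truncated entries
            client_ip, url = fields[0], fields[3]
            if NOISE.search(url):
                continue  # drop images, JavaScript, and style sheets
            host = url.split("//")[-1].split("/")[0]
            clicks[(client_ip, host)] += 1  # one navigational "click"
    return clicks

if __name__ == "__main__":
    for (ip, site), n in sorted(count_page_clicks("proxy.log").items()):
        print(f"{ip} -> {site}: {n} page requests")
```

Under these assumptions, the per-site counts would correspond to the navigational levels (clicks) described above, while the filtered entries are the kind of image and JavaScript strings the investigators had to delete by hand.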
RESULTS

Data from each instrument were gathered, compiled, and averaged to obtain an overall picture of student perceptions of DRS usability and site navigability. The averaged results of student responses on each question from the survey were compared with the averages compiled from the proxy server logs. Specific results from the survey and the proxy log files are detailed below.

Survey Results

The survey measured three distinct areas of digital reference services: the kind and extent of services offered, the navigability of the service, and the overall usability of the site in relation to the digital reference service offered. Within the first area, survey respondents provided data regarding the type of service(s) offered (e-mail, chat, both, or none) and any additional information they could glean, such as the operating
hours and turnaround times for responses. In the second area, respondents provided specific feedback concerning the amount of time they spent looking for digital reference services, the number of links they followed to locate the service, whether they employed the use of a site search engine or site map to locate the service, and whether or not services were linked directly from the home page. Finally, respondents reported on the overall usability of the service, including whether or not the site used frames, the level of language used (noting if terms were employed that the respondent did not understand), whether labeled graphic buttons were used, and the general clarity of labeling for digital reference services.

Upon examination of the compiled data from the survey, patterns began to emerge surrounding the various offerings and qualities of digital reference services offered by health science libraries. Due to reporting errors, three surveys were disqualified, leaving 116 surveys out of the original 119. However, given the percentage of usable surveys (97.5 percent) remaining from the original population of 119, the investigators are reasonably confident that the data represent an accurate portrayal of participants' experiences. In the first area (kind and extent of services offered), survey respondents reported that 21 out of 116 (18.1 percent) school libraries surveyed offered a live chat service (Fig. 1). One hundred out of 116 (86.2 percent) offered at least e-mail reference (some of which also offered chat), 79 (68.1 percent) offered e-mail service only, and 16 libraries (13.8 percent) offered neither e-mail nor chat services. It should be noted that these numbers are based on what participants were able to locate and do not necessarily reflect actual digital reference service offerings. Additionally, 68 out of 116 (41.4 percent) libraries provided additional information about their digital reference services, such as operating hours and turnaround times. Of the twenty-one libraries that offered chat and e-mail, eighteen (85.7 percent) provided information such as hours of operation and/or turnaround times. For those libraries offering only e-mail services, forty-four of the seventy-nine (55.7 percent) provided additional information. Sixteen libraries that offered digital reference services in some form offered no additional information. Within the entire population of libraries offering both chat and e-mail services with additional information provided, sixteen libraries (88.9 percent) furnished hours of operation for the service while eleven (61.1 percent) gave turnaround times for responses. In libraries with e-mail-only services, thirty-one (70.5 percent) gave the hours of operation and thirty-seven (84.1 percent) provided turnaround times for responses.

Figure 1 Survey Participant Response Indicating Percentage of Libraries Offering Chat

The second area concerned the number of links navigated and the amount of time students reported spending in searching for the library's digital reference services. The survey data showed that each student navigated an average of 2.5 pages and spent an average of 5.8 minutes before either locating the service or giving up. It is important to note, however, that the data compiled from the surveys represent student-reported times only. When comparisons were made between students' self-reported times and the actual times recorded in the log files, large discrepancies emerged (please refer to the Proxy Server/Web Log Results section). With respect to the position and usability of the services once located, students reported that the digital reference services were located, on average, 2.25 clicks away from the library's home page. Eleven participants (78.6 percent) reported that the service was linked directly from the library's home page, while two participants (14.3 percent) reported that the library provided no link from the home page directly to the service. One participant (7.1 percent) did not respond to the question (Fig. 2). With respect to student opinions regarding the ease with which they were able to locate DRS services, nine participants (64.2 percent) responded that they either strongly agreed or agreed that the service was easy to locate. In contrast, five participants (35.7 percent) disagreed or strongly disagreed. Overall, the results concerning student opinion of ease of location for DRS are fairly consistent when compared to the student-reported data regarding the position of the service. In both cases, the majority of students who reported that the service was linked directly from the library's home page also reported that they felt the service was easy to locate. A significantly higher percentage felt that the service was not easy to locate when compared with the number of students reporting that no links to the DRS were provided from the home page. The number of students who failed to respond to the questions concerning the number of clicks from the home page to the DRS may account for the discrepancy to some degree.

Figure 2 Service Linked from Home Page
Figure 3 Use of Language for Labeling DRS Services
With respect to the language used in naming digital reference services, students overwhelmingly reported that the libraries did not use obscure or unknown labels to identify their digital reference services. In fact, only one of the respondents (7.1 percent) indicated that the libraries they surveyed had used confusing language (Fig. 3). In a related query, students were also asked if the library made use of graphical buttons to link to their DRS. In response to this question, ten students (71.4 percent) indicated that the libraries they surveyed did not use graphical buttons to link to their digital reference services, while three students (21.4 percent) stated that the library did use buttons, and one student (7.1 percent) did not respond to the question. Of the three respondents who reported that the library used buttons, two (66.7 percent) agreed or strongly agreed that these buttons were clearly labeled. One student (33.3 percent) strongly disagreed with the statement.
Proxy Server/Web Log Results

As a means of measuring the accuracy of student-reported data, all traffic during the testing period was routed through a proxy server, and the resulting logs were analyzed and compared to the student-reported data. Due to technical errors, twenty-four of the Web log sessions had to be discarded from the original pool of 119. The remaining ninety-five Web logs revealed some very interesting inconsistencies. First, in comparing the time each user spent on each site from the Web log to the user-reported data, it became apparent that test participants almost always overestimated the amount of time spent on a site, sometimes grossly. While the average time users reported spending on sites was 5.8 minutes, log data showed the actual average time spent on a site to be only 2.7 minutes (Fig. 4). The average gap between the user-reported time and the actual time was 2.2 minutes. On one occasion, the user reported spending over ten minutes on a site when log files revealed that less than two minutes was actually spent on the site. Such an extreme might be attributable to an error in reporting.

Figure 4 Reported Versus Actual Time Spent on Site

Interestingly, the proxy server log files disclosed user behavior oddities. For example, upon analysis of the user logs, it appeared as though one user had veered off the instructed course entirely and instead visited the America Online and CNN Web sites. The investigators theorized that the survey participant might have had two concurrent instances of Internet Explorer running on the machine while attempting to complete the survey. Indeed, interspersed among the log entries for the above-mentioned sites could be found those for one of the libraries assigned to the student. Although this particular student evaluated several schools, this was the only instance where nonlibrary sites appeared in the log file, indicating that the majority of participants stayed on track throughout the testing period. When the investigators went back during the log analysis and retraced selected students' steps as they navigated through the sites, they discovered that in many cases, students took circuitous routes in locating the library's digital reference services. In addition, it was noted by one investigator that several students had reported that links to libraries' DRS were not accessible directly from the home page, when in fact the link was present. These observations hold a great deal of significance for those designing library Web pages. Indeed, if graduate students studying information science are unable to locate links present on a home page, medical students are even less likely to identify them.
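The pairing of self-reported and logged times can be sketched in the same spirit. The snippet below uses invented placeholder identifiers and timestamps rather than data from the study: for each student/site pair it derives elapsed minutes from the first and last logged requests and averages the gap against the reported figure. Because sessions lost to technical errors drop out of the pairing, the mean per-pair gap need not equal the simple difference between the two overall averages, which may partly explain why the reported figures of 5.8, 2.7, and 2.2 minutes do not subtract exactly.

```python
from datetime import datetime

# Placeholder records only: reported minutes from the survey worksheets and
# first/last request timestamps recovered from the proxy logs. These values
# are invented for illustration, not data from the study.
reported_minutes = {("student01", "libA"): 10.0, ("student01", "libB"): 4.0}
log_timestamps = {
    ("student01", "libA"): ("2005-03-12 14:02:10", "2005-03-12 14:03:55"),
    ("student01", "libB"): ("2005-03-12 14:05:00", "2005-03-12 14:08:30"),
}

def actual_minutes(first, last, fmt="%Y-%m-%d %H:%M:%S"):
    """Elapsed minutes between the first and last logged request."""
    delta = datetime.strptime(last, fmt) - datetime.strptime(first, fmt)
    return delta.total_seconds() / 60.0

gaps = []
for key, reported in reported_minutes.items():
    if key not in log_timestamps:
        continue  # session lost to the technical errors noted above
    actual = actual_minutes(*log_timestamps[key])
    gaps.append(reported - actual)
    print(f"{key}: reported {reported:.1f} min, logged {actual:.1f} min")

if gaps:
    print(f"mean over-estimate: {sum(gaps) / len(gaps):.1f} minutes")
```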
DISCUSSION AND CONCLUSION
These results indicated that while most libraries offered some form of digital reference services, the extent and nature of the services varied: only a minority of the libraries offered chat, while an even smaller group offered no digital reference services at all. This shows a strong commitment on behalf of academic medical libraries to reach out to their patronage using asynchronous electronic means (e-mail). For the most part, the students reported favorably on the usability aspects of the libraries' DRS, although the discrepancy between the reported number of navigated links and the actual average number as determined from the proxy server log files is interesting. To clarify, most of the participants (78.6 percent) reported that digital reference services were linked directly off the libraries' home pages, but proxy server logs indicate that users navigated an average of four links before they found the service. Upon reflection, the investigators supposed that the students realized, after locating the libraries' DRS, that they might have missed the link to the service from the home page and therefore reported that the service was linked from the home page, even if they did not find it immediately. Nevertheless, libraries seem to be doing a fairly good job of labeling their services so that students are able to find them. The fact that very few students reported libraries using obscure or unknown language to label their services is encouraging, and this is an aspect of usability that is often overlooked with respect to digital reference services. It is significant to note, however, that the participants of this study were graduate-level information science students who may possess a somewhat extensive knowledge of library Web page design and jargon. Graduate students enrolled in library science programs may also have more training in information retrieval than the average medical student. As a result, participants were likely to understand the terms or wording on the medical school library pages better than non-library-oriented students, and perhaps even to see them as non-jargon when in fact they were not entry-level terms. Would a medical school student know to access the button or link labeled ''Services'' to locate the digital reference services? Perhaps not, but a library science student might. In recognition of this, it would be advisable for future investigators to use first- or second-year medical students, or even pre-medical students, for their study populations. Ultimately, the investigators concluded that a lack of usability as an explanation for the underuse of digital reference services (primarily chat) was not wholly supported by the results of the study. While proxy server logs did indicate that students were not able to locate the link to libraries' DRS immediately, most of the time the service was successfully located. In addition, when asked their opinions, the majority of students reported that they felt the services were easy to locate. However, it must be acknowledged that the goals of the study participants were different from those of the typical medical student searching for information. The study participants viewed the exercises as
tasks to be completed and approached them as such. This orientation is much different from that of a medical student who may simply happen upon services during the course of their research activities, or who may be searching for help. Of course, over 30 percent of respondents did feel that locating the service was not easy. The investigators felt that this was a significant percentage and should serve as a reminder that usability issues must always remain in the forefront of the design of digital reference services. Additionally, the investigators recognize the need for additional research in this area, as this study focused upon generalities and involved some significant limitations. The investigators recognize the need in future studies to employ the use of a control group. Such a group would ideally be comprised of new Web site users in the medical and allied health fields with a specific research goal. Members of this control group would be able to provide more definitive answers regarding the usability of, and nature of, digital reference services offered by institutions. The findings of the control group could then be compared with those of the test group to establish a higher degree of confidence in the validity of survey results.

Acknowledgment: The authors would like to thank Mr. Douglas Turk, Mr. Joshua Newhouse, Ms. Barbara Lewis, Ms. Ebony Grigsby, and Ms. Sarah Austin.
APPENDIX A: Survey Instrument

1. What Virtual Reference Services were listed on the medical school site?
• Live chat
• E-mail virtual reference services
• Both e-mail and chat services were available
• Other. Please describe. ___________________

2. If the medical school had an e-mail reference service (either with Chat or as the only virtual reference service), what were some of the characteristics of the service that were available on the Web page?
• Hours
• Turn-around time
• Other? Please list: ____

3. Approximately (your nearest estimate) how many minutes did you browse the Web site before you were able to locate information indicating whether or not the medical school offered a virtual reference chat service? ____ Minutes

4. How many links/Web pages did you navigate to get from the Home Page to the general information page concerning Digital Reference Services (chat OR e-mail)? ___ # of links

5. How many links/Web pages did it take to get to the Chat Service (if available)? ___ # of links

6. Was there a search engine on the site to help find Digital Reference Services?
• Yes
• No (skip to question 8)

7. Did a search of the library's site reveal that the library had a chat service that you did not find while browsing their site?
• Yes, the search did locate the Chat I could not find while browsing
• No

8. Was there a link/button inviting e-mail to ask questions about the site?
• Yes
• No

9. Did the link(s) to the digital reference service(s) include words on the Web site that you did not understand?
• Yes. Examples: _______________________
• No

10. Did the site use a divided window (or frame) to present the content of the page?
• Yes
• No (If no, skip to 12)

11. If yes, do you think the information in the frame (i.e., table of contents) helped you locate information on digital reference services?
• Yes, it helped me
• No, the divided window confused my search or annoyed me

Answer each statement below, identifying your response to the statement with 1 representing ''I strongly disagree with the statement'' and 4 representing ''I strongly agree with the statement''. Otherwise, choose ''Yes'' or ''No'' in response to the question provided.

12. The site was organized/presented in a way that made it easy to find the link(s) leading to digital reference services.
1 (Strongly Disagree)  2  3  4 (Strongly Agree)

13. The site offered Help screens, FAQ help, or e-mail help to assist with the location of either the e-mail reference service or the virtual chat services.
1 (Strongly Disagree)  2  3  4 (Strongly Agree)

14. Did this site use graphical buttons as links to the digital reference services?
• Yes
• No (If no, go to 16)

15. The buttons were clearly labeled or identified.
1 (Strongly Disagree)  2  3  4 (Strongly Agree)

16. Did you use a Web site map or site index to look for the Chat?
• Yes
• No (If no, you are finished)

17. The site map was useful to locate Chat.
1 (Strongly Disagree)  2  3  4 (Strongly Agree)

NOTES AND REFERENCES
1. Sloan, B., Digital Reference Primer (2002), http://people.lis.uiuc.edu/~b-sloan/primer.htm.
2. Nielsen, J., Designing Web Usability, Vol. 419 (New Riders: Indianapolis, IN, 2000).
3. Ronan, J. S., Interactive Reference Coordinator, RefeXpress (2004).
4. Campbell, K. A., Jones, M. F., & Shuttle, J., ''Chat Reference: One University's Experience,'' Reference Librarian (2002): 297–309.
5. Lee, I. J., ''Do Virtual Reference Librarians Dream of Digital Reference Questions?: A Qualitative and Quantitative Analysis of Email and Chat Reference,'' Australian Academic and Research Libraries 35 (2004): 95–110.
6. Wells, C. A., ''Location, Location, Location: The Importance of Placement of the Chat Request Button,'' Reference and User Services Quarterly 43 (2003): 133–137.
7. Coffman, S., & Arret, L., ''To Chat or Not to Chat—Taking Another Look at Virtual Reference, Part I,'' Searcher 12 (2004): 38–46.
8. Dee, C., ''Digital Reference Service: Trends in Academic Medical Libraries,'' Medical Reference Services Quarterly 24, no. 1 (2005): 19–27.
9. Dee, C. R., ''Chat Reference Service in Medical Libraries: Part 1. An Introduction,'' Medical Reference Services Quarterly 22 (2002): 1–13.
10. Oulanov, A., & Pajarillo, E. J. Y., ''CUNY+ Web: Usability Study of the Web-based GUI Version of the Bibliographic Database of the City University of New York (CUNY),'' Electronic Library 20 (2002): 481–487.
11. Allen, M., ''A Case Study of the Usability Testing of the University of South Florida's Virtual Library Interface Design,'' Online Information Review 26 (2002): 40–53.
12. Gullikson, S., Blades, R., & Bragdon, M., ''The Impact of Information Architecture on Academic Web Site Usability,'' Electronic Library 17 (1999): 293–304.
13. McMullen, S., ''Usability Testing in a Library Web Site Redesign Project,'' Reference Services Review 29 (2001): 7–22.
14. McGillis, L., & Toms, E. G., ''Usability of the Academic Library Web Site: Implications for Design,'' College & Research Libraries 62 (2001): 355–367.
15. Battleson, B., Booth, A., & Weintrop, J., ''Usability Testing of an Academic Library Web Site: A Case Study,'' Journal of Academic Librarianship 27 (2001): 188–198.
16. Dickstein, R., & Mills, V. A., ''Usability Testing at the University of Arizona Library: How to Let the Users in on the Design,'' Information Technology and Libraries 19 (2000): 144–151.
17. Schneider, K., The Distributed Librarian: Live, Online, Real-Time Reference (November 2000), http://www.ala.org/ala/alonline/inetlibrarian/2000columns/november2000.htm.
18. Trump, J. F., & Tuttle, I. P., ''Here, There, and Everywhere: Reference at the Point-of-Need,'' Journal of Academic Librarianship 27 (2001): 464–466.
19. Kawakami, A. K., ''Delivering Digital Reference,'' Library Journal 1976 (2002): 28–29.
20. Lankes, R. D., in Implementing Digital Reference Services: Setting Standards and Making It Real, vol. 232 (Facet: London, 2002).
21. Dee, C. R., ''Chat Reference Service in Medical Libraries: Part 2. Trends in Medical School Libraries,'' Medical Reference Services Quarterly 22 (2003): 15–28.
22. National Library of Medicine, Find a Library, http://nnlm.gov/members/adv.html.
23. Hardin Library for the Health Sciences, University of Iowa, Medical/Health Sciences Libraries on the Web, http://www.lib.uiowa.edu/hardin/hslibs.html.