LISR 17, 163-182 (1995)
Quality in Information Services: Do Users and Librarians Differ in Their Expectations?

Susan Edwards
Mairéad Browne
Department of Information Studies, University of Technology, Sydney, Australia

This article describes part of a project designed to develop a user-based approach to measuring the quality of an information service, that is, the extent to which the service provided by the library meets or exceeds the users’ expectations for an excellent or superior service on a consistent basis. The main question investigated is whether there are differences between the expectations that academics hold of information services provided by academic libraries and librarians’ perceptions of these expectations. The results show that academics and librarians have similar expectations, but there are differences in the emphasis each group places on aspects of service. Librarians, for example, underestimate the importance to academics of the responsiveness of a service and overestimate the importance of the characteristics of the staff who provide the service. Practical applications of the findings are discussed and theoretical implications explored.

Direct correspondence to Susan Edwards or Mairéad Browne, Department of Information Studies, Faculty of Humanities and Social Sciences, University of Technology, Sydney, PO Box 123, Broadway NSW 2007, Australia.
Librarians have long recognized the need to increase the range and maintain the quality of the products and services which they offer, and “total quality management” and “quality control” concepts have become important in many academic libraries. Nevertheless, “quality” remains a very difficult concept to define, and there are a number of different approaches which can be taken toward assessing quality in services (e.g., Bolton & Drew, 1991; Brown & Swartz, 1989; Cronin & Taylor, 1992; Murfin & Gugelchuk, 1987). This article looks at “perceived quality,” a user-based approach which assumes that quality “lies in the eyes of the beholder” (Garvin, 1984, p. 27). According to Parasuraman and his colleagues, users make this judgment of quality by assessing the extent to which an actual service meets or exceeds their expectations for an excellent or superior service on a consistent basis (Hébert, 1994; Parasuraman, Zeithaml, & Berry, 1985; Zeithaml, Parasuraman, & Berry, 1990). A high quality service, then, is defined as one in which (1) there is a perceived congruence between what clients expect and what they receive or (2) perceptions of service quality exceed expectations.
A low quality service is one in which perceptions of actual service are lower than expectations; a formal sketch of this comparison follows the list below. There are many reasons why users might perceive a service quality gap in an organization like a library. For example:

• Clients’ expectations for the service and management’s perceptions of these expectations may not match;
• Managers may not be able to translate clients’ expectations effectively into services that meet users’ expectations;
• The service which is delivered may not be consistent with service specifications;
• The service which is actually delivered and the information which is provided to the client about this service may not be congruent (Zeithaml et al., 1990); and
• The experience of the client and the providers’ perceptions of these experiences may differ (Brown & Swartz, 1989; Dervin, 1977; Parasuraman et al., 1985; Zeithaml et al., 1990).
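In the service quality literature on which this discussion draws, the expectation-perception comparison underlying these gaps is often written as a simple item-level difference score. The following is a minimal sketch of the idea in notation of our own (the study reported in this article measures only the expectations side):

```latex
% Perceived service quality as an expectation gap (notation ours).
% E_i : expectation score for item i on the 7-point scale
% P_i : perception score for the service actually received on item i
\[
    G_i = P_i - E_i
\]
% G_i \ge 0 : the service meets or exceeds expectations (high quality)
% G_i < 0  : perceptions fall short of expectations (a quality gap)
```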
Although each of these reasons is crucial, the first reason for a gap is the fundamental point from which quality of service must be measured. If there is a lack of congruence between users’ expectations and providers’ perceptions of these expectations, service quality will suffer regardless of how well services are planned, delivered, and marketed. This potential mismatch forms the focus for this article, which, in brief, seeks to identify whether there is a gap between the expectations of academics as users of university library information services and librarians’ perceptions of these expectations.

Among noncommercial service providers, librarians are exemplary in the amount of attention they have paid to the views of their users. Since the early 1970s there has been a high volume of studies which have tapped user responses to library services of all kinds. It might be reasonable, then, to conclude that there would be no difficulty in making a match between user expectations and librarians’ perceptions of these expectations. However, when these user studies are looked at carefully, along with the library performance measures which are used to tap user reaction to library services, it becomes clear that although users are asked for their views, it is usually on terms dictated by librarians. So, for example, when users are asked to assess a reference service, the questions are based on factors that the librarians feel are important, such as the relevance and accuracy of the information provided and the helpfulness of staff (e.g., Dalton, 1992; Hernon & McClure, 1987; Van House, Weil, & McClure, 1990). These may, of course, be very important factors from the users’ point of view, but there is opinion (e.g., Webster, 1987), as well as some research (e.g., Hébert, 1994), which suggests that the criteria applied by users in judging the quality of a service can be quite different from those that librarians consider most important. For example, Dervin and her colleagues have shown through extensive work that people seek information as part of the process of making sense of their world, getting past a barrier, or solving a problem (Dervin & Nilan, 1986). It follows that when these users are evaluating an information service, it will be in terms of the extent to which the information provided helped them make sense of a situation or solve a problem. In other words, users have frames of reference which are different from those of information providers and, hence, may evaluate the quality of a library service by using quite different expectations criteria (Hébert, 1994).

RESEARCH QUESTION
Based on the literature in the field, it is uncertain whether or not a gap exists between academic library users’ expectations and librarians’ perceptions of these expectations. This article provides findings which attempt to address the following question: What, if any, are the differences between the expectations that academics hold of information services provided by academic libraries and librarians’ perceptions of these expectations?

This question and the results reported here form part of a larger research study which aims (1) to identify the dimensions along which users of academic libraries assess the quality of information services and (2) to develop a questionnaire which can be used by university librarians to evaluate their information services from a user point of view. This work is still in progress.

FRAMEWORK
The research is based on the assumption that there are inherent differences between products and services. The features which distinguish services from products are:

• Services are based on a set of relatively intangible performances;
• Services are most often used at the time they are being produced;
• Services are heterogeneous and vary from one transaction to the next (Parasuraman et al., 1985); and
• The user is usually involved in the production and delivery of the service (Crosby, Evans, & Cowles, 1990).
Reference services and online bibliographic searches provide examples of both the differences between products and services and the difficulties users face in assessing the quality of these services. Most particularly, a user has no product to examine prior to “purchase,” and he or she must consume a process as well as an outcome. In light of this difficulty, one major thrust in consumer behavior and marketing research has been an attempt to identify the indicators that consumers use in their assessment of service quality. Generally this literature shows that there are one or more dimensions on which consumers base their evaluations of services. The most influential work in this area has been done by Parasuraman, Zeithaml, and Berry (1985, 1988, 1991; see also Zeithaml et al., 1990). Their early work identified 10 dimensions along which consumers were found to evaluate commercial services (see Table 1).
TABLE 1
Ten Dimensions of Service Quality

Tangibles
Reliability
Responsiveness
Competence
Courtesy
Credibility
Security
Access
Communication
Understanding the customer

Note: Table adapted from Parasuraman et al. (1985).
The more recent studies by Parasuraman and his colleagues have suggested that there are five broader dimensions of service quality (Zeithaml et al., 1990). Three of the broad dimensions, tangibles, reliability, and responsiveness, are the same as the original dimensions listed in Table 1. The new broad dimension assurance includes items from the original communication, credibility, security, competence, and courtesy dimensions. The new broad dimension empathy includes the items from the original understanding the customer and access dimensions (Zeithaml et al., 1990). Table 2 gives definitions for these broader dimensions. This work led to the development of a standardized instrument (SERVQUAL) which is widely used by retail companies to identify customers’ assessment of service quality. The research reported in this article drew on both the earlier and later studies of the Parasuraman research group, using their conceptualization as a basis for comparing librarian and user expectations.
TABLE 2
Five Broad Dimensions of Service Quality

Tangibles: Appearance of physical facilities, equipment, personnel, and communication materials.
Reliability: Ability to perform the promised service dependably and accurately.
Responsiveness: Willingness to help customers and provide prompt service.
Assurance: Knowledge and courtesy of staff and their ability to convey trust and confidence.
Empathy: Caring, individualized attention the firm provides its customers.

Note: Table adapted from Zeithaml et al. (1990).
METHODS

The first step in assessing whether or not librarians and academics differ in how they think of the quality of an information service was to develop a working definition of “information services.” Using the characteristics which distinguish services from products described earlier, “information services” was defined as those activities and outputs which facilitate the use of materials and information and which normally involve interaction between the user and the librarian. The most obvious examples are the reference and information desk, reader education programs, interlibrary loan, and bibliographic searches. Neither the collection nor catalogs per se were included within the parameters of the definition, although activities to make these more usable, such as maintaining information retrieval equipment in good order, were included.

Using this definition and the 10 dimensions of quality (see Table 1), a senior librarian with extensive experience in reader services in university libraries developed a list of potential expectations (indicators of quality) which users might have of an information service. The authors of this article and the librarian assessed and revised the items in order to ensure that all 10 Parasuraman dimensions were covered by several items and that the items were clearly stated and had face validity. The items were then formatted into Likert-type statements modeled on the SERVQUAL questionnaire (Parasuraman et al., 1988), and a sequence of the 61 items was developed using random methods. Examples of the items are:
Excellent libraries purchase reference materials quickly
Strongly disagree 1 2 3 4 5 6 7 Strongly agree

In an excellent library the information staff know individual users and their needs
Strongly disagree 1 2 3 4 5 6 7 Strongly agree
The 61 items formed the first section of the questionnaire. The second part of this first version of the questionnaire listed the five broad dimensions of quality derived from the more recent work of the Parasuraman group (Table 2). Respondents were asked to allocate a total of 100 points among the five broad dimensions (tangibles, reliability, responsiveness, assurance, empathy) according to how important each dimension was to them.

In order to ensure that the questionnaire included user-generated indicators of quality, we also held three focus group discussions with academic staff in science and technology faculties at three universities in metropolitan Sydney. Each of the groups was composed of between 8 and 13 participants who were selected from names suggested by librarians in each of the universities. Discussions centered around identification of those characteristics which the academics believed indicated a superior or excellent academic library information service. The participants also completed the questionnaire (designated as Version 1) at the end of the open discussion.

Group discussions were also held with senior information service staff at the same universities. The format of the groups was the same as that used for the academics. The library staff, however, were asked to take the perspective of their users, that is, they were asked to react and respond the way they thought their users would. The librarians also completed the Version 1 questionnaire, and again they were asked to put themselves in the place of their users.

As a result of the focus group discussions with the academics, 32 new items were added to the Version 1 questionnaire. Items generated by the librarians were not included. This resulted in a 93-item inventory which became Part 1 of a revised questionnaire (Version 2). Part 2 was unchanged. To eliminate bias which might be an artifact of the order of items, two versions (A and B) of the questionnaire were developed.

Version 2 (A and B) of the questionnaire was mailed to a randomly selected group of 300 academics at four Australian universities. The universities were selected in order to get a range of different types of Australian universities:

• A university with a significant external studies program,
• A major research university,
• A small new university, and
• A networked university formed in 1989 from colleges of the former advanced education sector.
We also sent the Version 2 questionnaires to the Chief Librarians at the same universities and asked that they be distributed to their senior librarians, who collectively numbered 55. The librarians were asked to fill out the questionnaire as they thought their users would. Eighty-two replies were received from academics (27% response) and 30 from librarians (55%).

DATA ANALYSIS
In order to measure reliability (consistency of measurement) in the questionnaire, Cronbach alphas were used. This technique measures the internal consistency, or the degree of relationship among the items in a scale or, as in this study, a particular dimension of a scale. If each item in a dimension is measuring the same aspect of quality, there should be a high correlation among the items in that dimension. The alphas for the 10 dimensions used in this study ranged from .73 to .88 and were considered acceptable reliability coefficients for a project of the kind being carried out (Churchill, 1979).

As stated earlier, the respondents’ expectations were measured by asking them to indicate, on a 7-point scale (strongly disagree (1) to strongly agree (7)), the extent to which they considered an excellent library service would possess the feature described by each of the 93 statements. Consistent with previous research in the area, the scales were assumed to be continuous and at an interval level of measurement. It was also assumed that (4) was neither agree nor disagree. Means were then calculated. If an item received a mean greater than 4, the indicator was considered necessary for an excellent information service; a mean less than 4 was interpreted as not necessary. In order to facilitate further comparisons between the responses of the academics and the librarians, the means were also converted to ranks. Within each group of respondents the item with the highest mean was assigned a rank of 1 and the item with the lowest mean a rank of 93.

Because of the low response rate in the survey of academics, it was decided to compare the pattern of results with those of the first phase of the study, which included the completion of Version 1 of the questionnaire. If similar patterns emerged from the two sets of results, the reliability and validity of the findings would be strengthened. The results of the first phase and the main study were strikingly similar. The major points of similarity are highlighted in the next section.
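The scoring steps just described (Cronbach’s alpha for each dimension, item means judged against the scale midpoint, and conversion of means to ranks) can be illustrated with a short script. This is a minimal sketch using hypothetical responses and invented column names, not the study’s data or code:

```python
# Minimal sketch of the scoring steps described above, using hypothetical
# 7-point Likert responses (illustrative only, not the study's code).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of the items (columns) forming one dimension:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
# 80 hypothetical respondents answering the 5 items of one dimension.
responses = pd.DataFrame(rng.integers(1, 8, size=(80, 5)),
                         columns=[f"item_{i}" for i in range(1, 6)])

print(f"alpha = {cronbach_alpha(responses):.2f}")

# Item means: a mean above the scale midpoint of 4 is read as "necessary
# for an excellent information service".
means = responses.mean(axis=0)

# Convert means to ranks: highest mean = rank 1, lowest mean = rank k.
ranks = means.rank(ascending=False)
print(pd.DataFrame({"mean": means.round(2), "rank": ranks}))
```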
RESULTS

The results are organized into five parts. The first four sections address the question of whether there are gaps between academics’ expectations and librarians’ perceptions of these expectations. Specifically:

• Is there a gap in perceptions about which of the broad dimensions academics most expect in a quality information service?
• Do librarians accurately perceive the level of expectations academics hold for the indicators of quality as a group?
• Are there gaps between the expectations academics hold for particular indicators of quality and librarians’ perceptions of these expectations?
• Are there gaps between academics and librarians on those items which academics most strongly agree are present in a high-quality information service?
The fifth section highlights the main results of the first phase of the work and how these relate to the main survey.

Is There a Gap in Perception About Which of the Broad Dimensions Academics Most Expect in a Quality Information Service?

Each respondent was asked to allocate a total of 100 points among the five broad dimensions of quality given in Table 2 according to how important each feature was to them. It was emphasized to librarians that they should answer as they thought their users would. The scores were then examined to see the extent of congruence between the two groups.
TABLE 3
Importance of the Broad Dimensions in a Quality Information Service (Points Assigned)

                                                      Academics (n=80)    Librarians (n=30)
Feature                                                 M        SD         M        SD
Reliability: Ability to perform service
  dependably and accurately                           36.00    16.03      31.43    15.67
Responsiveness: Willingness to help customers
  and provide prompt service                          23.05     8.03      22.86     5.84
Assurance: Knowledge and courtesy of staff and
  ability to convey trust and confidence              16.82     7.66      20.93     8.48
Empathy: Caring, individualized attention             14.72     9.16      14.64     7.06
Tangibles: Appearance of physical facilities,
  equipment, personnel, materials                      9.35     7.13      10.14     6.48
Total                                                100.00               100.00
Academics and librarians agreed on the ranking of the five broad dimensions (see Table 3). Moreover, the mean number of points assigned by academics and librarians was virtually the same for three of the broad dimensions: tangibles, responsiveness, and empathy. The greatest difference between the academics and librarians was on the dimension reliability. This difference, however, is not statistically significant (p > .05). Librarians, on the other hand, gave more points to the assurance feature than did the academics. This difference, although smaller, is statistically significant.

The tendency for librarians to underestimate the importance of reliability to their users reappears in Figures 1 and 2. When asked to indicate the most important broad dimension, 76% of the academics chose reliability. Although more librarians selected this dimension than any other, the percentage (43%) is considerably lower than that of the academics. Figures 1 and 2 again suggest that librarians may overestimate the assurance feature. Twenty-five percent of the librarians thought assurance was the most important feature to academics, but only 3% of the academics selected it. There was agreement that responsiveness is the second most important feature (Figure 2) and tangibles the least important (Figure 3).
FIGURE 1
Most Important Broad Dimension
[Bar chart comparing the percentages of academics (n=74) and librarians (n=28) selecting each dimension: tangibles, responsiveness, reliability, empathy, assurance.]
FIGURE 2
Second Most Important Broad Dimension
[Bar chart comparing the percentages of academics (n=69) and librarians (n=27) selecting each dimension: empathy, responsiveness, tangibles, reliability, assurance.]
FIGURE 3
Least Important Broad Dimension
[Bar chart comparing the percentages of academics (n=64) and librarians (n=26) selecting each dimension: responsiveness, tangibles, reliability, empathy, assurance.]
Do Librarians Accurately Perceive the Level of Expectations Academics Hold for Indicators of Quality?

Although it is important to assess the extent of any gap across dimensions of quality, users do not experience or expect reliability, responsiveness, or tangibles as such; rather, they hold expectations for concrete indicators of these dimensions, for example, interlibrary loans which are delivered rapidly and online equipment which is maintained in working order. This section looks at the characteristics which academics and librarians expect to find in an excellent service and the relative strength of these expectations.

The academics agreed that each of the 93 items in the questionnaire was an indicator of an excellent information service. The lowest mean score was for “staff look professional in their appearance” (4.29), which was slightly above the midpoint of the 7-point scale. For most other items the agreement with the statements was quite strong, with only 2 additional items (“keep accurate profiles of past requests” and “staff know individual users and their needs”) having means lower than 5. The librarians accurately perceived that the academics would expect all of the indicators to be present in a quality service, with one exception, “journals are arranged by title separate from the books,” which had a mean of less than 4 (2.93). They did, however, give 7 additional indicators mean scores under 5; as indicated, the academics had only 3 items below 5.

The highest rating given by the academics was 6.66 on 2 items, “staff liaise . . . to ensure course related materials are available” and “staff will try to obtain item from another source.” The highest mean score for the librarians was 6.83 on the item “libraries notify users promptly of the availability of inter-library loan.” The other librarian scores that were higher than 6.66 were “staff do not mislead users on the costs of any special services provided for them,” “are willing to assist users having difficulties or queries using the catalogue,” “users are never made to feel embarrassed by staff when they are unable to use the library effectively,” and “staff have a broad understanding of . . . information services.”

Overall, academics gave higher means to a greater number of items (51) than did the librarians (42), but on 22 items the differences in means were less than .10. The librarians’ overall mean for the 93 items was 5.92 (SD = .68); the corresponding mean for academics was 6.01 (SD = .48). This difference is neither large nor significant (p = .27).

Are There Gaps Between the Expectations Academics Hold for Particular Indicators of Quality and Librarians’ Perceptions of These Expectations?

The differences in scores between librarians and academics for individual items show that for 51% (47 items) the differences were less than .25 of a scale point. The means for 22 of these items (24%) differed by less than .10. There were 19 indicators (20%) with differences equal to or greater than .50 (Table 4). Because of the exploratory nature of the study and the number of items in the scale, significance tests have not been reported for the specific items. When an independent two-sample means t test was applied to the 19 items with differences of .50 or greater, however, all but one of the differences were significant at the .05 level. It is recognized, of course, that it is not good practice to use individual t tests in this situation.
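The item-level comparison just described is a standard independent two-sample t test. A minimal sketch for a single item follows, using hypothetical scores rather than the study’s data; the Welch form, which does not assume equal group variances, is one reasonable choice, and the article does not say which variant the authors used:

```python
# Independent two-sample t test for one questionnaire item, with
# hypothetical scores (the study applied this to the 19 items showing
# mean differences of .50 or greater).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
academics = rng.integers(1, 8, size=82)    # 7-point Likert responses
librarians = rng.integers(1, 8, size=30)

# Welch's form (equal_var=False) does not assume equal group variances;
# the article does not specify which variant was used.
t, p = stats.ttest_ind(academics, librarians, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")

# As the authors note, one test per item inflates the family-wise error
# rate; a Bonferroni-style correction (.05 / number of tests) is a
# common remedy in this situation.
```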
TABLE 4
Indicators with Differences of .50 or Greater

Note: A = academics; L = librarians.
Items Rated More Highly by Academics

Most (14) of these 19 items were rated higher by academics than librarians. Of these 14, 7 were about some aspect of the provision of computer-based or electronic services. As there were only 11 items related to online services in the questionnaire, there is evidence that the librarians did not perceive the extent of the academics’ expectations for these services.

Items Rated More Highly by Librarians

Three of the 5 items which librarians rated 0.5 of a scale point or more higher than the academics are from the assurance feature. This is consistent with responses on the relative importance of the five broad dimensions of service (Table 3). Three of these items center around interactions with the users, that is, interpersonal or communication factors. The tendency for librarians to rate these areas higher than the academics was seen throughout the questionnaire, and many of the 42 items to which librarians gave higher means were related to direct or indirect aspects of user interaction, including user education programs.

It is particularly interesting to look at the origin of the items where there are sizeable differences in the mean scores of librarians and academics. Of the 29 items which academics scored 0.3 of a scale point or more higher than the librarians, 18 (62%) were items generated by the academics in the focus groups (see Table 4 for those with differences of .50 or more). Only 32 (34%) of the 93 questionnaire items were generated from the focus groups.

In summary, the academics agreed that each item was an indicator of an excellent service, and agreed quite strongly. The librarians accurately perceived that the academics would expect each indicator to be present in a high quality service. Although librarians slightly underestimated the strength of the academics’ agreement for many of the items and overestimated a few items, the differences for most indicators were small and not significant.

Are There Gaps Between Academics and Librarians on Those Items Which Academics Most Strongly Agree Are Present in a High Quality Information Service?

Although the findings suggest that there are some gaps between academics and librarians, not all of these may be critical to the provision of an excellent information service. Rather, what may be crucial is the extent to which there are gaps on those attributes of service which the academics rated highest. When the results for academics and librarians are compared for the 15 indicators rated highest by academics (Table 5), 7 have a difference of .10 or less, and 13 have a difference of less than .50. The 2 items with relatively large differences were “staff will try to obtain item from other source” (responsiveness) and “reference shelves tidy and in order” (tangibles).

To determine the extent to which there was agreement between academics and librarians on the relative ordering of the attributes, the means were converted to ranks. The item with the highest mean was assigned a rank of 1 and the lowest mean a rank of 93.
TABLE 5
Differences in Means and Ranks for Indicators Rated Most Highly by Academics

Note: A = academics; L = librarians; highest M = rank 1; lowest M = rank 93.

Items with relatively large differences in means are not necessarily accompanied by large differences in ranks. For example, “staff look professional” had one of the largest differences in means (-.81; see Table 5), yet the difference in rank was only 10 (93 vs. 83). Conversely, some indicators have large differences in ranks even though the gap in means is relatively small. For example, the difference in ranks for “users’ inquiries are confidential” is 61 (Table 6) even though the difference in means is less than .50.
These patterns are reflected in the moderate (.578) but significant (p < .05) correlation coefficient between the rankings of the two groups.
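A correlation between two sets of ranks of this kind can be computed as Spearman’s rho, that is, Pearson’s r calculated on the ranks; the article does not specify which coefficient was used. A minimal sketch with illustrative rank vectors:

```python
# Rank correlation between the two groups' orderings of the same items
# (illustrative rank vectors; the study reports .578 across 93 items).
from scipy import stats

academic_ranks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
librarian_ranks = [2, 1, 5, 3, 4, 8, 6, 10, 7, 9]

rho, p = stats.spearmanr(academic_ranks, librarian_ranks)
print(f"rho = {rho:.3f}, p = {p:.3f}")
```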
TABLE 6
Other Items with Rank Differences of 30 or Greater

Item                                        Academics   Librarians   Difference
Inquiries confidential                         71           10           61
Notify of policy changes                       59           16           43
Responses are polite                           54            8           46
Sufficient space on shelf                      77           37           40
Information desks located conveniently         64           27           37
Indexes on disk                                36           71.5         35.5
Orientation provided                           58.5         17           41.5
User friendly information system               65           31           34
User education sessions                        61.5         29           32.5
Services at standard promised                  18           50           32
Services to keep up-to-date                    39           69           30
Staff locate materials not on shelf            19           50.5         31.5

Note: Rank 1 = highest rated item; rank 93 = lowest rated item.
More importantly, in terms of congruence of views, there was close agreement between the two groups on the 15 items ranked highest by the academics (Table 5). There is a difference of 10 or fewer ranks for 9 of the items, and only 3 items have a difference of 20 or more places. The 3 items are “staff will try to obtain item from other source” (responsiveness), “on-line searches supplied quickly” (responsiveness), and “reference shelves tidy and in order” (tangibles).
Comparison of Results: First Phase and Main Study

As described in the Methods section, the academics and librarians who participated in the focus group discussions in the first phase completed the 61-item draft questionnaire. Because the sampling methods, and indeed the characteristics of the participants, were different from those in the main study described earlier, the results from the two administrations of the questionnaire were compared to determine whether there were commonalities in the findings which would add strength to the findings of the main survey. The results show that while there are some differences, there are striking similarities. For example, in both the first phase and the main study:
• Academics and librarians agreed that the items in the questionnaire were indicators of an excellent library;
• Academics’ and librarians’ scores on the broad dimensions tangibles and responsiveness were similar;
• A higher percentage of academics than librarians selected reliability as the most important dimension; and
• Librarians rated assurance higher than academics and underestimated the academics’ strength of expectation on responsiveness.
Although the number of items in the first phase and the main survey was different (61 and 93, respectively), making comparisons of responses on individual items problematic, there were, nonetheless, 5 items which showed mean differences of .50 or more in both phases:

• If item needed is not held staff will try to obtain from another source;
• Information services give quick answers to users’ inquiries;
• Subject matter and details of a user’s inquiries are confidential;
• Suggestions made by users are responded to in a polite manner; and
• Staff look professional.
The rankings of these and the other common items were similar for the two phases. The correlation coefficient between the ranking of academics and the ranking of librarians was .58. This was almost identical to the main study (.578), thus demonstrating again a moderate relationship between the rankings of the two groups.
Summary

Although caution about generalizing the results is necessary because of the judgmental methods used in some parts of the process of selecting respondents and the low return rate, the consistency of the results in phase 1 and the main study provides evidence that academics across a number of institutions and fields of study share similar expectations for the quality of information services. The findings also show that the librarians in the sample had an accurate perception of their users’ expectations across the broad dimensions that research has found to be important in determining service quality. In addition, librarians were able to identify many of the attributes of service which the academics most strongly agree are expected of a quality information service.

This does not mean that there are no gaps in perceptions about individual indicators of quality. In particular, librarians underestimated the level of expectation on items about computer-based services, responsiveness in obtaining material, timeliness of service, and the arrangement of materials. They overestimated academics’ expectations for aspects of service involving user and librarian relationships and for user education programs. Although many of these differences occurred for indicators of quality that the academics rated relatively low, the academics considered that each one was expected of an excellent information service.

DISCUSSION OF RESULTS

There are suggestions emerging in our ongoing research that the dimensions developed by Parasuraman and his colleagues may not hold for information services in a university library. Other researchers (e.g., Carman, 1990; Cronin & Taylor, 1992) report similar difficulties in applying the framework, with a suggestion that in some settings quality may even be unidimensional. In our results there are groupings of items relating to technological features of service. There is also evidence that some items which cluster around communication are rated relatively highly by academics and stand out from the other components (e.g., competence) as an aspect of the assurance dimension. User education, which is included in “communication,” may also form a separate dimension.

There is also a need to explore the reliability feature in further research. Given the importance of reliability in Table 3 and Figure 1, it is difficult to explain why the items in this feature rarely appeared among the items rated most highly by academics. When the responses for the 15 items rated highest are analyzed by the Parasuraman broad dimensions (Table 5), only one item from the reliability feature appears on the list even though there were a number of questions covering this dimension. The factor analyses of data which are currently under way in our study may well clarify the reliability feature.

In terms of the working definition of “information service” adopted for this study, there was some drift away from it in the discussions at the focus groups held with academics. This may be attributable to memory lapse or lack of clarity in the discussion leader’s introduction to the concept. It could also be an outcome of the fact that academics conceptualize “information service” differently, at least in their day-to-day understanding of an information service provided in a university setting. The 32 items added as a result of discussion to the original group of 61 were examined to see if they fell comfortably within the definition for the study: “those activities and outputs which facilitate the use of materials and information and which normally involve interaction between the user and the librarian” (see the section on methods). Of the 32 new items added by the academics to the bank of questions, 23 fell within the working definition of information service and required library staff-user interaction. The remaining 9 related to matters of access to the collections and physical arrangement of the collections.

These findings point up the difficulty of distinguishing conceptually between the “products” in the library and the “service” attached to them. Products and services may prove to be more of a continuum in some users’ minds, part of one entity, the university library service. It also suggests that the element of the definition which proposes that there is “normally” an interaction between librarian and user is basically sound, but that the interaction can just as easily be indirect (say, with completion of a request form sent to the library by mail) as direct (with face-to-face communication). In short, this outcome of the research suggests there is further work to be done to clarify the notion of “information service” when it is applied to libraries.
The number of items added by academics in the focus groups also emphasized for us the absolute necessity of going to the users themselves to determine the questions to be asked in an evaluation of any kind of service. It should be noted, however, that the academics did not rate most of the new items they added as among the most important overall. Indeed, almost all the indicators rated and ranked highest were generated originally by the research team.

A striking feature of the additional items was the number which referred to electronic information services. The research team has extensive experience in university libraries, but like the librarians in the study, we underestimated the importance of some computer-based information services to academic users. Certainly, we had included a number of items (4) which related to the provision of, for example, customized information products via online services, but we generally tended to de-emphasize the format of the information, or the output provided, in the way we phrased these items. This is consistent with the approach taken to such services in the early days of online databases in libraries, where the emphasis was on the information rather than the format in which it was provided. However, the strength of emphasis by users on electronic services and the phenomenon of direct user access to electronic networks in recent times is demonstrated in the results of the study. This may be partially an artifact of the period in which the study was carried out and the fact that the items were generated by groups of science and technology academics, but the preliminary factor analysis strongly suggests that from the academics’ perspective, electronic services are conceptually different from other information services.

IMPLICATIONS

In terms of what needs more attention in the design of information services provided for academics through their university library, there are several areas where there is a mismatch between provider and client views. As indicated, the providers of information services generally understand the range of needs and expectations of their users. There are, however, some issues to do with the failure of librarians to recognize the strength of certain expectations and the relative importance of particular features of an information service. This means that librarians need to pay more attention than hitherto to several aspects of service.

Librarians need to recognize the greater emphasis in the expectations of academics on responsiveness and attend to building their service profile so that the willingness of front-line staff to help users is apparent and responses to requests such as interlibrary loan are prompt, with adequate follow-through. Specifically, there is a need to pay attention to communicating with users about willingness to obtain material from other sources and to arranging for course-related material to be made available for students. The results suggest also that it is important for librarians to assess the extent of their emphasis on provision of electronic information services as electronic information services, rather than simply one more format within a range. Another aspect to be looked at (even though it did not fall strictly within the definition of information services in this study) is the response of academics who show they have a preference for arrangement of serials by title.

A further area which needs attention by practitioners relates to relationships between library staff and users.
The librarians’ responses suggested that they overestimate the importance of the assurance factor, that is, the extent to which they as service providers are knowledgeable and courteous and engender a feeling of trust and confidence in the service being provided. The users appear to be more focused on what they came to the library for than on the characteristics of the people who provide it. They are not quite so concerned with the competence of the librarian or with the politeness, friendliness, or impression of trustworthiness that staff give. Clearly, it is important to provide a degree of civility and competence in a service, but information services managers need to recognize this as a means and not an end in itself.

A further implication of this research for providers of information services is that the tangibles of a service, such as the physical surroundings and appearance of staff, are not, in the minds of users, central to the provision of an excellent service. This suggests that librarians should not be overly concerned with the physical appearance of the setting for information services or with staff looking “professional,” as these are not important factors when users make judgments about the quality of a service. There is perhaps one exception to be noted here, and that relates to the level of “housekeeping” in the reference area. Academics ranked tidiness of the reference shelves as a much higher priority than did librarians, pointing up the need to pay more attention to this feature of the information service.

CONCLUSION
The research set out to establish what, if any, differences exist between the expectations academics hold of information services provided by academic libraries and librarians’ perceptions of these expectations. The answer to the question was derived from analysis of data obtained by the questionnaire from academics and librarians and the exploration of four subquestions. The conclusion to be drawn is that, by and large, there is congruence between librarians and academics in what they view as characteristics of a quality information service. It is, however, necessary for librarians to recognize the points of departure between them and academics in the emphasis they place on particular aspects of the service they provide. In particular, librarians have a tendency to underestimate the importance to academics of the ability of an information service to perform the promised service dependably and accurately. They also underestimate expectations for some specific services such as electronic information services and the arrangement of materials on the shelves.

REFERENCES
Bolton, Ruth N., & Drew, James H. (1991). A multistage model of customers’ assessments of service quality and value. Journal of Consumer Research, 17, 375-384.
Brown, Stephen W., & Swartz, Teresa A. (1989). A gap analysis of professional service quality. Journal of Marketing, 53, 92-98.
Carman, James M. (1990). Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of Retailing, 66, 33-55.
Churchill, G.A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64-73.
Cronin, Joseph J., & Taylor, Steven A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56, 55-68.
Crosby, Lawrence A., Evans, Kenneth R., & Cowles, Deborah. (1990). Relationship quality in services selling: An interpersonal influence perspective. Journal of Marketing, 54, 68-81.
Dalton, Gwenda M. E. (1992). Quantitative approach to user satisfaction in reference service evaluation. South African Journal of Library and Information Science, 60, 89-103.
Dervin, Brenda. (1977). The development of strategies for dealing with the information needs of residents, Phase 2: Information practitioner study (ED 136791). Seattle: University of Washington.
Dervin, Brenda, & Nilan, Michael. (1986). Information needs and uses. Annual Review of Information Science and Technology, 21, 3-33.
Garvin, David A. (1984). What does “product quality” really mean? Sloan Management Review, 26, 25-31.
Hébert, Françoise. (1994). Service quality: An unobtrusive investigation of interlibrary loan in large public libraries in Canada. Library & Information Science Research, 16, 3-21.
Hernon, Peter, & McClure, Charles R. (1987). Unobtrusive testing and library reference service. Norwood, NJ: Ablex.
Murfin, Marjorie E., & Gugelchuk, Gary M. (1987). Development and testing of a reference transaction assessment instrument. College & Research Libraries, 48, 314-338.
Parasuraman, A., Berry, Leonard L., & Zeithaml, Valerie A. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67, 420-450.
Parasuraman, A., Zeithaml, Valerie A., & Berry, Leonard L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49, 41-50.
Parasuraman, A., Zeithaml, Valerie A., & Berry, Leonard L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64, 12-40.
Van House, Nancy A., Weil, Beth T., & McClure, Charles R. (1990). Measuring academic library performance: A practical approach. Chicago: American Library Association.
Webster, Duane E. (1987). Examining the broader domain. Journal of Academic Librarianship, 13, 79-80.
Zeithaml, Valerie A., Parasuraman, A., & Berry, Leonard L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York: Free Press.