Library & Information Science Research 24 (2002) 169 – 194
Assessing the quality of academic libraries on the Web: The development and testing of criteria

Hungyune Chao
Wilson Information Services Corporation, P.O. Box 102, New Market, MD 21774, USA
E-mail address: [email protected] (H. Chao).

This article is based on an unpublished PhD dissertation, Department of Library and Information Studies, SUNY-Buffalo, New York.
Abstract

This study develops and tests an instrument useful for evaluating the quality of academic libraries on the World Wide Web (Libweb). By consulting authoritative criteria used for traditional print resources and Internet/Web resources, a set of 68 essential indicators was generated and later reorganized and reduced to 16 criteria through factor analysis. After a survey of library experts, the instrument's reliability was verified by analysis of variance. Furthermore, a regression model considering both the respondents' demographics and the quality criteria was applied to identify 11 significant factors, which were later reduced to eight factors. These eight factors represent the most salient and nonredundant criteria. Two instrument forms are suggested for prospective users to evaluate academic Libweb quality and to construct and maintain a good site.

© 2002 Elsevier Science Inc. All rights reserved.
1. Introduction

Academic libraries provide information resources and services to students, faculty, and staff in an environment that supports learning, teaching, and research. The rapid development of information technology is transforming library service at a spectacular rate. By connecting to the Internet and World Wide Web, the academic library expands its access to information resources, some of which appear as a digital library. Despite the continual increase in the number of academic libraries on the Web (Libweb; a total of 2,078 U.S. academic Libweb sites existed on August 31, 2001), there is a lack of authoritative guidelines or criteria to help
library professionals determine how the quality of an academic library Web site can be measured and improved to serve patrons better.

Both resource providers and users face the problem of determining the hallmarks of a well-designed, useful resource on the Web, and Web-site designers continue to deliberate on how to unite content and form effectively to create useful Web sites (Abels, White, & Hahn, 1997). In particular, the existence of the Internet as a massive information repository opens up a new set of challenges for academic librarians wishing to provide quality digitized information sources and reliable Web sites to their constituencies. Because there are no generally accepted guidelines or criteria to assist Web-site administrators in assessing existing Web sites, it is difficult for Webmasters to incorporate criteria during the planning/design phase of site development (Internet Business Network, 1995; Stoker & Cooke, 1995; Tillman, 1997). Not surprisingly, given the anarchistic nature of the Internet, there are few, if any, quality controls for Web information and design, let alone for academic Libweb sites.

Little attention has been given to the overall assessment of academic Libweb site quality. The existing studies are limited to the evaluation of general Internet sources, general library resources on the Web, and federal government Web sites. Although a few studies have focused on academic libraries, their scope has been confined to online reference materials, Association of Research Libraries (ARL) home pages, and particular groups of users in academic libraries. There is no proper framework to allow experts to evaluate the quality of academic Libweb sites. In contrast to general Web sites containing commercial and noncommercial information, academic Libweb sites should receive stricter audits and provide higher quality to ensure the credibility of their information. It is therefore pertinent that some intrinsic criteria of the academic library be incorporated in the development of a Libweb evaluation instrument.

Although many studies have proposed evaluation criteria for information sources and Web sites, few have provided guidelines to rate the quality of library Web sites, and no study has addressed the quality of academic Libweb sites. The current study develops and tests an evaluation instrument that library professionals can use to assess the quality of academic Libweb sites. More specifically, this study explores the following questions: What are the criteria, according to experts, for measuring the quality of academic Libweb sites? Can academic Libweb experts apply the established evaluation criteria to discern differences in academic Libweb site quality? Which criteria are most highly related to academic Libweb site quality? This research differs from previous studies in its investigation of the criteria influencing academic Libweb site quality; the investigation was conducted and tested through (1) soliciting academic Libweb experts' opinions and (2) performing a systematic statistical analysis and validation of the instrument.
2. Literature review

Literature on Web theory is scant for two reasons: First, the Internet is a relatively new area and many people are finding their way around the Web for the first time. Second, those who are at the frontlines of the Web, the design technologists, are not typically inclined to
reflect on their practice, search for relevant theory, and write about it (Day, 1997). The following sections discuss the major categories of relevant literature.

2.1. The traditional print perspective

Traditional models for evaluating print-on-paper library services are chiefly based on whether the library and its resources and tools can assist users in locating the items or information they need (Lancaster, 1997). Katz's (1992) basic criteria (purpose, authority, scope, audience, cost, and format) were expanded and adapted from print reference sources to evaluate Internet resources (Alexander & Tate, 1996a, 1996b; Brandt, 1996; Dickstein, Greenfield, & Rosen, 1997; Garlock & Piontek, 1996; Grassian, 1997; Hinchliffe, 1997; Pratt, Flannery, & Perkins, 1996; Rettig, 1996; Stoker & Cooke, 1995; Tillman, 1997). However, the digital library is sufficiently different from the traditional print-on-paper library to present a new set of parameters relating to the evaluation of its use (Lancaster, 1997). Cooper (1997) argued that electronic aids reach a more diverse group and present new questions when initiating guide design and library policy. Smith (1997) pointed out that some new criteria (authority, content, currency, and ease of use) should be incorporated in the Internet environment. Saracevic (2000) suggested that traditional library, information retrieval, and human-computer interface criteria should be adapted to digital libraries.

2.2. The human-computer interface perspective

The human-computer interface evaluation literature is grounded in empirical research. Strong analogies can be drawn between the design of computer interfaces, especially those associated with hypertext systems, and the design of Web sites (Botafogo, Rivlin, & Shneiderman, 1992; Instone, 1996; Mahajan & Shneiderman, 1995; Nielsen, 1993, 1995; Nielsen & Sano, 1994; Shneiderman, 1996; Shum, 1996; Stein, 1997). The quality of hyperdocuments depends on factors such as usability, readability, maintainability, correctness, integrity, testability, and consistency.

2.3. The design-based perspective

The design principles are derived primarily from Web and HTML features and the designers' personal preferences (Abels et al., 1997). The principles focus on content and form-related features (Alexander & Tate, 1996b; Caywood, 1996; Ciolek, 1996; Clyde, 1996; Eschenfelder, Beachboard, McClure, & Wyman, 1997; Grassian, 1997; Harris, 1997; Law, 1996; McMurdo, 1995, 1998; Skov, 1998; Smith, 1997). Internet quality can be measured with a checklist of credibility, accuracy, reasonableness, support, fitness, content, authority, currency, navigation, ease of access, style, performance, and so forth. Some universities and organizations have begun to publish their own Web-page standards (Berkeley Digital Library SunSITE, 2000; Lynch & Horton, 1998). The rating systems (Argus Associates, 1998) are somewhat useful. However, they are overly subjective and do not indicate any grounding in the principles of organization and design (Stover & Zink,
1996). McKinley/Magellan (http://www.mckinley.com) and Point Communication (http://www.lycos.com) typically focus on such indicators of quality as how "fun or entertaining" a Web site appears to be, rather than the instructional value or validity of the content within that site (Oliver, Wilkinson, & Bennett, 1997).

2.4. The user-based perspective

The incorporation of users in developing the Internet, for the network as a whole and for Web sites in particular, has been recognized recently. Muller (1996) reported on the multicultural constituencies accessing the Internet. Shneiderman (1996) noted that the initial questions facing Web-site designers relate to identifying the users and the user tasks. Wilson (1996) reported on a company's plans to evaluate the performance of Web sites from end-user response times. Wyman, McClure, Beachboard, and Eschenfelder (1997) designed analytical tools from system-based techniques and user feedback to assess federal Web sites. Day (1997) suggested that the quality of a Web site is "customer focused." Bell and Tang (1998) conducted a survey and demonstrated that Web sites rate highly in terms of ease of access, content, and structure, but score poorly for unique features.

2.5. Evaluating the quality of library Web sites

Csir (1998) evaluated six Web sites that provide access to online reference materials at academic and public libraries, concentrating on their currency, accuracy, relevancy, structure, presentation, maintenance, and features. Stover and Zink (1996) reviewed 40 Web home pages to assess their quality of Web page design and to uncover trends, patterns, and anomalies. King (1998) examined the home pages of all 120 libraries in the ARL to compare design similarities and differences. Abels et al. (1997) identified user-based design criteria in Web sites and found that users rate content and ease of use as more important than appearance.

2.6. Evaluating the quality of Internet sources

Wilkinson, Oliver, and Bennett (1997) conducted a project to develop a set of criteria and standards for evaluating the quality of general Internet information sources by applying the Delphi method and the following six phases: (1) identification of criteria, (2) consolidation of criteria, (3) evaluation of criteria, (4) development of an instrument, (5) field test, and (6) dissemination of products.

2.7. Evaluating the quality of federal Web sites

McClure and Wyman (1997) explored the degree to which federal agencies' Web sites meet the needs of their constituencies. The purpose of their project was to establish analytical tools based on both technical criteria and user feedback by which federal Web-site administrators may assess the quality of their Web sites.
3. Objectives

This study's objectives are to: (1) identify a set of criteria useful for assessing the quality of academic Libweb sites; (2) use these criteria to develop an instrument for evaluating the quality of academic Libweb sites; (3) test whether the instrument can discriminate among academic library Web sites of differing quality; (4) explore which criteria have a significant relationship with academic Libweb site quality; and (5) suggest an operative instrument for users to evaluate, construct, and maintain a quality site.
4. Procedures

This study was executed in three stages. In the first stage, the criteria identified from the literature review represented a broad cross-section of guidelines on both Internet and library Web sites. These criteria were also integrated with the mission and function of the academic library in mind. The second stage consisted of seven phases, as follows:
Creation of the preliminary questionnaire. The set of 70 essential criteria for evaluating academic Libweb quality generated in the previous stage was formatted to develop the preliminary questionnaire (Chao, 2001, Appendix B). It was designed to allow a respondent to evaluate the importance of each of the 70 criteria using a five-point scale (not, slightly, moderately, very, or absolutely important).

Definition and identification of the Libweb experts. Academic Libweb experts are defined as individuals or groups who, having attained positions of authority and responsibility, are in charge of building or maintaining a Web site. Their titles, which vary across different academic institutions, include librarian, Webmaster, Web editor, Web advisory team, and others. A total of 886 academic Libweb experts from six regions (West, Mountain, and Plains states; Southwest, Midwest, and Great Lakes states; the Southeast; and the Northeast) listed on Berkeley Digital Library SunSITE (http://sunsite.berkeley.edu/libweb) were identified. These experts were then systematically divided into two groups: each expert was assigned a number in a straight numerical sequence, with the odd-numbered experts assigned to Group 1 and the even-numbered experts assigned to Group 2.

Pretesting of the instrument. The preliminary questionnaire was distributed electronically and tested on 16 SUNY Libweb sites (Northeast Region, Berkeley Digital Library SunSITE). There was no attempt to analyze the data collected from the pretest; its objective was to obtain dependable advice with respect to the clarity of the criteria and to correct possible misinterpretations of the questions.

Administration of the preliminary survey to Group 1. A modified questionnaire (Chao, 2001, Appendix D) of 68 criteria was distributed electronically to the 443 Libweb
experts in Group 1 to obtain their opinions on the importance of the criteria for assessing academic Libweb quality. In addition, the Libweb experts were requested to provide a list of academic Libweb site names (at least one) for each of the Libweb quality levels: low, adequate, and high. There were 316 responses after six subsequent follow-up requests.

Analysis of the results of the preliminary survey. All data in the study were analyzed using the Statistical Package for the Social Sciences for Microsoft Windows, version 8.0 (SPSS, Inc., 1998). The descriptive statistics (Chao, 2001, Table 7) for the Libweb quality criteria attained through the survey are presented in ranked order by mean scale scores. There were 148 missing values out of the 21,488 possible data points (68 criteria × 316 respondents). Missing data represented only 0.7% of the dataset, appeared to be randomly distributed across the criteria, and were replaced by the mean scores for the respective criteria in which the missing data occurred. This way, the sample size for the factor analysis was not reduced; the minimum rule is to have at least five times as many cases as there are variables (Hair, Anderson, Tatham, & Black, 1995). Moreover, only 32 of the total 2,278 (= C(68,2)) correlation coefficients (or 1.4%) between pairs of criteria were greater than 0.50 in absolute value. Also, Bartlett's Test of Sphericity was significant (χ² = 11,127.514, df = 2,278; p = .01). (Bartlett's test examines the hypothesis that the correlation matrix is an identity matrix and requires that the data be sampled from a multivariate normal population; if the null hypothesis that the population correlation matrix is an identity matrix cannot be rejected, and the sample size is reasonably large, the use of multivariate analysis needs to be reconsidered.) Therefore, the multicollinearity problem was minimized. The data were then submitted to a principal components analysis, and the extracted factors with eigenvalues ≥ 1 were selected as the factorized criteria. Of the 68 criteria, 68.3% of the variation could be explained by these 18 extracted factors (Chao, 2001, Table 8). Next, the initial factors were rotated using the varimax criterion with Kaiser normalization. (The varimax rotation maximizes the variance of the squared factor loadings on each factor, thereby minimizing the number of variables with high loadings on each factor; in this manner, it facilitates the interpretation of each factor.) The rotated component matrix (Chao, 2001, Table 9) was constructed to categorize the criteria. A small computational sketch of this factor-extraction and rotation step is given at the end of this stage's description.
Creation of the final instrument. A minimum proportion of variance rule (setting the selected factor loading at a minimum of 0.50) was applied to determine which criterion qualified to be assigned to which extracted factor in the rotated component matrix. Because it was possible that some important quality criteria did not load onto any extracted factor, the researcher decided that any quality criterion that received a very high rating (mean score ≥ 4) in the preliminary survey would not be automatically discarded. Because of similar dimensions suggested by the Libweb experts in the survey feedback, two pairs of the 18 factors were combined to form the final instrument of 16 criteria (Chao, 2001, Appendix I). Each of these final 16 factors was assigned a proxy criterion term, increasing their usefulness to the Group 2 experts (see Table 1).

Table 1
A designation list for the 16 factors

Q1, Presentation: Suitable background, color, font, icon, image, size, layout, and text. Organized and consistent scheme, reliable links, and concise home page.
Q2, Integration: Convenient e-mail address to a responsible party, and links to the library's/parent institution's home pages. Inclusion of library's/institution's names and logos, and online forms for request or feedback.
Q3, Speed: Quick connection and delivery, minimal use of large graphics and bright color, and easy access to links.
Q4, Information about links: Pertinent instructions or warning statements to file/document types.
Q5, Heading and titles: Clear, coherent, and concise headings; clearly titled screen.
Q6, Institutional information: Comprehensive, current, and accurate information relevant to the institution's learners and faculty.
Q7, Reliability: Secured private interaction. Credible and appropriate sources and documents.
Q8, Search capability: Applicable index/table of contents and various search engines.
Q9, Compatibility: Consistent text and graphics in different browsers. Options available for various features, such as text-only view.
Q10, Navigability: Clear site map, hypermedia index, and short number of clicks to online catalog, reference tools, and databases.
Q11, Inclusion of special collections and "what's new": Original materials, institutional archives, and news/events.
Q12, Facilitation and help: Available "Help" information; stable URL or hot-link to the new URL.
Q13, Content: Up-to-date library catalogs, library services, research tools, resources and collections, online request forms, faculty partnerships, and "about the library" pages.
Q14, Graphic design: Limited use of blinking, italics, other attention-getting devices, and extraneous navigational aids (e.g., back and forward buttons, history lists).
Q15, Authority: Knowledgeable site Webmaster/maintainer.
Q16, Services: Accessible remote library services (e.g., library instruction, reference assistance, and document delivery).

Identification of three Libweb sites of low, adequate, and high quality. The respondents in the preliminary survey were asked to recommend a Libweb site of low, adequate, and high quality. Based on the responses, the Libweb site in each category with the highest frequency (Chao, 2001, Table 14) was selected as the test site that the Libweb experts in the second survey would be asked to evaluate (the selected high-, adequate-, and low-quality sites were suggested eight, five, and three times, respectively, among the one high-, five adequate-, and two low-quality sites named by the respondents). Based on the respondents' profile, a panel of six academic Libweb experts was selected to evaluate the quality of the three Libweb sites. The panel consisted of four women and two men; three were from universities, two from four-year colleges, and one from a two-year college; four of these experts were from public institutions and two were from private institutions. All the experts had served at least three years on the Libweb. Without prior knowledge of which Libweb site was of low, adequate, or high quality, the panelists were asked to evaluate each of the sites using the original 68 criteria and to summarize the quality of each of the three sites as low, adequate, or high. The validation (Chao, 2001, Table 16) of these three representative academic Libweb sites' quality was ultimately based on a majority of the experts' opinions (five, six, and five of the six experts, respectively, agreed on the corresponding low-, adequate-, and high-quality Libweb sites assigned them to evaluate).
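The factor-analytic steps described above (mean imputation of missing ratings, principal components extraction with the Kaiser eigenvalue ≥ 1 rule, varimax rotation, and the 0.50 loading cutoff) can be illustrated with a small computational sketch. The code below is not the study's SPSS procedure; it is a minimal Python analogue, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (standard iterative SVD form)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T
            @ (rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        if s.sum() - var < tol:
            break
        var = s.sum()
    return loadings @ rotation

def factor_criteria(ratings, min_eigenvalue=1.0, min_loading=0.50):
    """ratings: respondents x criteria matrix of importance scores (NaN = missing)."""
    # 1. Replace missing values with the mean of the respective criterion.
    filled = np.where(np.isnan(ratings), np.nanmean(ratings, axis=0), ratings)
    # 2. Principal components of the correlation matrix; keep eigenvalues >= 1 (Kaiser rule).
    corr = np.corrcoef(filled, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals >= min_eigenvalue
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    # 3. Varimax rotation, then assign each criterion to the factor on which it
    #    loads at >= .50 (the minimum-proportion-of-variance rule in the text).
    rotated = varimax(loadings)
    assignment = {
        j: int(np.argmax(np.abs(rotated[j])))
        for j in range(rotated.shape[0])
        if np.abs(rotated[j]).max() >= min_loading
    }
    return eigvals[keep], rotated, assignment

# Illustration with synthetic data: 316 respondents rating 68 criteria on a 1-5 scale.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(316, 68)).astype(float)
eigvals, rotated, assignment = factor_criteria(ratings)
print(f"{eigvals.size} factors retained; {len(assignment)} criteria assigned")
```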
The third stage contained three phases:

Use of the instrument on the three Libweb sites of low, adequate, and high quality. Because it was not feasible to request each of the 443 Group 2 experts to evaluate each of the three academic Libweb sites thoroughly, the Group 2 experts were further split into three subgroups. Systematic sampling was again applied, assigning each alternating expert to subgroup 1, 2, or 3. Each of the three subgroups evaluated only one representative academic Libweb site. To magnify the experts' perceived differences, an 11-point scale (F, D, C-, C, C+, B-, B, B+, A-, A, A+) from failing to excellent was used to evaluate each site's overall quality (Q00). There were 326 responses after seven follow-up requests. The response rates were 69.6%, 77.0%, and 74.2% for subgroups 1, 2, and 3, respectively.

Descriptive results of the final survey. The descriptive statistics (Chao, 2001, Table 19) for assessing Libweb quality showed a total of 82 missing values out of the possible 5,216 data points (16 criteria × 326 respondents). Missing data represented 1.57% of the dataset and were spread over the criteria. Because a subsequent analysis of variance (ANOVA) was applied to measure the quality differences among these three Libweb sites, the cases with missing data were excluded in order to maintain an objective analysis. For the same reason, the regression analysis using the aggregated data also excluded those cases with missing data.

Analysis of the final survey. The analysis of the final survey data was designed to answer the two questions discussed in the following paragraphs.

Question 1: Could the instrument discriminate among the quality of three different academic Libweb sites? If so, did these three quality levels of academic Libweb sites rank consistently with respect to each other? By applying a one-way ANOVA to the final survey data, it can be concluded that there were indeed differences among the three quality levels (see Table 2). Except for criterion Q02 (Integration), all criteria resulted in significant F statistics: quality differences did exist among the academic Libweb sites. Furthermore, with the exception of criterion Q03 (Speed), all criteria held a consistent order of sample mean values (i.e., X̄1 < X̄2 < X̄3). This suggested that, in general, the low-quality level was inferior to the adequate-quality level, which, in turn, was inferior to the high-quality level of an academic Libweb site.
Table 2
Decisions based on the one-way ANOVA

Criterion | Group 1 mean | Group 2 mean | Group 3 mean | Order of means | F statistic | Significant F | Support/not support
Q00, Overall quality | 5.6373 | 7.3274 | 8.9633 | X̄1 < X̄2 < X̄3 (consistent) | 92.6108 | .0000* | Support
Q01, Presentation | 3.4466 | 3.5965 | 4.2844 | X̄1 < X̄2 < X̄3 (consistent) | 34.8993 | .0000* | Support
Q02, Integration | 3.6893 | 3.7965 | 3.8716 | X̄1 < X̄2 < X̄3 (consistent) | 1.1207 | .3273 | Not support
Q03, Speed | 3.6505 | 4.2456 | 4.0642 | X̄1 < X̄3 < X̄2 (inconsistent) | 12.7024 | .0000* | Support
Q04, Information about links | 2.8137 | 3.2456 | 3.6698 | X̄1 < X̄2 < X̄3 (consistent) | 26.8840 | .0000* | Support
Q05, Heading and titles | 3.3465 | 3.8772 | 4.2661 | X̄1 < X̄2 < X̄3 (consistent) | 30.4901 | .0000* | Support
Q06, Institutional information | 3.0594 | 3.6964 | 3.9439 | X̄1 < X̄2 < X̄3 (consistent) | 24.2902 | .0000* | Support
Q07, Reliability | 2.9200 | 3.6542 | 3.9151 | X̄1 < X̄2 < X̄3 (consistent) | 37.6960 | .0000* | Support
Q08, Search capability | 2.7600 | 2.9737 | 4.2710 | X̄1 < X̄2 < X̄3 (consistent) | 70.3342 | .0000* | Support
Q09, Compatibility | 2.9588 | 3.5044 | 4.0762 | X̄1 < X̄2 < X̄3 (consistent) | 47.4055 | .0000* | Support
Q10, Navigability | 3.1262 | 3.4825 | 4.2130 | X̄1 < X̄2 < X̄3 (consistent) | 33.8829 | .0000* | Support
Q11, Inclusion of special collections | 1.8725 | 3.1071 | 4.0000 | X̄1 < X̄2 < X̄3 (consistent) | 106.3128 | .0000* | Support
Q12, Facilitation and help | 2.4554 | 2.8761 | 3.9159 | X̄1 < X̄2 < X̄3 (consistent) | 74.5104 | .0000* | Support
Q13, Content | 2.6602 | 3.8684 | 4.3832 | X̄1 < X̄2 < X̄3 (consistent) | 90.2459 | .0000* | Support
Q14, Graphic design | 3.4257 | 3.6991 | 4.3211 | X̄1 < X̄2 < X̄3 (consistent) | 25.5319 | .0000* | Support
Q15, Authority | 3.1919 | 3.7358 | 4.0095 | X̄1 < X̄2 < X̄3 (consistent) | 27.0971 | .0000* | Support
Q16, Services | 2.5243 | 3.4018 | 3.8426 | X̄1 < X̄2 < X̄3 (consistent) | 48.0780 | .0000* | Support

* Statistically significant with a significance level of .01.
Though not statistically significant in its sampling F statistic, criterion Q02 (Integration) still kept a consistent order of sample mean values. This implied that, when used to measure the "Integration" quality of an academic Libweb site, the final instrument might not be able to differentiate clearly between the low, adequate, and high quality levels of academic Libweb sites. For the criterion Q03 (Speed), though statistically significant in its sampling F statistic, the order of sample mean quality scores was inconsistent (i.e., X̄1 < X̄3 < X̄2). This indicated
that, when used to measure the "Speed" quality of an academic Libweb site, the final instrument was able to reliably discern the quality differences of academic Libweb sites; however, the high-quality site may not always rank first relative to the low- and adequate-quality sites on the "Speed" criterion. The consistent order was simply based on the sample mean scores of the low-, adequate-, and high-quality groups. A post hoc (Scheffé) test (Chao, 2001, Table 22) was applied to clarify whether this assertion on consistency was reasonable. (A post hoc test allows for multiple comparisons between means; the Scheffé test performs simultaneous joint pair-wise comparisons for all possible pair-wise combinations of means.) Generally, the post hoc test (42 of the 51 cases) substantiated the consistency declaration.

It was desirable to explore further whether the respondents' demographics, in conjunction with subgroups 1, 2, and 3, had any interactive effects in discerning the low, adequate, or high quality levels of Libweb sites with respect to the overall quality and the individual quality criteria. In general (69 of the 80 cases), the two-way interaction analysis (Chao, 2001, Table 23) disclosed that the respondents' characteristics (gender, affiliation, institution, experience, and frequency) in conjunction with the subgroups (1, 2, and 3) carried no significant effects on distinguishing the overall quality of academic Libweb sites. However, to base the quality of Libweb sites solely on the overall quality was too broad a generalization to be an effective quality assessment tool. Therefore, the 16 quality criteria provided another legitimate qualification, supplementary to the overall quality, for evaluating the quality of academic Libweb sites.

Question 2: Which criteria (Q01–Q16) were most highly related to the overall quality (Q00) of academic Libweb sites? This question addresses two issues: First, in terms of the potential usefulness of the instrument for diagnostic purposes, which criteria would require special consideration in the construction and maintenance of a high-quality Libweb site? Second, given the potential commonality among the criteria, which subsets of criteria would be the most salient and nonredundant indicators of quality?

To address the first issue, the overall quality scores were regressed onto each of the criteria scores in a series of simple regression analyses. Prior to conducting the simple regression analyses, the association of the 16 quality criteria with the overall quality was reviewed (see Table 3). All 16 essential site-quality criteria were linked to the overall quality (Q00) with a medium (0.47 for Q03, Speed) to high (0.74 for Q13, Content) correlation coefficient. Each of the simple regression models and every marginal contribution (regression coefficient) of the quality criteria in the 16 regression models was statistically significant (see Table 4). The R² scores represent the percentage of the overall quality score's variation accounted for by each quality criterion's scores. Hence, the descending order of the criteria from Q13 (Content) to Q03 (Speed) in Table 4 provided useful diagnostic information as to which criterion needed to be attended to first in order to construct and maintain a better-quality Libweb site.
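The two analyses just described, the one-way ANOVA comparing the three quality groups and the simple regressions of overall quality on each criterion, can be sketched as follows. This is an illustrative Python sketch on synthetic data and is not the study's SPSS output; the subgroup sizes merely echo the usable cases later reported in Table 13.

```python
import numpy as np
from scipy import stats

# Synthetic ratings stand in for the survey data (illustrative only).
rng = np.random.default_rng(0)
low      = rng.normal(2.7, 0.8, 103)   # subgroup 1 rated the low-quality site
adequate = rng.normal(3.9, 0.8, 114)   # subgroup 2 rated the adequate-quality site
high     = rng.normal(4.4, 0.8, 109)   # subgroup 3 rated the high-quality site

# Question 1: one-way ANOVA across the three quality levels for one criterion.
F, p = stats.f_oneway(low, adequate, high)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# Question 2 (first issue): simple regression of overall quality on one criterion.
criterion = np.concatenate([low, adequate, high])                  # e.g., Q13 scores
overall = 1.4 * criterion + rng.normal(0.0, 1.2, criterion.size)   # Q00 scores
slope, intercept, r, p_reg, se = stats.linregress(criterion, overall)
print(f"simple regression: R^2 = {r**2:.3f}, B = {slope:.3f}, p = {p_reg:.4f}")
```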
Table 3
The correlation matrix of the 16 quality criteria and the overall quality (lower triangle)

     Q00   Q01   Q02   Q03   Q04   Q05   Q06   Q07   Q08   Q09   Q10   Q11   Q12   Q13   Q14   Q15   Q16
Q00  1.00
Q01  0.70  1.00
Q02  0.52  0.50  1.00
Q03  0.47  0.43  0.31  1.00
Q04  0.58  0.43  0.35  0.31  1.00
Q05  0.66  0.51  0.47  0.46  0.44  1.00
Q06  0.70  0.48  0.45  0.43  0.38  0.58  1.00
Q07  0.66  0.46  0.40  0.38  0.50  0.59  0.63  1.00
Q08  0.63  0.47  0.27  0.28  0.46  0.54  0.48  0.55  1.00
Q09  0.61  0.45  0.32  0.21  0.48  0.36  0.43  0.47  0.51  1.00
Q10  0.63  0.55  0.38  0.39  0.46  0.59  0.47  0.48  0.57  0.44  1.00
Q11  0.62  0.29  0.16  0.30  0.40  0.39  0.49  0.45  0.53  0.53  0.34  1.00
Q12  0.61  0.42  0.34  0.23  0.42  0.39  0.42  0.40  0.60  0.51  0.43  0.62  1.00
Q13  0.74  0.44  0.41  0.37  0.44  0.60  0.62  0.57  0.48  0.41  0.46  0.59  0.51  1.00
Q14  0.64  0.60  0.47  0.42  0.33  0.44  0.43  0.45  0.38  0.39  0.49  0.33  0.35  0.43  1.00
Q15  0.62  0.49  0.43  0.36  0.38  0.45  0.47  0.52  0.40  0.49  0.42  0.45  0.44  0.57  0.53  1.00
Q16  0.69  0.49  0.41  0.31  0.46  0.46  0.58  0.51  0.42  0.45  0.39  0.58  0.52  0.63  0.45  0.53  1.00

All correlation coefficients are statistically significant with a two-tail significance level of .01. Bold values in the original table represent a correlation coefficient ≥ .50.
Table 4
The regression analysis for overall quality with each of the 16 quality criteria

Criterion | R²(a) | F statistic | Significant F* | B (regression coefficient)(b) | t statistic*
Q13 | .54085 | 376.94209 | .0000 | 1.373488 | 19.415
Q01 | .49539 | 316.12002 | .0000 | 1.808071 | 17.780
Q06 | .48542 | 298.09354 | .0000 | 1.548419 | 17.265
Q16 | .47978 | 294.20324 | .0000 | 1.363815 | 17.152
Q05 | .44127 | 252.72236 | .0000 | 1.576386 | 15.897
Q07 | .44018 | 242.96558 | .0000 | 1.579190 | 15.587
Q14 | .41435 | 225.69809 | .0000 | 1.399488 | 15.023
Q08 | .40265 | 213.67783 | .0000 | 1.163868 | 14.618
Q10 | .39204 | 206.99825 | .0000 | 1.291464 | 14.387
Q15 | .37953 | 187.17295 | .0000 | 1.585887 | 13.681
Q11 | .37912 | 191.73707 | .0000 | 1.008591 | 13.847
Q12 | .37267 | 188.31731 | .0000 | 1.257049 | 13.723
Q09 | .37025 | 182.84662 | .0000 | 1.465261 | 13.522
Q04 | .33714 | 161.73701 | .0000 | 1.421756 | 12.718
Q02 | .26733 | 117.12651 | .0000 | 1.294378 | 10.823
Q03 | .22163 | 91.68556 | .0000 | 1.141510 | 9.575

(a) Individually, the percentage of the overall quality score's variation that can be attributed to each quality criterion's scores.
(b) The marginal contribution that each quality criterion's score can add to or deduct from the overall quality score.
* All statistics are statistically significant with a significance level of .05.
Fig. 1. The structure of a regression model.
For the second issue, the overall quality scores were regressed onto all of the criteria scores using multiple regression analyses. This issue concerned which criteria were the most essential and nonredundant indicators of Libweb quality. Aside from the extracted factors that relate to academic Libweb quality, the characteristics of the experts being surveyed may also reveal some influence of demographics on academic Libweb quality (11 of 80 cases in the two-way ANOVA). Figure 1 presents the structure of the testing procedure on the relationships among the overall quality, the respondents' demographics, and the quality criteria. Prior to reporting the results of the regression analysis, the correlation of the respondents' demographics with the overall quality was examined (see Table 5).
Table 5
The correlation coefficients of the respondents' demographics with the overall quality

 | Gender | Affiliation | Institution | Experience | Frequency
Q00 (r) | 0.1331 | 0.0944 | 0.0348 | 0.2189 | 0.0883
Significance probability | 0.017* | 0.090 | 0.532 | 0.000* | 0.113

Bold values in the original table represent a low association between Q00 and demographics.
* Statistically significant with a significance level of .05 (two-tail).
The correlation coefficients (|r| ≤ 0.25) revealed a low association between the overall quality (Q00) and the respondents' demographics, even though the correlation coefficients of gender and experience were statistically significant. The intercorrelation of the 16 quality criteria and the overall quality should also be examined (see Table 3). The first coefficient column shows a moderate to high correlation (0.47–0.74) between the 16 quality criteria and the overall quality (Q00). For intercorrelation among the 16 quality criteria themselves, only 32 of the 120 (= C(16,2)) correlation coefficients fell between 0.50 and 0.63 (moderate correlation), while the majority (73%) of the correlation coefficients were under 0.50. This evidence supported the appropriateness of multiple regression in incorporating those independent variables (quality criteria) with low correlation to each other yet high relation to the dependent variable (overall quality), thus abating the multicollinearity problem.

4.1. Step 1

The criterion Q00 was regressed onto all demographics, which were entered in a hierarchical sequence. First, it was assumed that the assessment of the overall quality of an academic Libweb site was a function of the respondents' demographics. A hierarchical regression model was conducted by entering the following factors: gender, affiliation, institution, experience, and frequency, respectively. This way it could be observed whether individual demographic characteristics accounted for any additional variance in the overall quality scores. The regression analyses (Chao, 2001, Table 30) indicated that the respondents' demographics accounted for only a trivial percentage (1.8%–8.7%) of the variation of the overall quality (Q00), despite the fact that the t statistics were construed as "significant." When the relationship model turned trivial, the significant regression coefficients could be disregarded.

4.2. Step 2

In addition to the demographics, Q00 was then regressed onto all factor (criteria) scores, which were entered stepwise. Again, it was assumed that the overall quality of an academic Libweb site could be related to the set of 16 quality criteria in addition to the respondents' demographics. (The stepwise method adds and removes individual variables according to preset criteria, here F significance levels of 0.05 to enter and 0.10 to remove a variable, until a model is reached in which no more variables are eligible for entry or removal.) The regression results (Chao, 2001, Table 31) obtained by the stepwise method uncovered the following:

1. The statistically significant F statistics showed that the 11 variables in the equation were indeed significantly related to the overall quality.
2. The high R² score indicated that 85.3% of the variation in the overall quality could be attributed to the 11 significant variables in the equation.

3. The regression model (Model 1) could be finalized as follows:

Q00 (overall quality) = 0.5278 Q01 (Presentation) + 0.1914 Q04 (Information About Links) + 0.1823 Q05 (Heading and Titles) + 0.2423 Q06 (Institutional Information) + 0.2197 Q08 (Search Capability) + 0.2938 Q09 (Compatibility) + 0.1220 Q11 (Special Collections) + 0.3450 Q13 (Content) + 0.3430 Q14 (Graphic Design) + 0.2570 Q16 (Services) − 0.0930 (Frequency of Using Libweb) − 2.0022

Only one demographic category (Frequency of Using Libweb) remained in Model 1, and it contained a negative marginal contribution (−0.093) to the overall quality score. Here, the rationale could be that the more frequently an evaluator used Libweb sites, the more stringent the evaluator was in his or her assessment of academic Libweb quality. However, the trivial correlation coefficient (0.0883) of Frequency with the overall quality was not significant (see also Table 5). Hence it was appropriate to exclude the Frequency characteristic of the demographics as a predictor of the overall quality. Consequently, only those factors having low correlation with the other independent variables (the 16 quality criteria; see also Table 3) and a high association with the dependent variable (overall quality) needed to be considered in the regression model. This alternative regression analysis (Chao, 2001, Table 33) generated the relationship model (Model 2), with the regression coefficients ordered so that the user can see which criterion contributes the most to improving the overall quality:

Q00 (overall quality) = 0.5560 Q01 (Presentation) + 0.4290 Q13 (Content) + 0.3410 Q14 (Graphic Design) + 0.3078 Q09 (Compatibility) + 0.2927 Q16 (Services) + 0.2827 Q08 (Search Capability) + 0.2749 Q06 (Institutional Information) + 0.2247 Q04 (Information About Links) − 2.3502

This alternative Model 2 eliminated three significant factors (Frequency, Q05, Heading and titles, and Q11, Inclusion of special collections) that were present in Model 1. Yet the remaining eight criteria still accounted for 84.6% of the variation of the overall quality (Q00).
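As a rough illustration of how a stepwise equation such as Model 1 or Model 2 is obtained, the sketch below implements a simplified forward-entry selection on partial-F p-values. It is an assumption-laden Python analogue of the SPSS stepwise procedure (entry at p ≤ .05; the removal step at p ≥ .10 is omitted here), and the function name and inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def forward_select(X, y, names, p_enter=0.05):
    """Forward stepwise entry on partial-F p-values (removal step omitted).

    X: (n, k) matrix of criterion scores; y: overall-quality scores;
    names: labels for the k columns.
    """
    n, k = X.shape
    selected = []

    def fit(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ beta) ** 2))
        return beta, rss, A.shape[1]

    _, rss_current, _ = fit(selected)
    while True:
        best = None
        for j in (c for c in range(k) if c not in selected):
            _, rss_j, p_j = fit(selected + [j])
            F = (rss_current - rss_j) / (rss_j / (n - p_j))   # partial F for adding j
            if stats.f.sf(F, 1, n - p_j) < p_enter and (best is None or rss_j < best[1]):
                best = (j, rss_j)
        if best is None:
            break
        selected.append(best[0])
        rss_current = best[1]

    beta, rss, _ = fit(selected)
    r_squared = 1.0 - rss / float(np.sum((y - y.mean()) ** 2))
    return [names[c] for c in selected], beta, r_squared

# Illustrative call on synthetic data (column names are hypothetical):
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 16))
y = 0.55 * X[:, 0] + 0.43 * X[:, 12] + rng.normal(size=300)
print(forward_select(X, y, [f"Q{i + 1:02d}" for i in range(16)]))
```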
5. Discussion

This study developed and tested operative criteria that experts can apply to evaluate the quality of academic Libweb sites objectively. By consulting authoritative criteria used for traditional print resources and Internet/Web resources, a set of essential criteria was generated. After an initial pretest, the 70 aggregated guidelines were modified into 68 essential criteria that were then integrated into the first survey distributed to the academic Libweb librarians. Using factor analysis, 18 factors accounting for 68.3% of the variation of the 68 criteria were extracted from the returned sample data. Simultaneously, three academic Libweb sites of low, adequate, and high quality were identified and verified by way of a majority agreement in a subsequent validation survey distributed to a panel of six academic Libweb experts. Based on the derived 18 factors, most of the long-list 68 criteria were merged into a concise 16-criterion instrument, which was then administered in the final survey to the three subgroups of academic Libweb librarians.

The ANOVA confirmed that this final instrument could be used to reliably differentiate among the low, adequate, and high quality levels of academic Libweb sites based on the overall quality criterion as well as the 16 other quality criteria. Furthermore, a stepwise regression model was performed to identify the 11 key components measuring the quality of an academic Libweb site. These 11 components accounted for 85.3% of the variation of the overall quality (Q00). They included 10 quality criteria (Presentation, Q01; Information about links, Q04; Heading and titles, Q05; Institutional information, Q06; Search capability, Q08; Compatibility, Q09; Inclusion of special collections, Q11; Content, Q13; Graphic design, Q14; and Services, Q16) and one demographic factor (Frequency, the average number of times per weekday spent browsing Libweb sites). Alternatively, taking into account the most salient and nonredundant quality criteria, eight criteria accounted for 84.6% of the variation of the overall quality (Q00). These components were Presentation (Q01), Content (Q13), Graphic design (Q14), Compatibility (Q09), Services (Q16), Search capability (Q08), Institutional information (Q06), and Information about links (Q04), in ranked order of marginal contribution to the overall quality.

Supplemental to the traditional service of physical library sites, academic Libweb sites act as virtual sites that cater to patrons without the limitations of time, location, and space. With the pace of information technology development increasing rapidly, academic librarianship professionals are striving to meet the challenge. So who is currently responsible for academic Libweb sites, and what are their profiles? An aggregated set of Group 1 and Group 2 respondents' demographics (see Table 6) shows the following characteristics among the respondents:
- Among academic Libweb experts, women are in the majority (57%). (Traditionally, women librarians have dominated the workforce and most fields of library professionals. In 1998, there were 208,000 librarians in the United States, and 84% of them were women [Statistical Abstract of the United States, 1999]. Although women are in the majority among academic Libweb experts, they are less so than their numbers in the profession would indicate.)
- The respondents' affiliations reflected that the number of those associated with a university (68%) was higher than the combined number of those associated with four-year (23%) and two-year colleges (9%) by 2 to 1.
Table 6
The profiles of academic Libweb experts

Category | Group 1 n (%) | Group 2 n (%) | Total n (%)(a)

Gender
  Female | 174 (55.06) | 189 (58.15) | 363 (56.63)
  Male | 142 (44.94) | 136 (41.85) | 278 (43.37)
  Subtotal | 316 | 325 | 641

Affiliation
  University | 219 (69.75) | 212 (65.63) | 431 (67.66)
  4-year college | 69 (21.97) | 78 (24.14) | 147 (23.08)
  2-year college | 26 (8.28) | 33 (10.22) | 59 (9.26)
  Subtotal | 314 | 323 | 637

Institution
  Public | 199 (63.58) | 183 (57.00) | 382 (60.25)
  Private | 114 (36.42) | 138 (43.00) | 252 (39.75)
  Subtotal | 313 | 321 | 634

Years served in any Libweb
  1–2 years | 89 (29.28) | 75 (23.58) | 164 (26.37)
  3–4 years | 135 (44.41) | 124 (39.00) | 259 (41.64)
  5+ years | 80 (26.31) | 119 (37.42) | 199 (31.99)
  Subtotal | 304 | 318 | 622

Frequency in using any Libweb (average times/weekday)
  1–2 times | 50 (16.29) | 28 (8.75) | 78 (12.44)
  3–4 times | 47 (15.31) | 44 (13.75) | 91 (14.51)
  5+ times | 210 (68.40) | 248 (77.50) | 458 (73.05)
  Subtotal | 307 | 320 | 627

Title
  Webmaster | 49 (15.51) | 62 (19.02) | 111 (17.29)
  Web editor | - | 6 (1.84) | -
  Web designer | 23 (7.28) | 20 (6.13) | 49(b) (7.63)
  Librarian | 185 (58.54) | 172 (52.76) | 357 (55.61)
  Other (specify) | 59 (18.67) | 66 (20.25) | 125 (19.47)
  Subtotal | 316 | 326 | 642

(a) Excluding the cases with missing data.
(b) Including the number of Group 2 Web editors.
- A total of 60% of the respondents worked in public institutions and 40% worked in private institutions.
- The average experience of academic Libweb experts was 3.11 years. This implied that academic Libweb jobs are still being created.
Table 7
16 fundamental criteria to be assessed to determine the quality of academic Libweb sites (based on the simple Model 3)

Step 1: Rate the academic Libweb site's performance on each of the 16 quality criteria below, from failing (E) to excellent (A). The criteria are listed in ranked order by correlation coefficients.

C01, Content, such as up-to-date library catalogs, library services, research tools, resources and collections, online request forms, faculty partnerships, and "about the library" pages
C02, Presentation, such as suitable background, color, font, icon, image, size, layout, and text; organized and consistent scheme, reliable links, and concise home page
C03, Institutional information, such as comprehensive, current, and accurate information relevant to the institution's learners and faculty
C04, Services, such as accessible remote library services (e.g., library instruction, reference assistance, and document delivery)
C05, Heading and titles, such as clear, coherent, and concise headings; clearly titled screen
C06, Reliability, such as secured private interaction; credible and appropriate sources and documents
C07, Graphic design, such as limited use of blinking, italics, other attention-getting devices, and extraneous navigational aids (e.g., back and forward buttons, history lists)
C08, Search capability, such as applicable index/table of contents and various search engines
C09, Navigability, such as clear site map, hypermedia index, and short number of clicks to online catalog, reference tools, and databases
C10, Authority, such as knowledgeable site Webmaster/maintainer
C11, Inclusion of special collections and "what's new," such as original materials, institutional archives, and news/events
C12, Facilitation and help, such as available "Help" information; stable URL or hot-link to the new URL
C13, Compatibility, such as consistent text and graphics in different browsers; options available for various features, such as text-only view
C14, Information about links, such as pertinent instructions or warning statements to file/document types
C15, Integration, such as convenient e-mail address to a responsible party, and links to the library's/parent institution's home pages; inclusion of library's/institution's names and logos, and online forms for request or feedback
C16, Speed, such as quick connection and delivery, minimal use of large graphics and bright color, and easy access to links

Step 2: Replace the criteria (C01–C16) with their corresponding graded values (A = 5, B = 4, C = 3, D = 2, E = 1):
Q (overall quality) = (11/80) * {C01 (Content) + C02 (Presentation) + C03 (Institutional information) + C04 (Services) + C05 (Heading and titles) + C06 (Reliability) + C07 (Graphic design) + C08 (Search capability) + C09 (Navigability) + C10 (Authority) + C11 (Inclusion of special collections) + C12 (Facilitation and help) + C13 (Compatibility) + C14 (Information about links) + C15 (Integration) + C16 (Speed)}

Step 3: Tally the overall quality score (Q).

Step 4: Compare and judge the quality level of the Q score: refer to Table 8 (based on the sample mean score), Table 9 (based on the sample quartile scores), or Table 10 (based on the 11-point scale [F to A+] and the trichotomy of sample cumulative percentage).
- The mean frequency of browsing Libweb sites was 4.21 times per weekday. This also suggested that the higher the browsing frequency, the more rigorous the respondents were in their assessment of a Libweb site's quality.
Table 8
A measure defining the overall quality of an academic Libweb site by mean (individual level)

Quality level | Mean | If the Q(00) score resides between(a)
Low quality | 5.6373 | 1.00 ≤ Q(00) ≤ 5.64
Adequate quality | 7.3274 | 5.65 ≤ Q(00) ≤ 8.95
High quality | 8.9633 | 8.96 ≤ Q(00) ≤ 11.00

(a) Using the means of low and high quality for demarcation.
- Though Web-related specialists (Webmaster, Web designer, and system librarian) are emerging in academic libraries, the general term "librarian" is still used by more than half (56%) of these new specialists.
6. The instrument

In creating the final instrument, the question is whether all 16 criteria are useful or whether some are irrelevant. The regression analyses are generally useful for research purposes in identifying the most significant factors, but they are not appropriate methods for deciding which items to eliminate or for structuring the final instrument. The criteria eliminated by the regression analyses may, in fact, be highly correlated with the overall quality, and an academic library may need to attend to those items just as much as to the items retained through the regression analyses. For example, regression Model 2 eliminated eight quality criteria (Q02, Q03, Q05, Q07, Q10, Q11, Q12, and Q15). Table 3 shows that those eight quality criteria actually had moderate to high correlation coefficients with the overall quality. Therefore, to obtain a full set of information in evaluating academic Libweb quality, none of the 16 quality criteria should be excluded from the final instrument. Furthermore, this final instrument, which lists the 16 quality criteria in ranked order of correlation coefficients (using the R² column in Table 4), should enable the evaluator to pay more attention to the most highly correlated criteria, thus aiding in the development and maintenance of a high-quality academic Libweb site.

For the validated instrument to be operative, a form is desirable to assist prospective users in performing an evaluation. For research and diagnostic purposes, two forms fit this requirement.

6.1. Form 1: Full-length version

Form 1 allows prospective users to consider all 16 fundamental quality criteria, ranked by their degree of correlation to the overall quality from highest to moderate. This full version (see Table 7) contains the 16 quality criteria in ranked order by their correlation coefficients. To obtain the evaluation score, let Model 3 be

Q00 = (11/80) * {Q13 + Q01 + Q06 + Q16 + Q05 + Q07 + Q14 + Q08 + Q10 + Q15 + Q11 + Q12 + Q09 + Q04 + Q02 + Q03}

(Because Q00 and the 16 criteria were measured on 11-point and 5-point scales, respectively, the maximal tally of the 16 criterion scores (5 × 16 = 80) requires multiplication by a weight of 11/80 to fit the corresponding highest score (11) of the overall quality. The transformation of the minimal scores likewise leads to a score of 2.2 (16 × 11/80), enough to cover the lower 1 and 2 scores of the overall quality.)
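Model 3 and the Table 8 cutoffs can be expressed as a small scoring routine. The following Python sketch illustrates the arithmetic only; the function name is an assumption, and the cutoffs are the Table 8 boundaries.

```python
def form1_overall_quality(grades):
    """Form 1 / Model 3: convert 16 letter grades (A-E) into an overall score.

    The 11/80 weight rescales the tally of 5-point criterion grades onto the
    11-point overall-quality scale; the cutoffs follow Table 8 (group means).
    """
    value = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
    q = (11 / 80) * sum(value[g] for g in grades)
    if q <= 5.64:
        level = "low quality"
    elif q <= 8.95:
        level = "adequate quality"
    else:
        level = "high quality"
    return q, level

# A site graded B on all 16 criteria scores (11/80) * 64 = 8.8 -> adequate quality.
print(form1_overall_quality(["B"] * 16))
```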
Table 9
A measure defining the overall quality of an academic Libweb site by quartiles (aggregated data)

Quality level | Quartiles | If the Q(00) score resides between(a)
Low quality | First quartile → 6 | 1.00 ≤ Q(00) ≤ 6.00
Adequate quality | Second quartile → 8 | 6.01 ≤ Q(00) ≤ 8.99
High quality | Third quartile → 9 | 9.00 ≤ Q(00) ≤ 11.00

(a) Using the first and third quartile quality for demarcation.
In essence, the quality level of an academic Libweb site being measured can be determined by two approaches. The resulting information enables prospective users to compare the quality of their Libweb sites with that of the Libweb sites in this sample, as well as to compare the quality of the attributes of their Libweb sites with the attributes of the Libweb sites in this sample. Also, to improve the quality of their Libweb sites, users can readily identify which of their particular Libweb site characteristics may need attention. Approach 1-1 applies the mean and quartiles (Chao, 2001, Table 38) of the Group 2 sampled data (see Tables 8 and 9). Approach 1-2 applies the cumulated percentage of the Group 2 sampled data (see Table 10).

6.2. Form 2: Short version

This version (see Table 11) provides prospective users with eight essential quality criteria explaining 84.6% of the overall quality score's variation. Moreover, the quality level of an academic Libweb site being measured can be determined by two approaches. This enables prospective users to apply the regression model either with sample data they collect about the quality of their own Libweb sites or with the result of a single evaluation of their own Libweb sites. Approach 2-1 uses the confidence interval of the mean score of the sample data with regression Model 2 (Table 12). Approach 2-2 uses the confidence interval of the individual score with regression Model 2 (Table 13).
Table 10
A measure defining the overall quality of an academic Libweb site by cumulated percentage (aggregated data)

Quality level | Cumulated % | If the Q(00) score resides between(a)
Low quality | Covers the first 1/3, inferior to B- (27.8%) | (F) 1.00 ≤ Q(00) ≤ 6.00 (B-)
Adequate quality | Covers the next 1/3, between B- and B+ | (B-) 6.01 ≤ Q(00) ≤ 8.00 (B+)
High quality | Covers the last 1/3, superior to B+ (66.4%) | (B+) 8.01 ≤ Q(00) ≤ 11.00 (A+)

(a) Using the B- (6) and B+ (8) quality points for demarcation.
Table 11
Eight significant criteria to be assessed to determine the quality of academic Libweb sites (based on regression Model 2)

Step 1: Rate the academic Libweb site's performance on each of the eight quality criteria below, from failing (E) to excellent (A). The criteria are listed in ranked order by regression coefficients.

C01, Presentation, such as suitable background, color, font, icon, image, size, layout, and text; organized and consistent scheme, reliable links, and concise home page
C02, Content, such as up-to-date library catalogs, library services, research tools, resources and collections, online request forms, faculty partnerships, and "about the library" pages
C03, Graphic design, such as limited use of blinking, italics, other attention-getting devices, and extraneous navigational aids (e.g., back and forward buttons, history lists)
C04, Compatibility, such as consistent text and graphics in different browsers; options available for various features, such as text-only view
C05, Services, such as accessible remote library services (e.g., library instruction, reference assistance, and document delivery)
C06, Search capability, such as applicable index/table of contents and various search engines
C07, Institutional information, such as comprehensive, current, and accurate information relevant to the institution's learners and faculty
C08, Information about links, such as pertinent instructions or warning statements to file/document types

Step 2: Replace the criteria (C01–C08) with their corresponding graded values (A = 5, B = 4, C = 3, D = 2, E = 1):
Q (overall quality) = 0.5560 C01 (Presentation) + 0.4290 C02 (Content) + 0.3410 C03 (Graphic design) + 0.3078 C04 (Compatibility) + 0.2927 C05 (Services) + 0.2827 C06 (Search capability) + 0.2749 C07 (Institutional information) + 0.2247 C08 (Information about links) − 2.3502

Step 3: Tally the overall quality score (Q).

Step 4: Compare and judge the quality level of the Q score: refer to Table 12 (based on the sample mean's confidence interval from regression Model 2) or Table 13 (based on the individual case's confidence interval from regression Model 2).
Table 12
95% confidence interval of the mean score

Quality level | Lower bound of the mean | Upper bound of the mean
Low | 5.4633 | 6.1154
Moderate | 6.9622 | 7.5708
High | 8.6195 | 9.1825
Table 13
95% confidence interval of the individual score

Quality level | Lower bound of the individual score | Upper bound of the individual score
Low (n = 103, missing 13) | 4.0099 | 7.5687
Moderate (n = 114, missing 6) | 5.4905 | 9.0425
High (n = 109, missing 7) | 7.1292 | 10.6727
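For Form 2, the Model 2 coefficients and the Table 13 intervals can likewise be wrapped in a short scoring sketch. The Python code below is illustrative only (names are assumptions); note that the Table 13 intervals overlap, so a score can fall inside more than one band.

```python
# Model 2 regression coefficients as reported in the text (eight retained criteria).
MODEL2 = {
    "Presentation": 0.5560,
    "Content": 0.4290,
    "Graphic design": 0.3410,
    "Compatibility": 0.3078,
    "Services": 0.2927,
    "Search capability": 0.2827,
    "Institutional information": 0.2749,
    "Information about links": 0.2247,
}
INTERCEPT = -2.3502
GRADE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def form2_overall_quality(grades):
    """Form 2: apply Model 2 to eight letter grades and classify the result
    against the individual-score confidence intervals of Table 13."""
    q = INTERCEPT + sum(MODEL2[c] * GRADE[g] for c, g in grades.items())
    bands = [("low", 4.0099, 7.5687), ("moderate", 5.4905, 9.0425), ("high", 7.1292, 10.6727)]
    levels = [name for name, lower, upper in bands if lower <= q <= upper]
    return q, levels or ["outside the tabulated intervals"]

# Straight B grades give q = -2.3502 + 4 * 2.7088 ≈ 8.49, which falls inside
# both the moderate and high intervals (the Table 13 intervals overlap).
print(form2_overall_quality({c: "B" for c in MODEL2}))
```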
The Appendix offers a few suggestions on how the instrument can be applied properly.
7. Conclusion

Currently, the World Wide Web is one of the most appealing media through which academic libraries serve their constituencies. Despite the continual increase in the number of academic libraries on the Web (Libweb), there is a lack of authoritative guidelines or criteria to help library professionals define how the quality of an academic Libweb site can be properly measured and improved to serve Web-accessing patrons better. Browsing numerous academic Libweb sites revealed that defective design and maintenance are prevalent. Improving the quality of academic Libweb sites is necessary because the defects and errors not only detract from a favorable image of the library but also hinder users from using the Web site efficiently and effectively.

This research systematically explored and verified an operative instrument for assessing the quality of academic Libweb sites. The instrument developed in this study can be utilized in the development, evaluation, and maintenance of quality academic libraries on the Web. By evaluating an academic Libweb site with the essential criteria provided by either the full-length (Table 7) or the short (Table 11) version, academic Libweb professionals can readily determine a site's quality and note which components require special attention. It is interesting that the 16 essential criteria listed in Table 1, with the exception of two (Q06, Institutional information, and Q11, Inclusion of special collections), apply to all Web sites, not only academic library sites. Therefore, both the full-length and short versions of the instrument can be easily adapted for assessing general Web-site quality.
H. Chao / Library & Information Science Research 24 (2002) 169–194
191
1998), was substantiated in this study, ‘‘Speed’’ no longer played a key role in the regression Models 1 and 2. The development of information technology enhanced the latter. In essence, two dimensions—content richness (effectiveness) and structural appeal (efficiency)—should be kept in mind when designing or evaluating a quality Web site. Regarding the rapid change in Web technology and trends, some of the 16 criteria might change as new hardware/software evolves. The automation of information technology will greatly improve the structural appeal. However, content richness will still rely on the experience of the librarian as the demarcation between a physical and a virtual library may become blurred in the future.
Acknowledgments

The author thanks Dr. A. Neil Yerkey, Dr. George D'Elia, and Dr. George S. Bobinski, Department of Library & Information Studies, the State University of New York at Buffalo, for their guidance as advisory members of the dissertation committee.
Appendix: Suggestions for using the evaluation instrument

For survey users: This instrument was developed and tested through surveys directed to the study's academic Libweb experts and analyses based on those experts' perceptions. It is intended to help academic Libweb experts design and maintain a good-quality academic Libweb site. Therefore, it may not be appropriate to apply this instrument to the evaluation of nonacademic Libweb sites or to surveys of people who are not academic Libweb experts (e.g., students, faculty, and staff).

Evaluation checklists for Webmasters: Are you an academic Libweb expert (e.g., a Web-knowledgeable librarian)? If your answer is no, do not use the instruments. If yes, continue. Two instruments can be used to evaluate your site: the full-length version with 16 criteria (Form 1) and the short version with eight criteria (Form 2). The following guidelines indicate which form and which tables to use; a brief sketch of this decision logic appears after the design guidelines below.

If you are evaluating your site exclusively (a single site):
- With Form 1, use one of Tables 8, 9, or 10 to judge your site's overall quality level (low/adequate/high) and plan the necessary improvements by the ranked order of criteria (see Table 7).
- With Form 2, use Table 13 to judge your site's overall quality level (low/adequate/high) and plan the necessary improvements by the ranked criteria (see Model 2).

If you are using sampling survey data:
- With Form 1, use one of Tables 8, 9, or 10 to judge the overall quality level (low/adequate/high) and plan the necessary improvements by the ranked order of criteria (see Table 7).
- With Form 2, use Table 12 to judge the overall quality level (low/adequate/high) and plan the necessary improvements by the ranked criteria (see Model 2).

Design guidelines for new sites: The prospective user of this instrument may either follow the steps of Form 1 (the full-length version) and Model 3's 16 criteria ranked by correlation coefficients (r, an indicator of relevance) or follow the steps of Form 2 (the short version) and Model 2's eight criteria ranked by regression coefficients (b, a marginal contribution) to construct and maintain a good-quality academic Libweb site.
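To summarize the checklist above, the following is a minimal sketch (not part of the article) of the form-and-table selection logic; the function name, parameters, and returned strings are illustrative assumptions.

```python
# Minimal sketch (illustrative only) of the Appendix checklist: which form and
# which table(s) an academic Libweb expert should consult.

def choose_form_and_table(is_academic_libweb_expert: bool,
                          evaluating_single_site: bool,
                          using_full_length_form: bool) -> str:
    """Return a short recommendation derived from the checklist above."""
    if not is_academic_libweb_expert:
        return "Do not use the instruments."
    if using_full_length_form:
        # Form 1 (16 criteria): Tables 8, 9, or 10; ranked criteria in Table 7.
        return "Form 1 with Tables 8, 9, or 10 (ranked criteria: Table 7)."
    if evaluating_single_site:
        # Form 2 (8 criteria), single site: individual-score intervals.
        return "Form 2 with Table 13 (ranked criteria: Model 2)."
    # Form 2 (8 criteria), sampling survey data: mean-score intervals.
    return "Form 2 with Table 12 (ranked criteria: Model 2)."

print(choose_form_and_table(True, evaluating_single_site=True, using_full_length_form=False))
# prints: Form 2 with Table 13 (ranked criteria: Model 2).
```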
References

Abels, E. G., White, M. D., & Hahn, K. (1997). Identifying user-based criteria for Web pages. Internet Research: Electronic Networking Applications and Policy, 7, 252–262.
Alexander, J., & Tate, M. (1996a). Checklist for a personal home page. Retrieved January 12, 1998, from Widener University, Wolfgram Memorial Library Web site: http://www.science.widener.edu/~withers/evalout.htm
Alexander, J., & Tate, M. (1996b). Teaching critical evaluation skills for World Wide Web resources. Computers in Libraries, 16(10), 49–55. Retrieved January 12, 1998, from http://www.science.widener.edu/~withers/webeval.htm
Argus Associates. (1998). Are you a Webmaster? Retrieved September 10, 1998, from http://www.clearinghouse.net/ratings.html
Bell, H., & Tang, N. K. H. (1998). The effectiveness of commercial Internet Web sites: A user's perspective. Internet Research: Electronic Networking Applications and Policy, 8(3). Retrieved January 15, 1999, from http://www.emerald-library.com/brev/17208cb1.htm
Berkeley Digital Library SunSITE. (2000). Guidelines for Web document style and design. Retrieved June 2, 2000, from http://sunsite.Berkeley.edu/libweb
Botafogo, R. A., Rivlin, E., & Shneiderman, B. (1992). Structural analysis of hypertexts: Identifying hierarchies and useful metrics. ACM Transactions on Information Systems, 10, 142–180.
Brandt, S. D. (1996). Evaluating information on the Internet. West Lafayette, IN: Purdue University Libraries. Retrieved January 12, 1998, from http://thorplus.lib.purdue.edu/~techman/evaluate.htm
Caywood, C. (1996, May/June). Library selection criteria for WWW resources [revised 6/98]. Public Libraries, 35, 169. Retrieved July 10, 1998, from http://www6.pilot.infi.net/~carolyn/criteria.html
Chao, H. (2001, February). The development and testing criteria for assessing the quality of academic libraries on the Web. Unpublished PhD dissertation, SUNY at Buffalo.
Ciolek, T. M. (1996). The six quests for the electronic grail: Current approaches to information quality in WWW resources. Retrieved June 1, 1998, from http://www.ciolek.com/papers/quest/questmain.html
Clyde, L. A. (1996). The library as information provider: The home page. The Electronic Library, 14, 549–558.
Cooper, E. A. (1997). Library guides on the Web: Traditional tenets and internal issues. Computers in Libraries, 17(9), 52–55.
Csir, F. J. (1998). Evaluation and criteria of the World Wide Web: Reference web sites. Unpublished master's thesis, Kent State University.
Day, A. (1997). A model for monitoring Web site effectiveness. Internet Research: Electronic Networking Applications and Policy, 7, 109–115.
Dickstein, R., Greenfield, L., & Rosen, J. (1997). Using the World Wide Web at the reference desk. Computers in Libraries, 17(8), 61–65.
Eschenfelder, K. R., Beachboard, J. C., McClure, C. R., & Wyman, S. K. (1997). Assessing U.S. federal government Websites. Government Information Quarterly, 14, 173–189.
Garlock, K. L., & Piontek, S. (1996). Building the service-based library Web site: A step-by-step guide to design and options. Chicago, IL: American Library Association.
Grassian, E. (1997). Thinking critically about World Wide Web resources. Los Angeles, CA: University of California at Los Angeles, College Library. Retrieved April 12, 1998, from http://www.library.ucla.edu/libraries/college/instruct/web/critical.htm
Hair, J. F., Jr., Anderson, R. E., Tatham, R. L., & Black, W. C. (1995). Multivariate data analysis with readings (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Harris, R. (1997). Evaluating Internet research sources online. Retrieved May 5, 1998, from http://www.sccu.edu/faculty/R_Harris/evalu8it.htm
Hinchliffe, L. J. (1997). Resource selection and information evaluation. Retrieved February 1, 1998, from http://alexia.lis.uiuc.edu/~janicke/evaluate.html
Instone, K. (1996). HCI and the Web: A CHI 96 workshop. SIGCHI Bulletin, 28(4), 42–45.
Internet Business Network. (1995). Characteristics of a great Website. Retrieved March 12, 1998, from http://www.interbiznet.com/greatweb1.html
Katz, W. A. (1992). Introduction to reference work. New York: McGraw-Hill.
King, D. L. (1998, December). Library home page design: A comparison of page layout for front-ends to ARL library web sites. College & Research Libraries, 59, 458–465.
Lancaster, F. W. (1997). Evaluation in the context of the digital library. Essen: Publications of the Essen University Library, 21, 156–167.
Law, G. (1996). Get on the Net: Homepage advice. Management-Auckland, 43(10), 46–52.
Lynch, P. J., & Horton, S. (1998). Yale C/AIM Web style guide. Retrieved April 10, 1998, from http://info.med.yale.edu/caim/manual/contnets.html
Mahajan, R., & Shneiderman, B. (1995, December). A family of user interface consistency checking tools: Design analysis of SHERLOCK. Proceedings of the 20th Annual Software Engineering Workshop (NASA), Greenbelt, MD, pp. 169–188.
McClure, C. R., & Wyman, S. K. (1997). Quality criteria for evaluating information resources and services available from federal Websites based on user feedback. Retrieved November 20, 1998, from http://istweb.syr.edu/~mcclure/abstract.html
McMurdo, G. (1995). Electric writing: How the Internet was indexed. Journal of Information Science, 21, 479–489.
McMurdo, G. (1998). Electric writing: Evaluating Web information and design. Journal of Information Science, 24, 192–204.
Muller, M. J. (1996). Defining and designing the Internet: Participation by Internet stakeholder constituencies. Social Science Computer Review, 14(1), 30–33.
Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press.
Nielsen, J. (1995). Multimedia and hypertext: The Internet and beyond. Cambridge, MA: Academic Press.
Nielsen, J., & Sano, D. (1994). SunWeb: User interface design for Sun Microsystem's internal Web. Paper presented at the Second World Wide Web Conference, Mountain View, CA.
Oliver, K. M., Wilkinson, G. L., & Bennett, L. T. (1997). Evaluating the quality of Internet information sources. Paper presented at the Annual Convention of the Association for the Advancement of Computing in Education, ED-MEDIA/ED-TELECOM 97, Calgary, AB, Canada. Retrieved April 10, 1998, from http://itech1.coe.uga.edu/faculty/qwilkinson/webeval.html
Pratt, G., Flannery, P., & Perkins, C. I. D. (1996). Guidelines for Internet resources selection. College & Research Libraries News, 57, 134–135.
Rettig, J. (1996). Beyond "cool": Analog models for reviewing digital resources. Online, 20, 52–54, 56, 58–62, 64.
Saracevic, T. (2000). Digital library evaluation: Toward an evolution of concept. Library Trends, 49, 50–51.
Shneiderman, B. (1996). Designing information-abundant Websites. Retrieved February 4, 1998, from ftp://ftp.cs.umd.edu/pub/hcil/reports-abstracts-bibliography/3634.txt
Shum, S. B. (1996). The missing link: Hypermedia usability research and the Web. SIGCHI Bulletin, 28(4), 68–75.
Skov, A. (1998). Separating the wheat from the chaff: Internet quality. Database, 21(4), 38–40. Retrieved October 11, 1998, from http://www.onlineinc.com/database
Smith, A. G. (1997). Testing the surf: Criteria for evaluating Internet information resources. The Public-Access Computer Systems Review, 8(3). Retrieved February 2, 1998, from http://info.lib.uh.edu/pr/v8/n4/smit8n3.html
SPSS, Inc. (1998). SPSS for MS Windows: Release 8.0. Chicago, IL: SPSS Inc.
Stein, L. D. (1997). How to set up and maintain a Web site. Reading, MA: Addison-Wesley.
Stoker, D., & Cooke, A. (1995). Evaluation of networked information sources. In A. H. Helal, & J. W. Weiss (Eds.), The information superhighway: The role of librarians, information scientists, and intermediaries (pp. 287–312). Essen, Germany: Essen University Library.
Stover, M., & Zink, S. D. (1996). World Wide Web home page design: Patterns and anomalies of higher education library home pages. Reference Services Review, 24(3), 7–20.
Tillman, H. N. (1997). Evaluating quality on the Net. Updated for presentation at Internet Librarian, Monterey, CA, November 17, 1997. Retrieved May 2, 1998, from http://www.tiac.net/users/hope/findqual.html
Wilkinson, G. L., Oliver, K. M., & Bennett, L. T. (1997). Evaluating the quality of Internet information sources. Educational Technology, 37(3). Retrieved April 2, 1998, from http://itech1.coe.uga.edu/Faculty/qwilkinson/webeval.html
Wilson, T. (1996, November). Tracking Web flaws. Communications Week, p. 11.
Wyman, S. K., McClure, C. R., Beachboard, J. B., & Eschenfelder, K. R. (1997). Developing system-based and user-based criteria for assessing federal Websites. Journal of the American Society for Information Science, 34, 78–88.