Using the World Wide Web for Research: Are Faculty Satisfied?

by Susan Davis Herring

This survey explored faculty members’ satisfaction with the Web as a research source. Results indicate that, although faculty members are generally satisfied with the Web, they question the accuracy and reliability of much Web-based information and the sufficiency of Web resources for research. Attitudes also vary by academic discipline.

The World Wide Web, the most popular and most easily accessed portion of the Internet, has become a pervasive resource for faculty and students alike on college and university campuses throughout the United States since its development in 1991. Academic librarians know that many faculty members use the Web both for their own research and for teaching. To serve their clientele most effectively in both reference and collection development activities, academic librarians must understand faculty attitudes toward the Web. However, despite a growing body of anecdotal literature on uses of the Internet in general, and on educational use of the World Wide Web specifically, scholarly research on faculty attitudes toward the Web as a research tool appears to be nonexistent. The research described here attempts to fill the current knowledge gap by examining faculty members’ use of and satisfaction with the Web as a research source. The exploratory study investigated faculty attitudes toward use of the Web as a research tool both for the faculty members’ own research and for course-related research conducted by their undergraduate students. This article focuses on the first part of the study: faculty attitudes toward the Web as a resource for their own research. Two research questions are considered:

● Have college and university faculty members accepted the Web as a suitable resource for their own research?

● Does academic discipline affect faculty acceptance of the Web as a resource?

Susan Davis Herring is Associate Professor of Bibliography and Engineering Reference Librarian, M. Louis Salmon Library, University of Alabama in Huntsville, Huntsville, Alabama 35899 <[email protected]>.

REVIEW OF THE LITERATURE

Despite its short history, a substantial body of literature already exists concerning acceptance and use of the Internet and the Web in academia. However, no studies specifically focusing on faculty satisfaction with the Web as a research tool have been identified. In an overview of the literature written in 1996, Celina Pascoe, Andrelyn C. Applebee, and Peter Clayton noted that “very little empirical research literature exists on the specific uses that academics are making of the Internet,”1 and Susan S. Lazinger, Judit Bar-Ilan, and Bluma C. Peritz wrote, “The number of published studies specifically on faculty use of the Internet to date is surprisingly small.”2 A few surveys of faculty have indicated use patterns and needs, and several studies have examined issues concerning perceptions of the quality and value of Web information.

Blaise Cronin and Carol A. Hert view the Web as a “foraging tool” highly suitable for the scholarly research process,3 but other authors have found that the Internet and Web have not been as widely accepted by academia as might be expected. Lazinger, Bar-Ilan, and Peritz, studying faculty at the Hebrew University of Jerusalem, found that, while 80% of the faculty were using the Internet, use rates varied by discipline, with the highest in science, at 91%, and the lowest in law, social work, and the library, at 60%.4 Use was inversely proportional to rank, with the newest faculty in the lowest ranks reporting the highest use. Faculty reported that the greatest benefits of the Internet were enhanced communication with colleagues and access to databases and research updates.

Numerous authors have focused on the issue of quality and authority in electronic publications. Bertram C. Bruce and Kevin M. Leander, writing on the use of digital libraries and other information technologies in education, identified perceptions of the value of information and questions concerning authority as two major problems in Internet use.5 Blaise Cronin and G. W. McKim, in a discussion of the use of the Web in science and scholarship, particularly mentioned questions of legitimacy and authority of Web documents.6 Richard G. Mathieu and Robert L. Woodard noted that the lack of a strong organizational infrastructure, the extreme decentralization of information storage, and the dynamic nature of the Internet create inherent problems in the use and verification of data and a thriving environment for the retrieval of inaccurate or false data.7 Bob Duffy and Jennifer Yacovissi noted, “Wander the Web with any thoroughness—or, if you prefer, with the appropriate randomness—and you’ll find the evidence to support any position,”8 adding that trying to use the Web for purely informational purposes is “a sure formula for frustration” because of the questionable quality of free sites. Tom Regan noted that poor quality existed even in sites created and maintained by commercial news organizations.9 Brendan Devlin and Mary Burke, looking at the Internet as a reference tool, noted the difficulty of applying traditional evaluation criteria to Internet resources, which frequently lack descriptive information such as author or responsible party and are subject to rapid change.10 The authors found few non-commercial reference sites that met the same quality criteria as commercial products.

The accuracy of information on the Web was tested by Tschera Harkness Connell and Jennifer E. Tipple, who used the AltaVista search engine to search a sample of 60 reference questions.11 Although less than 10% of the pages retrieved provided wrong answers, 64% of the pages gave no answers to the questions; around 27% provided correct or mostly correct answers. The high failure rate raised questions about the Web’s efficiency as a research tool.

METHODOLOGY

Information for the study described here was gathered through a mail survey utilizing the End-User Computing Satisfaction (EUCS) instrument developed by William J. Doll and Gholamreza Torkzadeh.12 The EUCS instrument has been used to test user satisfaction with numerous end-user computer programs with good results.13–16 The survey instrument used in this study was a modified form of the EUCS instrument, with minor changes in the wording of the questions to focus the attention of the respondents on the system being studied. The instrument included 15 questions that measured five factors: content, accuracy, ease of use, format, and timeliness. Each question was answered on a Likert-type scale of 1 (“almost never”) to 5 (“almost always”), and the expected mean for each question was assumed to be 3 (“sometimes”). Although Likert-type scales lack the measurement consistency of true interval scales, Likert scale data are commonly treated as interval data in statistical analysis and were treated as such in this study. Space was allocated at the end of the questionnaire for comments.

Because this study also required some personal data from each respondent, questions identifying academic department, primary subject taught, academic rank, years of teaching experience, level of Web use, and some optional demographic data were printed on the reverse of the EUCS instrument.

After pre-testing, survey packets were sent to a sample of 1,129 full-time faculty members at 30 post-secondary institutions in the state of Alabama. The sample included all academic disciplines and both public and private institutions, and was stratified to proportionally represent community and junior colleges, four-year colleges, and universities. By the end of the data collection period, 388 usable survey forms had been returned, for a response rate of 34.4%. SPSS 8.0 for Windows was used for statistical analysis; comments were analyzed to determine opinion patterns.
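Because the article does not reproduce the modified instrument itself, the sketch below illustrates the scoring scheme under one stated assumption: the items-per-factor counts are inferred from the expected factor means reported in Table 2 (three points per item). The sample response and all names in the code are illustrative, not survey data.

    # A sketch of the modified-EUCS scoring described above. The grouping of
    # items into factors is an ASSUMPTION inferred from the expected factor
    # means in Table 2 (expected mean = 3 points per item); the article does
    # not reproduce the instrument itself.

    # Likert items per factor (1 = "almost never" ... 5 = "almost always"):
    # 4 + 4 + 2 + 3 + 2 = 15 items in total.
    FACTOR_ITEMS = {
        "content": 4,
        "accuracy": 4,
        "format": 2,
        "ease_of_use": 3,
        "timeliness": 2,
    }

    def score_response(items):
        """Sum the Likert ratings within each factor, then total the factor
        scores to produce the overall EUCS score."""
        assert len(items) == sum(FACTOR_ITEMS.values())
        scores, start = {}, 0
        for factor, n in FACTOR_ITEMS.items():
            scores[factor] = sum(items[start:start + n])
            start += n
        scores["overall"] = sum(scores[f] for f in FACTOR_ITEMS)
        return scores

    # Illustrative response only, not survey data. Under the "sometimes"
    # assumption the expected scores are 12, 12, 6, 9, 6, and 45 overall.
    print(score_response([4, 3, 5, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 3]))

    # The reported return: 388 usable forms from 1,129 mailed packets.
    print(f"Response rate: {388 / 1129:.1%}")   # -> 34.4%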

RESULTS

Demographics

The 388 respondents were distributed among the three types of institutions in a pattern proportional to the stratified sample and representative of the population distribution. Fifteen percent of the total (58 individuals) were from junior/community colleges; 13.1% (51 individuals) were from four-year colleges; and the remaining 71.9% (279) were university faculty members. Academic departments identified by the respondents were grouped into nine departmental categories after the majority of responses had been received, as shown in Table 1.

Table 1
Academic Departments of Respondents

Department                        Frequency    Percent
Social Science                        62         16.0
Science                               62         16.0
Language and Literature               55         14.2
Humanities                            45         11.6
Administrative Science                44         11.3
Education                             42         10.8
Health Science                        39         10.1
Engineering/Computer Science          24          6.2
Physical Education                    15          3.9

Respondents were asked to select from five categories of faculty rank: instructor, assistant professor, associate professor, professor, and “other.” Three respondents marked “other”; of the remaining 385, 83 (21.4%) were instructors, 104 (26.8%) were assistant professors, 91 (23.5%) were associate professors, and 107 (27.6%) were professors.

Three hundred fifty-eight respondents indicated the number of years they had been teaching by marking one of five categories. Eleven indicated one to two years of experience, 24 had three to four years, 40 marked five to six years, 61 indicated seven to ten years, and 222 had more than 10 years of teaching experience.

Of the 388 respondents, 354 answered the optional question regarding gender and 356 answered the optional question regarding age. Of those answering, 59.9% (212) were male and 40.1% (142) were female. Ages were normally distributed across the categories; both the median and the mode fell in the 45 to 49 year group.

EUCS Data

Each respondent was asked to complete the EUCS survey by indicating how often he or she would respond positively to each of 15 questions on a scale of 1 (“almost never”) to 5 (“almost always”). The questions were grouped into five factor sections, and the sum of the responses in each section determined individual factor scores; the factor scores were totaled for overall scores. Assuming an expected mean of 3 (“sometimes”) for each question, the means returned were significantly higher than the expected means. The actual factor means and standard deviations are shown in Table 2.

Table 2
EUCS Factor Means

Factor                  N    Expected Mean    Actual Mean    Standard Deviation
Content factor         383        12            13.3446           2.8434
Accuracy factor        382        12            13.4817           2.8951
Format factor          384         6             7.5026           1.3266
Ease of use factor     381         9            10.9475           2.3869
Timeliness factor      382         6             7.3901           1.5462
Overall EUCS score     375        45            52.6933           9.1765

Note: Each expected mean is three times the number of questions in the factor (four each for content and accuracy, two for format, three for ease of use, and two for timeliness), reflecting the assumed per-question mean of 3; the 15 questions give an expected overall score of 45.

Table 3
T-Test Results for EUCS Factors and Total Scores

Factor                  t-value    Significance (2-tailed)*
Content factor           10.961            .000
Accuracy factor          10.428            .000
Format factor            16.629            .000
Ease of use factor       14.795            .000
Timeliness factor        17.719            .000
Overall EUCS score       15.855            .000

*p < .05.

T tests were run for all individual question scores, factor scores, and overall scores, using the expected means as the comparison. All scores showed a significant difference at p = .05. The mean score of one question, “Does the Web provide sufficient information for your research,” was below the expected mean and resulted in a negative t value of −2.337 (p = .02). All other scores were above the expected means. Table 3 summarizes the t test results for the factors and overall scores.

One-factor analysis of variance tests were run to determine whether any significant differences existed in respondents’ attitudes depending upon the independent variables of institution type, academic department, faculty rank, years of teaching experience, Web use level, gender, or age group. Across all three institutional groups, type of institution was significant for the content (p = .01) and accuracy (p = .001) factors and for overall EUCS scores (p = .05). In all cases, means for community/junior college respondents were higher than means for the other two institutional groups.

“In all cases, means for community/junior college respondents were higher than means for the other two institutional groups.”

Academic department made a significant difference for the content (p = .004), accuracy (p = .001), and format (p = .002) factors and for the overall scores (p = .002). Post hoc analysis showed that significant differences (p = .05) lay between science and social science, and between language and literature on the one hand and science, administrative science, education, and health science on the other. Science faculty had higher scores, and language and literature faculty had lower scores, in all areas of difference.

Faculty rank was significant at p = .01 for accuracy. Post hoc analysis showed significant differences (p = .05) between instructors and assistant professors and between instructors and professors. In all cases with significant differences, the instructor means were higher than the means for other ranks.

Faculty use of the Web was measured using a four-level use scale. Respondents selected one of the following categories to best describe their use of the Web:

● I use the Web to find information;

● I find useful information and enhance it (such as by providing links to related information);

● I create information content by producing Web documents, images, and so forth; and

● I do not use the Web.

Three hundred seventy-one respondents indicated use levels. Only 13, or 3.4%, indicated that they did not use the Web. Of the remainder, 161 (43.5%) indicated they used the Web to find information, 120 (30.9%) found useful information and enhanced it, and 77 (19.8%) created information content. One-factor analysis of variance showed a significant difference at p = .05 between groups for all factors and overall EUCS scores. Post hoc analysis showed a positive correlation between use level and factor scores for all factors. No significant differences between groups were found for years of teaching experience, gender, or age group.
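The study’s analyses were run in SPSS 8.0; as an illustration only, the sketch below reruns the two kinds of tests with scipy on simulated stand-in data. Group sizes follow the demographics above, while the means and spreads are loose approximations of Table 2 rather than the survey responses.

    # Illustration only: the study used SPSS 8.0. The arrays are simulated
    # stand-ins, not the survey data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # One-sample t test of the content factor against its expected mean of
    # 12 (four items, expected "sometimes" rating of 3 on each).
    content_scores = rng.normal(loc=13.3, scale=2.8, size=383)
    t, p = stats.ttest_1samp(content_scores, popmean=12.0)
    print(f"content factor: t = {t:.3f}, two-tailed p = {p:.4f}")

    # One-factor ANOVA of overall EUCS scores across the three institution
    # types (community/junior college, four-year college, university).
    cc = rng.normal(55, 9, size=58)      # community/junior colleges
    fy = rng.normal(52, 9, size=51)      # four-year colleges
    uni = rng.normal(52, 9, size=279)    # universities
    f, p = stats.f_oneway(cc, fy, uni)
    print(f"institution type: F = {f:.3f}, p = {p:.4f}")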

Comments

Approximately one quarter of the survey respondents (98) added comments to the survey form. The majority of the comments dealt with respondents’ opinions of the Web as a source of research information, primarily questions of accuracy (31.6% of the comments), the content of Web-based information (30.6%), and the need for careful evaluation of sources (18.4%). Others commented on access to the Web, the skills needed to search effectively, and the transitory nature of electronic information.

“A recurring theme seemed to be a feeling of ambivalence, as respondents praised the expansion of readily available information while questioning the accuracy, reliability, and value of that information.”

Many of the general comments about Web content were critical of the type of information found on the Web. A recurring theme seemed to be a feeling of ambivalence, as respondents praised the expansion of readily available information while questioning the accuracy, reliability, and value of that information.

RESEARCH QUESTIONS

Have college and university faculty members accepted the Web as a suitable resource for their own research?

As shown by the EUCS survey responses, faculty are generally satisfied with the Web as a research tool. Mean scores on all factors were significantly higher than expected. Over 90% of the faculty respondents also indicated that they use the Web, and almost 20% indicated that they create Web content. This pattern of use, in conjunction with the generally high EUCS scores, indicates that faculty members have accepted the Web as a suitable research tool.

However, faculty members rated format, ease of use, and timeliness highest on the EUCS instrument and rated content and accuracy lowest. The only individual item that received a below-average mean score was “Does the World Wide Web provide sufficient information for your research.” These differences indicate less satisfaction with the substance of Web information, as measured by opinions of content and accuracy, than with access and presentation.

The comments ranged from extremely enthusiastic to extremely negative, creating an overall impression of ambivalence. Some representative comments are shown below:

● I have more information than I know what to do with it (University faculty member, Technology).

● Risky for real academic research (University faculty member, Biology).

● I find the Web to be a good starting point for research but also find lots of problems with accuracy (University faculty member, Theater).

● So many sites—so little real organization. Multiple search engines miss many things. If you can find something, or a reliable site to cling to, it’s almost a miracle (University faculty member, Music).

● O.K. for student term papers but not a good source for real research (University faculty member, Psychology).

● I both love and hate the Web. It has some fabulous information on it; finding it often requires more time than I have to devote to it (University faculty member, Geography).

● Excellent source of info (University faculty member, Education).

● A medieval rabbi said “The Talmud is a great sea, yielding [sic] pearls of great price and old boots.” A lot of boots and a few pearls here (Four-Year College faculty member, Religion).

● I double-check everything I get off the Web (University faculty member, Computer Information Systems).

● Web has revolutionized access to macroeconomic data and information about policy (even current news) (Four-Year College faculty member, Economics).

● Surfing . . . is risky at best for accurate information. The Web does help to expand our ideas—get the professional & student thinking in many directions (Community/Junior College faculty member, Sociology).

“. . . in general, faculty members from the science disciplines were more positive about and felt more satisfaction with the accuracy and format of the Web than were faculty in the social science or language and literature areas.”

Does academic discipline affect faculty acceptance of the Web as a resource?

ANOVA data showed that academic discipline made a statistically significant difference at p = .05 for the overall EUCS scores and for the content, accuracy, and format factors. These results indicate that, in general, faculty members from the science disciplines were more positive about and felt more satisfaction with the accuracy and format of the Web than were faculty in the social science or language and literature areas. Faculty members in the language and literature fields, on the other hand, tended to be the least satisfied with the Web overall and, specifically, were less satisfied with its content and accuracy.

Other Findings

Analysis of the survey responses in terms of other independent variables revealed some interesting findings beyond the original research questions. Statistically significant differences (p = .01) in EUCS responses existed between respondents from the three types of institutions, with faculty members from community/junior colleges responding more favorably on the content and accuracy factors and on overall satisfaction than did those from four-year colleges or universities. This result was not anticipated, especially since the 1998 Campus Computing Survey indicated that such institutions are less likely to have Web access.17 However, it is possible that both the different levels of research required by two-year and four-year institutions and the smaller library collections common to community/junior colleges in the state may make Web-based resources more attractive and satisfactory in the community college environment.

Faculty rank also was a significant source of difference for the accuracy factor (p = .01), with instructors indicating greater satisfaction than did other faculty ranks. This appears to reflect the differences between institutional types mentioned above, since the vast majority of instructors (over 90%) were employed at community/junior colleges.

Use level was statistically significant for all EUCS factors and overall EUCS scores, and a significant Pearson’s correlation coefficient was found between use level and overall EUCS scores (p = .05). Non-users had significantly lower EUCS scores than did Web users in any of the three use categories (p = .001). This indicates that use of the Web at any level increased satisfaction with and acceptance of the Web as a research tool in this sample population. In addition, the highest percentage of non-users was seen among faculty members in the language and literature category, the same academic discipline that measured the lowest satisfaction levels on the EUCS instrument. The highest percentages of respondents identifying themselves as creating Web content occurred in the science and administrative science areas, both of which had high EUCS satisfaction scores. This reinforces the indication that acceptance of and satisfaction with the Web increase with level of use.

Gender and age had no statistical significance for any of the dependent variables, although adoption, acceptance, and use of high-technology tools are conventionally associated with younger males. Indeed, earlier studies of Internet users found such a pattern of use.18,19 However, analysis of the data from this study shows no significant differences attributable to age or gender within this sample.
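As an illustration of the use-level correlation reported above, the short sketch below computes a Pearson coefficient on simulated stand-in data; coding the four-level use scale as 0 through 3 is an assumption made only for this example.

    # Illustration only: Pearson correlation between Web use level and
    # overall EUCS score on simulated data. Coding the four-level use scale
    # as 0 (non-user) through 3 (creates content) is an assumption.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    use_level = rng.integers(0, 4, size=371)
    # Simulated overall scores that rise with use level, for illustration.
    eucs_overall = 48 + 2.0 * use_level + rng.normal(0, 8, size=371)

    r, p = stats.pearsonr(use_level, eucs_overall)
    print(f"Pearson r = {r:.3f}, p = {p:.4f}")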

DISCUSSION

The World Wide Web has been widely available to the public for less than 10 years, but, as shown in this study, it appears to have become widely accepted in the academic environment as a research resource. Yet, despite this level of acceptance, questions remain regarding the accuracy and reliability of the information content available.

Nancy W. Ashley, studying the adoption of networked information retrieval systems in an academic setting, noted that many subjects felt that much of the information available on the Internet is neither useful to nor needed by academics,20 a view implied by many of the respondents to the current survey in their comments. Ashley also found that networked information sources were often seen as incompatible with existing academic social norms, specifically the peer review process, control of intellectual property, and the academic reward system of tenure and promotion. Compatibility with social norms is one of Everett Rogers’s criteria for the adoption of an innovation21 and is crucial to widespread acceptance of the Web in academic research. Several respondents to this study made comments related to compatibility with academic social norms. As one university associate professor of English wrote, “Not a copyrighted, refereed source of research information most of the time.” A psychology associate professor wrote, “Material available frequently bypasses peer review process and so is next to useless scientifically,” and a marine scientist wrote, “I am concerned that non-peer reviewed information is easily confused with information that has been peer-reviewed.”

Compatibility with social norms in the academic environment is closely related to the question of cognitive authority. As Patrick Wilson has emphasized, cognitive authority in a professional discipline is usually defined by the discipline itself, primarily by “editors of professional magazines or compilers of reference works endorsed by leaders of the profession.”22 This is widely recognized in library collection development, where publications from some editors, publishing houses, or professional societies are assumed to be of high quality, while others are approached with doubt or even disdain.

These factors have had considerable impact on the acceptance of the Web. As was found in this study, although faculty members welcome the vast quantity of information available, many question its value, accuracy, and reliability. For generations, the peer review process—that traditional selection process practiced so skillfully by the editors and compilers cited by Wilson—has been the primary means of authentication within scholarly publishing. The lack of this process in Web publishing has created a dilemma for users of Web information. Without a recognized selection process, carried out by accepted and reputable organizations or publishers, how can the authority, validity, and reliability of information be guaranteed?

“Without a recognized selection process, carried out by accepted and reputable organizations or publishers, how can the authority, validity, and reliability of information be guaranteed?”

Several studies have been conducted to determine what criteria individuals use to judge the quality or authority of electronic information. Johan L. Olaisen, studying bankers, financiers, and insurance companies in Norway, found that the primary quality criteria included credibility, trustworthiness, and competence of the source; reliability of the information; relevance of the information to the question; validity of the results; and meaning over time.23 Linda Schamber, studying users of electronic weather information services, found the top criteria to include presentation quality, currency, reliability, verifiability, accuracy, and clarity.24 A small study by Soo Young Rieh and Nicholas J. Belkin, looking at academic use of the Web, identified the major quality criteria as source credibility and truthfulness, content, format, presentation, currency, accuracy, and speed of loading.25 A cluster of quality criteria is common to all of these studies, including source credibility and authority; the accuracy, reliability, and currency of content; and information format and presentation quality.

Respondents to the current study mentioned many of these key factors in their comments, particularly those relating to source and content characteristics. As one mathematics professor wrote, “The Web is only as good as its sources. Using sites that I trust the info is good.” Several faculty members who did not use the Web commented on the lack of control and authority. An English professor noted, “There is no quality control on the Web. . . . I don’t use it and tell my students not to use it.”

LIMITATIONS OF THE STUDY

Self-administered mail surveys have certain implicit limitations that cannot be completely avoided, but several steps were taken to minimize these difficulties, including pre-testing and the inclusion of a cover letter explaining the study and a self-addressed, stamped return envelope in the survey packet. Reminders were sent to those who had not responded approximately three weeks after the initial mailing, and duplicate mailings were sent to those who reported that they had not received, or had misplaced, the original packet.

It is possible that individuals with a special interest in the topic under study were the most likely to respond to a mail survey, which would imply that responses represent the more extreme ends of the opinion spectrum. To counter this tendency, the cover letter explicitly encouraged non-users to respond. Histograms for all EUCS factors and total scores indicated a normal distribution.

The geographical limitations of the study mean that the results are not inherently generalizable across all academic institutions; expansion of the study into other geographic regions will be necessary to fully validate the findings. Also, no effort was made to stratify the sample to represent the distribution of academic disciplines, so the conclusions relating to disciplines may contain some error. However, the results are consistent with those found in other studies, notably Lazinger, Bar-Ilan, and Peritz26 and Adams and Bonk,27 which indicates that the study may be more generalizable than its limited geographic coverage suggests.

CONCLUSION

Faculty members participating in this study have accepted the Web as a research tool suitable for their own use. At the same time, the results indicate that faculty members question both the accuracy and reliability of much information found on the Web and do not consider the Web sufficient as a sole source for the type or quantity of research information they need. This is important for all those involved in library collection development and financing. It may be especially significant for library administrators and collection development librarians who are dealing with institutional administrators who may attempt to cut library budgets on the grounds that the Web is making “all” information available on everyone’s desktop “for free.” Libraries must continue to develop their print resources in conjunction with providing access to electronic resources, at least for the foreseeable future.

“. . . faculty members question both the accuracy and reliability of much information found on the Web and do not consider the Web to be sufficient as a sole source to deliver the type or quantity of research information they need.”

Although faculty members at all types of institutions studied use the Web as a research resource, faculty members at community/junior colleges are more satisfied with the content and accuracy of Web information than are those at four-year colleges or universities. This may be because of the types of research conducted at two-year institutions, as compared with that done at four-year colleges and universities, or it may reflect a relative paucity of other research resources available at community/junior colleges.

Acceptance of the Web and satisfaction with its content also vary by academic discipline. Faculty in the sciences are the most satisfied, and those in the fields of language and literature are the least satisfied. Language and literature faculty also include the highest percentage of non-Web users. This suggests that faculty in this area see a need for improved and expanded high-quality subject-related Web resources; anyone interested in developing and building digital resources might consider this a potential area of effort.

One major factor in faculty members’ doubts concerning the accuracy and content of Web sites may be the lack of cognitive authority granted to those sites. Numerous respondents commented on their lack of trust in Web-based information. As library professionals select resources for faculty members and researchers in the different academic disciplines, they must recognize the impact that cognitive authority factors have on the acceptance of, and satisfaction with, various types of information resources. This is an expansion of traditional collection development practice, requiring a deeper understanding of quality criteria than simply a knowledge of the leading publishers and societies within a research field. Library professionals must work with subject faculty to become familiar with the quality criteria used within the specialized fields.

Finally, analysis of the significant differences in attitudes between non-users and users of the Web indicates a positive correlation between Web use level and users’ satisfaction with and acceptance of the Web as a research tool. This has distinct implications for administrators and library professionals interested in the expansion of Web use on campus, suggesting that situations should be created to encourage faculty to become familiar and comfortable with using the Web. This might be especially important in the language and literature fields, where use levels are particularly low.

This exploratory study examined the use of and attitudes toward a relatively new resource. As such, it has established a baseline for understanding current faculty opinions regarding their satisfaction with the Web as a resource and their levels of Web use. Further research into faculty opinions of the World Wide Web as a research tool should be carried out both to expand upon the results found in this study and to track changes in opinions toward and use of the Web over time as it becomes a more authoritative and prevalent research tool in the academic world.

NOTES AND REFERENCES

1. Celina Pascoe, Andrelyn C. Applebee, & Peter Clayton, “Tidal Wave or Ripple? The Impact of Internet on the Academic,” Australian Library Review 13 (May 1996): 148.
2. Susan S. Lazinger, Judit Bar-Ilan, & Bluma C. Peritz, “Internet Use by Faculty Members in Various Disciplines: A Comparative Case Study,” Journal of the American Society for Information Science 48 (June 1997): 509.
3. Blaise Cronin & Carol A. Hert, “Scholarly Foraging and Network Discovery Tools,” Journal of Documentation 51 (December 1995): 388–403.
4. Lazinger, Bar-Ilan, & Peritz, “Internet Use by Faculty Members in Various Disciplines,” pp. 508–518.
5. Bertram C. Bruce & Kevin M. Leander, “Searching for Digital Libraries in Education: Why Computers Cannot Tell the Story,” Library Trends 45 (Spring 1997): 746–770.
6. Blaise Cronin & G. W. McKim, “Science and Scholarship on the World Wide Web: A North American Perspective,” Journal of Documentation 52 (June 1996): 163–171.
7. Richard G. Mathieu & Robert L. Woodard, “Data Integrity and the Internet: Implications for Management,” Internet Research: Electronic Networking Applications and Policy 6 (1996): 92–96.
8. Bob Duffy & Jennifer Yacovissi, “Seven Self-Contradicting Reasons Why the Worldwide Web Is Such a Big Deal,” Proceedings of the 17th National Online Meeting (Medford, NJ: Information Today, 1996), p. 82.
9. Tom Regan, “On the Web, Speed Instead of Accuracy,” Nieman Reports 52 (Spring 1998): 81. Online. Expanded Academic Index ASAP. Article A20624184.
10. Brendan Devlin & Mary Burke, “Internet: The Ultimate Reference Tool?” Internet Research: Electronic Networking Applications and Policy 7(2) (1997): 101–108.
11. Tschera Harkness Connell & Jennifer E. Tipple, “Testing the Accuracy of Information on the World Wide Web Using the AltaVista Search Engine,” Reference and User Services Quarterly 18 (Summer 1999): 360–367.
12. William J. Doll & Gholamreza Torkzadeh, “The Measurement of End-User Computing Satisfaction,” MIS Quarterly 12 (June 1988): 259–274.
13. William J. Doll, Weidong Xia, & Gholamreza Torkzadeh, “A Confirmatory Factor Analysis of the End-User Computing Satisfaction Instrument,” MIS Quarterly 18 (December 1994): 453–461.
14. Anthony R. Hendrickson, Kristy Glorfeld, & Timothy Paul Cronan, “On the Repeated Test-Retest Reliability of the End-User Computing Satisfaction Instrument: A Comment,” Decision Sciences 25 (July/August 1994): 655–665.
15. Jin Taek Jung, Measuring User Success in the Digital Library Environment. Doctoral Dissertation (Philadelphia, PA: Drexel University, 1997).
16. Roger McHaney & Timothy Paul Cronan, “Computer Simulation Success: On the End-User Computing Satisfaction Instrument: A Comment,” Decision Sciences 29 (Spring 1998): 525–536.
17. Campus Computing Project, “Colleges Struggle With IT Planning,” 1998 National Survey of Information Technology in Higher Education (Encino, CA: Kenneth C. Green). Retrieved May 13, 1999 from the World Wide Web: http://www.campuscomputing.net/summaries/1998/index.html.
18. Helge Clausen, “Internet Information Literacy: Some Basic Considerations,” Libri 47 (April 1997): 25–34.
19. James Katz & Philip Aspden, “Motivations for and Barriers to Internet Usage: Results of a National Public Opinion Survey,” Internet Research 7(3) (1997): 170–188.
20. Nancy Winniford Ashley, Diffusion of Network Information Retrieval in Academia. Doctoral Dissertation (Tucson, AZ: University of Arizona, 1995).
21. Everett M. Rogers, Diffusion of Innovations, 3rd ed. (New York: Free Press, 1983).
22. Patrick Wilson, Second-Hand Knowledge: An Inquiry Into Cognitive Authority (Westport, CT: Greenwood Press, 1983), p. 133.
23. Johan L. Olaisen, “Information Quality Factors and the Cognitive Authority of Electronic Information,” in Information Quality: Definitions and Dimensions, edited by Irene Wormell (Los Angeles: Taylor Graham, 1990), pp. 97–121.
24. Linda Schamber, “Users’ Criteria for Evaluation in a Multimedia Environment,” Proceedings of the ASIS Annual Meeting (Medford, NJ: Learned Information Inc., 1991), pp. 126–133.
25. Soo Young Rieh & Nicholas J. Belkin, “Understanding Judgment of Information Quality and Cognitive Authority in the WWW,” Proceedings of the ASIS Annual Meeting (Medford, NJ: Learned Information Inc., 1998), pp. 279–289.
26. Lazinger, Bar-Ilan, & Peritz, “Internet Use by Faculty Members in Various Disciplines.”
27. Judith A. Adams & Sharon C. Bonk, “Electronic Information Technologies and Resources: Use by University Faculty and Faculty Preferences for Related Library Services,” College & Research Libraries 56 (March 1995): 119–131.
