Human Factors in Human-Computer System Design

MARY CAROL DAY AND SUSAN J. BOYCE
User Interface Planning and Design
AT&T Bell Laboratories
Holmdel, New Jersey
1. Introduction
2. The Discipline of Human Factors
   2.1 An Extended Definition
   2.2 A Brief History
3. The Human Factors Specialist and the User Interface
   3.1 The User Interface and Usability
   3.2 Principles for Developing Usable Systems
   3.3 Information Relevant to User Interface Design
4. Models of the Software Development Process
5. Human Factors Activities in Human-Computer System Design
   5.1 Roles of the Human Factors Specialist on Software Development Teams
   5.2 Human Factors Activities at Each Phase of Development
   5.3 Idealized, but Meeting a Real Need
6. Human Factors Methodologies for Human-Computer System Design
   6.1 Task Analysis
   6.2 Rapid Prototyping of the User Interface
   6.3 Usability Testing
7. Designing for User Interface Consistency
   7.1 Advantages of Consistency
   7.2 Difficulties in Achieving Consistency
   7.3 Methods for Enhancing Consistency
8. An Example of Human Factors Activities during Product Development
9. Why Is a Human Factors Specialist Needed?
   9.1 The Process without Human Factors Expertise
   9.2 Constraints Overcome with Human Factors Expertise
10. Cost Justification for Human Factors
   10.1 Increasing Percentage of User Interface Code
   10.2 Lower Development and Support Costs for Vendors
   10.3 Greater Usability and Increased Productivity for Users
   10.4 Greater User Acceptance and Marketplace Success
11. Conclusions
Acknowledgments
References
1. Introduction

The dramatic proliferation of computer systems has been paralleled by an equally striking increase in the number and heterogeneity of computer users. Many of today's computer users are not computer experts; they are people who are experts in other fields who want to use the computer as a tool to help them perform their tasks. They want a computer system that is easy to learn and easy to use. However, personal experience, the popular media, and academic articles document the continuing difficulty of using many computer systems and their software applications. One reason for this difficulty is the insufficient attention that is given, during the design and development process, to the needs and the abilities of human users, to the specific tasks they perform, and to their environment. The computer system of interest here is actually a human-computer system. The human uses the computer (its hardware and its software) to accomplish certain tasks. The computer hardware and software must be designed to match (and ideally to extend) the physical, perceptual, and cognitive capabilities of the user, so that the user's task can be accomplished most efficiently and effectively. Therefore, the design of a human-computer system requires understanding the human component of the system as well as the computer components. While it is expected that designers and developers of human-computer systems must be knowledgeable in computer hardware and software, it is rarely expected that designers should be equally knowledgeable about the human component of the system. In software design and development environments, there is an increasing understanding that the design of the human-computer interface is important, since this interface is the mediator of the flow of information and control between the user and the computer. However, it is often the technology of the human-computer interface (e.g., the software platform, the speech recognition system, the multimedia capabilities) that receives attention, rather than the needs and abilities of the humans who will use the computer, or the interaction of the user with the computer from the user's perspective. In addition, the broader interface to the user, which includes documentation and training materials, is often ignored until the last minute. This chapter focuses on the importance and role of human factors specialists in the design of human-computer systems. Human factors is the study of the interaction between people and technology (including products, equipment, work procedures, and systems) and the application of that knowledge to design. The human factors specialist brings to a design and development team a knowledge of humans (their physical, perceptual, cognitive, and social capabilities and limitations), a knowledge of issues that are important
in designing human-computer interaction, and experience in methodologies that are used during design to ensure that the final system can be used for its intended purpose by its intended users. Therefore, the human factors specialist contributes to a multidisciplinary design team a component that is often missing: expertise in the human component of the human-computer system. Although the contributions of human factors to the design of human-computer systems have grown rapidly over the past decade, there is still a lack of familiarity, among both computer specialists and the general public, with the discipline of human factors and its place in the design of human-computer systems. This chapter has several purposes: (a) to provide a brief introduction to the field of human factors, also known as ergonomics, (b) to describe the role and importance of human factors specialists in the design and development of human-computer systems, and (c) to describe several of the key methodologies used by human factors specialists. A brief overview of the discipline of human factors is provided first as a background for those unfamiliar with the field. In the next section the relation between user interface design and usability is discussed, followed by an overview of general principles for designing usable systems and the type of information that is relevant for user interface design. Models of the software development process are described next. They provide a framework for a discussion of the roles of the human factors specialist on design and development teams and the specific human factors activities that should be integrated into the development process. Task analysis, user interface prototyping, and usability testing (key methodologies used by human factors specialists) are then described, followed by a discussion of the importance and the complexities of designing for consistency from the user's perspective. Because the value of having a human factors specialist on the design and development team is often not understood and the associated costs are often regarded as too high, the final sections of the chapter cover the reasons it is beneficial to have a human factors specialist on the team and the benefits (relative to the costs) of investing in human factors activities to ensure excellent user interface design. Many books and chapters (e.g., Helander, 1988; Salvendy, 1987) have summarized the literature on human factors and human-computer interaction; such a literature review is not the purpose of this chapter. Instead, the chapter is written from the perspective of human factors specialists who are practitioners. It highlights issues, approaches, and activities that are important for the design of human-computer systems in design and development organizations in large companies. The approach and methods described here are advocated by human factors specialists in both industry and government.
2. The Discipline of Human Factors The term human factors is used throughout this chapter; however, two other terms, ergonomics and human factors engineering, are often used to refer to the same discipline. In a survey of the use of these three terms (Licht et al., 1989), it was found that there were only slight differences in their definitions, and that the differences were not found consistently. In general, definitions of human factors were somewhat broader than the others, definitions of ergonomics emphasized the study of humans at work, and definitions of human factors engineering emphasized design. Engineering psychology is a fourth term that has a meaning comparable to the other three (Chapanis, 1976). In the United States, human factors is used most often, although ergonomics is becoming much more common. In Europe and other parts of the world, ergonomics is the most frequently used term.
2.1 An Extended Definition

2.1.1 Definition

Human factors is: (a) the study of human capabilities and limitations that are relevant to the design of tools, machines, systems, tasks, jobs, and environments, (b) the application of that knowledge to design, and (c) the use of human factors methodologies during design, with the goal of fostering safe, effective, and satisfying human use (Chapanis, 1991; Christensen et al., 1988; Meister, 1989). Four components of this definition should be noted. First, the starting point of human factors is the human: human capabilities and limitations (including physical, perceptual, cognitive, and social) and human needs. Second, the information about humans that is of interest to human factors is information relevant to design. This is a primary distinction between human factors and the various areas of experimental psychology, because experimental psychology is concerned with human functioning independent of any implications for design. The attention of the human factors specialist is directed to the interaction between the human and the specific tool, system, task, or environment being designed. Third, the goal of human factors is the design of tools, systems, tasks, and environments that are safe, effective, and satisfying. Within this broad goal, one primary objective is to increase the effectiveness and efficiency of human activity by aiding and extending human capabilities. This means that tools should be easy to learn and easy to use, as demonstrated by rapid learning, few errors, and fast error recovery. Productivity should be greater with the tools than without them. A second primary objective is to enhance
positive human values, such as safety, minimum stress and fatigue, satisfaction with use, and even pleasure with use. Accomplishing these objectives requires that human capabilities and needs be understood and given priority throughout the design process. Fourth, design typically requires the use of human factors methodologies to acquire information on the interaction between the human and the design object to ensure that the goals of safe, effective, and satisfying use are met. Although this component of the definition of human factors is often not stated explicitly, it is critical for understanding the place of human factors in the design process.
2.1.2 The Concept of System

The concept of system is fundamental to human factors. A system is a combination of elements that accomplishes certain goals (Meister, 1989, 1991; Sanders and McCormick, 1987).¹ The type of system of interest to human factors is one in which the human is a component of a human-machine system. The purpose or goals of the system are what the system accomplishes (i.e., its functions). For example, in the case of a human-computer system, the system's components are the human, the hardware, and the software. The purpose or goals of the system might be monitoring and directing investments, or monitoring and controlling a communications network; various tasks must be performed to accomplish these goals, such as obtaining information about investments, changing the direction of investments, etc. Figure 1 depicts a simplified but useful way of viewing a human-computer system. The figure integrates diagrams presented by Chapanis (1976) and Norman (1986). In the human-computer system, the human has a goal in mind, as well as specific tasks that must be performed to attain the goal. The human senses and perceives the display, and then interprets and evaluates it relative to task goals. Sensing, perceiving, interpreting, and evaluating depend on both (a) the information presented on the display and the manner in which it is presented and (b) the human's perceptual and cognitive abilities and processes. Perceptual processes include the ability to scan the display, to locate critical information, and to identify and interpret that information. Cognitive processes might include understanding what is displayed, relating
¹ For the purposes of this chapter, we shall not go into detail about the system concept (Bertalanffy, 1968; Meister, 1991) and shall not rigorously pursue the implications of a formal definition for system design. In fact, some could argue (with justification) that we are using the term "system" too loosely. However, the concept is important for highlighting the necessity to consider the human user when designing computer systems.
FIG. 1. A simplified depiction of a human-computer system. Adapted from Chapanis (1976) and Norman (1986).
the displayed information to the intended goal and to past experience, engaging in problem-solving, and then making decisions about what actions to take next. This cognitive behavior may be so well practiced that the decisions are made without conscious thought, or the human may engage in conscious problem solving. After a decision has been reached about the desired action, the human forms an intention, mentally specifies the action, and then executes the action (e.g., moves the mouse to a menu item, scrolls for more information, etc.). This initiates a response by the computer, which results in a change in the display. This interaction between the human and the computer takes place in an environment that is both physical and social. The characteristics of the environment affect human efficiency and performance, so the environment is also of concern to the human factors specialist. Designing a human-computer system that effectively and efficiently fulfills its purpose requires attention not only to the computer hardware and software, but also to the specific tasks that the human will perform with the computer, to the environment in which the task will be performed, and to
the capabilities, skills, and knowledge that the human will bring to the human-computer system.
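The interaction cycle just described can be summarized informally in code. The sketch below is illustrative only: the stage names follow the description above, but the concrete task, the two-screen "display," and all function names are hypothetical examples introduced here, not part of the Chapanis (1976) or Norman (1986) diagrams.

```python
# Illustrative sketch of the human-computer interaction cycle described above.
# The "display" contents and the balance-checking task are hypothetical.

def perceive(display, screen):
    """Sense and perceive: read what the current screen presents."""
    return " ".join(display[screen])

def evaluate(percept, goal):
    """Interpret the percept and evaluate it against the task goal."""
    return goal in percept

def choose_action(screen):
    """Decide, form an intention, and specify the next action."""
    return "open balances screen" if screen == "main menu" else "read value"

def computer_response(action, screen):
    """The computer responds to the executed action and updates the display."""
    return "balances" if action == "open balances screen" else screen

display = {"main menu": ["accounts", "transfers"],
           "balances": ["checking balance: $1,203.42"]}
goal, screen = "balance", "main menu"

for _ in range(5):                                # the cycle repeats until the goal is met
    percept = perceive(display, screen)           # sensing and perceiving the display
    if evaluate(percept, goal):                   # interpreting and evaluating
        print("Goal satisfied:", percept)
        break
    action = choose_action(screen)                # intention and action specification
    screen = computer_response(action, screen)    # execution changes the display
```

The point of the sketch is only that each pass through the loop depends jointly on what the display presents and on the user's perceptual and cognitive processes; design can fail on either side of that dependency.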
2.1.3 Multidisciplinary Knowledge Base and Methods

Human factors is a multidisciplinary field. It was initially formed by people trained in various disciplines, such as psychology, physiology, engineering, industrial design, and medicine, who came together to address the issues of designing equipment and systems for human use (Chapanis, 1986; Hirsch, 1989; Simonelli, 1989; Smith, 1988). Data from the Human Factors Society (1992), the primary association for human factors professionals, indicate that the field remains multidisciplinary. Table I shows that the members of the Human Factors Society in 1992 had attained their formal education in a variety of academic specialties.

TABLE I. EDUCATIONAL BACKGROUND OF MEMBERS OF THE HUMAN FACTORS SOCIETY

Academic Specialty                     Percentage
Psychology                             44.4
Engineering                            19.4
Human factors/ergonomics               8.4
Industrial design                      2.5
Medicine/physiology/life sciences      3.9
Education                              1.8
Business administration                2.2
Computer science                       1.4
Safety                                 1.5
Other                                  5.8

Although the disciplinary heritage of human factors is broad, two fields (psychology and engineering) have contributed more than the others. As Meister (1989) noted, psychology has provided the basic concepts and methodologies for research, and engineering has provided the context within which human factors is applied to design. Also, as Table I shows, psychology and engineering are the two disciplines that provide the greatest percentages of human factors professionals. A recent survey by the Committee on Human Factors of the National Research Council also highlights human factors' multidisciplinary nature (Van Cott and Huey, 1992). All of the respondents to this survey answered yes to the question, "In your current position, are you primarily concerned with human factors--that is, human capabilities and limitations related to the design of operations, systems, or devices?" However, only 56% of these respondents said they considered themselves to be human factors specialists;
the others called themselves industrial engineers, psychologists, computer scientists, industrial designers, etc., even though they were doing human factors work. The areas of specialization of the highest degree obtained by the respondents covered a broad spectrum, again indicating the multidisciplinary nature of the field. For example, human factors specialists working on human-computer systems obtained their highest degrees in the following areas: human factors, 20%; experimental psychology, 25%; other psychology, 9%; computer science, 9%; engineering, 4%; all other areas, 27%.
2.1.4 Types of Activities

Two major types of activities are conducted under the rubric of human factors. The first is scientific research conducted to obtain information about human capabilities, limitations, and motivations that are relevant to the design of systems. Such research uses the standard experimental design, data collection, and statistical analysis techniques of the behavioral sciences, as well as techniques that have been developed specifically for human factors research (e.g., Meister, 1985). The second type of activity is practice, i.e., participation in system design, utilizing human factors knowledge and methodologies. The term design is used here to refer to all the activities that are involved in the creation of a product or system, including planning, analysis, design specification, development, and testing. Design involves use of the data available in the human factors scientific and applied literature; however, it is rare that the data are sufficient to provide all of the information needed for a specific design problem. Therefore, design typically requires both extrapolation from the existing scientific literature about humans and human factors issues and the use of human factors analysis and evaluation methodologies throughout the design process (Chapanis, 1992; Meister, 1985; Rouse, 1987; Simonelli, 1989).
2.1.5 Skills of the Human Factors Specialist

Although human factors still is multidisciplinary (in the sense that human factors professionals come from other disciplinary backgrounds and human factors professionals borrow useful techniques from other disciplines), over the years a knowledge base and set of methodologies that are distinct to human factors have emerged (Meister, 1971, 1986, 1989). Recently, attempts have been made to specify the core set of knowledge and skills that should be learned during training in the discipline of human factors per se. The Human Factors Society (1990) has specified accreditation criteria for graduate training programs in human factors; these criteria overlap considerably with guidelines offered for graduate training in engineering psychology (Howell et al., 1987). Three important bodies of knowledge, which are included in the accreditation criteria, are knowledge about human characteristics, research methodologies, and analysis and design methodologies. The characteristics of people that are relevant to human factors include cognitive, perceptual, physical, social, and motivational characteristics. Two broad areas of human functioning that are considered especially important by the Human Factors Society are termed "the human as a processor of information" (e.g., attention, perception and cognition) and "the human as a physical engine" (e.g., biomechanics and anthropometrics). For the human factors specialist working in human-computer design, knowledge of the former area is critical. A broad knowledge of the research methodologies needed to conduct research with people (i.e., research on human characteristics and on humans in interaction with technology) is essential for those working in both the science and practice of human factors. This body of knowledge includes research design, data collection, statistical analysis, and the generation of appropriate conclusions from data. It is important for the specialist to be familiar with a wide variety of research methods, since the method that is most appropriate will vary with the purpose of the research and, especially in a design setting, with cost and schedule constraints. The variety of usability testing methods that are described in Section 6.3 fall into this category of knowledge and skills. Analysis and design methodologies are fundamental for the human factors specialist working in a design and development environment. Examples include mission analysis, function allocation, function analysis, and task analysis (which is covered in Section 6.1). In addition, training in dealing with the trade-offs among functionality, usability, performance, cost, and schedule throughout the design and development process is extremely important for the practitioner. These three sets of knowledge and skills (i.e., knowledge about the properties of people, research methodologies, and analysis and design methodologies) form the technical core of the skills of the human factors specialist. However, additional skills are also important. Computer skills are especially useful for those working in the area of human-computer interaction. Even when the specialist does not use the skills for software development, such skills increase awareness of the problems confronted by software engineers and also improve communication with them. Written and verbal communication skills are essential for both researchers and practitioners. In the design and development environment, the specialist must be able to communicate clearly and effectively with colleagues who have different training and different
perspectives. For the practitioner, interpersonal and teamwork skills are also critical and are necessary if the technical core of knowledge is to be used effectively. The human factors practitioner should enjoy working on teams. Negotiation skills and the willingness and ability to view issues from multiple perspectives are crucial for effective problem solving when numerous criteria (e.g., functionality, usability, schedule, costs) must be traded off against each other. This set of skills is broad, and is intended to prepare the human factors specialist for research and for the variety of roles that may be required in practice, e.g., in a software development environment. Equally important for the human factors specialist involved in practice, these skills provide a foundation on which to build as technology and its applications, and therefore the specific design issues to be resolved, change.
2.2 A Brief History

Human factors as a discipline has developed simultaneously with developments in technology. In fact, Chapanis (1986) describes human factors as "a psychology for our technological society." Work that was a precursor to human factors appeared during the industrial revolution (Gilbreth, 1911; Taylor, 1911). However, human factors as a specific discipline originated in work conducted during World War II. The field of human factors emerged primarily in response to needs posed by technological advances in military systems (Chapanis, 1986). Chapanis noted that before the technological advances of the 1940s, the work of behavioral scientists had been focused on fitting the person to the job. This was accomplished by using tests to select the right people, and by developing training programs to teach necessary skills. However, during World War II there were major advances in technology that made this strategy less successful. In combat information centers, for example, information from multiple sources (including sonar, radar, radio, teletype, and visual reports) was acquired, integrated, and disseminated. With this type of complexity, it was easy to design systems that were too difficult for even the right people with the right training to use. As a consequence, it became obvious that it would be beneficial to focus on fitting the job and the equipment to the human, as well as vice versa:

. . . the realization gradually began to form that the best efforts of selection and training specialists were often negated by the way the equipment was designed. The realization came slowly because for years everyone had been conditioned to attribute most accidents to "human error." It took a long time
for us to discover that “human error” could be caused by or diminished by the way the equipment was designed (Chapanis, 1986, p. 57).
At the end of the war, engineering psychology laboratories were established and funded by the U.S. Army Air Corps (which later became the U.S. Air Force) and the U.S. Navy to continue this type of work. Civilian companies were formed to conduct contract work in engineering psychology. In addition, human factors organizations began to appear in American industry. For example, in 1945 AT&T Bell Laboratories hired John Karlin from a laboratory at Harvard University that had been doing work in communications systems for the World War II effort (Hanson, 1983). In 1949, the first textbook in human factors was written by Chapanis, Garner, and Morgan. It was entitled Applied Experimental Psychology: Human Factors in Engineering Design. During the period after the war, human factors as a profession was established. The Ergonomics Society (which was then called the Ergonomics Research Society) was founded in Great Britain in 1949. In 1957 the Human Factors Society was formed in the United States, and Division 21 (Society of Engineering Psychology) was organized within the American Psychological Association. During the past four decades, human factors has grown rapidly. It is still well represented in military and government organizations; in addition, human factors specialists are now working in a variety of different types of companies. The results of a National Research Council survey of human factors specialists (Van Cott and Huey, 1992) show that about 83% of human factors work is currently conducted in six areas: computers (22%), aerospace (22%), industrial processes (17%), health and safety (9%), communications (8%), and transportation (5%). The remaining 17% is spread across many other areas, including energy and consumer products. Membership in the Human Factors Society reflects the growth of the discipline. In 1960 there were about 500 members; in 1980 there were over 3,000 members; in 1990 there were more than 4,500 members. As technology has advanced, the work of human factors specialists has expanded to include work on the new technologies, and has changed to meet the demands of designing usable, useful systems with the new technologies. In the earlier days of human factors, work was often focused on ensuring that the physical design of equipment and systems matched the physical and sensory (e.g., visual, auditory) capabilities of people. However, as technology has become more complex, especially with the rapid development and proliferation of computers, it has become necessary to consider the cognitive processes and capabilities of people during design, as well as their physical and sensory capabilities. Fortunately, the same technology that has made it
essential to consider cognitive processes has also offered the technological possibility of adapting the technology to match the user’s cognitive processes (see Rasmussen, 1988). Work on human-computer systems has been one of the most rapidly growing areas of human factors specialization in recent years. For example, a technical group on Computer Systems was formed in the Human Factors Society in 1972, and was among the first 10 technical groups formed. In 1992 the Computer Systems technical group had 1,134 members and was the largest technical group in the Society (Human Factors Society, 1992).
3. The Human Factors Specialist and the User Interface

The human factors specialist has an important role to play in the design of human-computer systems. He or she is an expert on the human component of the human-computer system, and therefore brings a perspective and a body of knowledge and skills that are unique (relative to other team members who are knowledgeable about the computer component of the human-computer system). The primary focus of the human factors specialist is the user interface of the system, and the primary goal is to ensure that the system is usable by its intended users for its intended purpose. In the first segment of this section (Section 3.1), the importance of the user interface and the concept of usability are discussed. In Section 3.2, a set of general principles for designing usable human-computer systems is described. These principles have been advocated by many human factors specialists during the past decade, and underlie a user-centered design process. Section 3.3 covers the sources of information that are available to and needed by human factors specialists (and other user interface designers) whose goal is to design a usable human-computer system.
3.1 The User Interface and Usability

3.1.1 The User Interface
The user interface includes all components of the computer system with which users come into contact while performing their tasks. For a simple computer system it includes: (a) input and output devices (keyboard, mouse, size and image quality of monitor, audible tones, etc.); (b) the software-based interface (e.g., the information provided, its display format, and user-system dialogue); (c) instructional materials for use, installation, and maintenance (e.g., written, audio, or video materials, on-line help systems, and face-to-face user training); and (d) even hot line and other forms of
technical support for the system. It should be noted that the term user interface includes the interface to all people who use the system, including those who install and maintain it.
3.1.2 The Importance of Usability

To typical users of a computer, the user interface is a vital component of the entire system. It is their only access to the system; it is the way they learn about it, provide information to it, and receive information from it. Extensive, potentially useful features may be offered by the system. This is the functionality of the system; it is what the system can do, and it is typically the first thing to be considered when designing a new system. However, the value of the functionality, of the features, depends on the user's ability to use them. Their value depends on the usability of the system. Usability is "the capability to be used by humans easily and effectively" (Shackel, 1984, p. 54). Unfortunately, usability does not always receive (in fact, it rarely receives) the priority given to functionality. The April 29, 1991, cover of Business Week carried the text, "I can't work this ?#!!@! thing!" The article noted that modern electronics and computer capabilities have dramatically lowered the cost of adding features to machines; as a consequence ever more features are being added. However, many of the features cannot be used easily by the public, and the complexity that results from adding many features may make even the most basic feature difficult to use. The article highlights the fact that consumers want usability. Companies are beginning to hear that message, and they now often use the term "user-friendly" in advertising (although it typically is not made clear what the term means). Furthermore, reviews of software now frequently refer to "ease of use." For example, Nielsen (1992) cites an unpublished study by Anderson, in which 70 reviews of software products in personal computer magazines were analyzed. There were a total of 784 comments on the usability of the software, averaging 11.2 comments per software review. Goodwin (1987) offers many examples that indicate poor usability may seriously compromise the functionality claimed to be offered by a product. One example described by Goodwin (1987) is a study conducted by Eason. Eason studied the use of a banking system in which users query a database by entering a customer's account number and a code for a specific type of report. He found that users employed familiar codes even when the familiar codes were inappropriate for the users' tasks and more appropriate codes existed. The users did not explore system capabilities; they simply learned the minimum amount necessary to accomplish their major tasks. When they were fortunate, they obtained more information than they needed. When
they were less fortunate, they obtained the wrong information or no information. When the user interface was redesigned to present the codes in a more task-appropriate format, there was an increase in the number of different reports obtained. More functions were used, not because more were available but because they were more easily used to perform the tasks.
3.1.3 Usability Is from the User's Perspective!

When usability is considered, it is only the user's performance with the system and opinion about the system that counts. It does not matter if the designers think the system is usable, unless the designers are the primary users. In the early history of computers the designers were the typical users. Gaines and Shaw (1986) note that for the first generation of computers (1948-1955) the operators were skilled engineers who were part of the design team; they adapted their behavior to that required by the machine and considered the problems of the interface to be minor compared with all the other difficulties of using computers. Grudin (1990c) also points out that the first computer users were engineers who had to have a full understanding of the hardware. However, the typical users of computers have changed dramatically since then. With the development of higher-level programming languages, the user became a programmer who no longer had to have expertise in the hardware. Then with the advent of interactive terminals and personal computers, the user population changed even more; nonprogrammers began to use computers in large numbers. The nonprogramming user was referred to as the end user. This terminology is indicative of what is still, perhaps arguably, the perspective of many who design computer applications. The term implies that the end user, who is actually the primary user of the computer (i.e., the person for whom the software is designed), is the last user to be considered:

For many users the computer is the terminal or workstation which they are using and that is the central computer as they see it. But only too often these users are seen as "end-users" by designers--and this name may well betray an attitude which causes some of the bad design for users and failures in usability. Designers must see the user as the centre of the computer system instead of as a mere peripheral (Shackel, 1988, p. 59).
Designing for ease of learning and use requires that the user be considered throughout the design process. It is not sufficient simply to want to accomplish usability; usability must be a goal during the development process on a par with other goals (e.g., goals for functionality, performance, reliability, schedule, etc.). This requires that usability be defined analytically, and that specific usability objectives be stated.
3.1.4 A Definition of Usability

Four components of the human-computer system influence usability and must be considered during the design process (Shackel, 1984, 1988; Bennett, 1984). These components are the user, the tool (i.e., the computer hardware and software), the task, and the environment (Fig. 2). The interactions among these four components determine whether the computer system is usable, so no single one of them should be considered in isolation. What is easy to use for an expert user may be difficult for a novice; what is easy for a novice may be plodding and inefficient for an expert. User-computer dialogue that is efficient for one task may be cumbersome and time-consuming for another. Because the task is performed in an environment, the characteristics of the environment may well influence the user's efficiency and performance with the tool. Shackel's full definition of usability incorporates all four of the components: usability is "the capability in human functional terms to be used easily (to a specified level of subjective assessment) and effectively (to a specified level of performance) by the specified range of users, given specified training and user support, to fulfill the specified range of tasks, within the specified range of environmental scenarios" (Shackel, 1984, pp. 53-54). This definition is extremely specific. The range of users (e.g., their skill level and motivation) is specified. The range of tasks to be performed is specified. The environmental scenarios (e.g., the physical and social context of work and the ways in which the tool will be used in these contexts) are specified. The
FIG. 2. Another depiction of the human-computer system (after Shackel, 1984).
type and amount of user training and support are specified. Given these specifications, usability for individual users is judged by: subjective assessments by the users (e.g., judgments made by the users about ease of learning, ease of use, and satisfaction with the system); and objective measures of performance while using the tool (e.g., time to learn, time to accomplish specific tasks, number of errors). It should be emphasized that these measures are based on the user's opinions and performance. Although this amount of precision may at first glance seem to be excessive, it is important within a product development environment. Without precise specification, usability is considered by many members of a development team to be not only secondary in importance to functionality, but also to be a soft and fuzzy concept. As a consequence, usability needs may easily be ignored. When precise usability goals are specified, they can be treated as requirements that are as significant as other requirements for the system.
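To make this concrete, such operational usability objectives can be written down as testable targets and compared with the data obtained in a usability test. The sketch below is only an illustration of the idea; the measure names and target values are hypothetical and are not drawn from Shackel or from any particular project.

```python
# Hypothetical usability objectives for a specified user group, task set, and
# environment, expressed as testable targets (illustration only).
usability_objectives = {
    "time_to_learn_minutes":   {"target": 30.0, "direction": "at_most"},
    "mean_task_time_seconds":  {"target": 90.0, "direction": "at_most"},
    "errors_per_task":         {"target": 0.5,  "direction": "at_most"},
    "satisfaction_rating_1_7": {"target": 5.5,  "direction": "at_least"},
}

def objectives_met(measured):
    """Compare measured values from a usability test against each objective."""
    results = {}
    for name, spec in usability_objectives.items():
        value = measured[name]
        if spec["direction"] == "at_most":
            results[name] = value <= spec["target"]
        else:
            results[name] = value >= spec["target"]
    return results

# Example data from a hypothetical test with representative users.
measured = {"time_to_learn_minutes": 42, "mean_task_time_seconds": 75,
            "errors_per_task": 0.3, "satisfaction_rating_1_7": 5.9}
print(objectives_met(measured))
# {'time_to_learn_minutes': False, 'mean_task_time_seconds': True,
#  'errors_per_task': True, 'satisfaction_rating_1_7': True}
```

Treating usability targets this way gives them the same standing as other system requirements: they can be tracked, tested against, and renegotiated explicitly rather than dropped by default.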
3.1.5 The User Interface and the Rest of the System

Although the user interface determines whether a system is easy to learn and to use, usability concerns cannot be restricted to design of the user interface and the user interface cannot be considered in isolation from the rest of the system. Decisions about what information should be provided to the user via the user interface influence the functionality, and therefore the design, of the system per se, and not just the user interface. For example, Malin et al. (1991) conducted case studies of the development of 15 intelligent fault management systems for a variety of NASA's aerospace programs, including the Space Shuttle, the Space Station, unmanned (sic) spacecraft, and military and advanced aircraft. They noted that often problems perceived to be user interface design problems were actually system design problems, such as unavailable information or misrepresented information, that resulted from a failure to identify the information that needed to be exchanged between the computer and the human during task performance. Also, decisions about the way the functionality is implemented in software or hardware often severely constrain the user interface. For example, if two applications that use the same data are unable to exchange data, the user may have to enter the data first into one application and then into the other. As Kay (1990, p. 191) noted, the user interface is not a "sandwich spread" that can be applied to poorly designed computer systems; this is like putting "Bearnaise sauce on a hot dog." The entire system must be designed to provide a user interface that supports users in performing their tasks. This fact has implications for the way in which a human factors specialist is involved in a project. If the user's needs must be considered throughout the design of the entire system, and not solely during design of the user
interface, then the human factors specialist must be available throughout the system design process to identify and influence decisions that will affect user interface design.
3.2 Principles for Developing Usable Systems

Human factors specialists working on the design and development of human-computer systems have learned that there are a few principles which, if followed throughout design and development, will greatly enhance usability and therefore usefulness (Bennett, 1984; Bury, 1985; Gould, 1988c; Gould and Lewis, 1985; Meister, 1986; Rubinstein and Hersh, 1984; Whiteside et al., 1988). These principles have in common a focus on the user, prototyping and iterative design, and empirical methods for obtaining information from users throughout the design and development process. These principles, as articulated by Gould and his colleagues (summarized in Gould, 1988c), are briefly summarized below.

• Early and Continual Focus on Users: The user interface designer should have direct contact with users to understand the characteristics of users and their jobs. This contact should begin during system planning and should continue throughout the development process. Behavioral methodologies that are useful in acquiring the necessary understanding include interviews, focus groups, observations at user locations, surveys, task analyses, usability testing, and participative design activities.

• Early and Continual User Testing: Testing to determine the usability of the user interface, as well as the usefulness of the system, should be conducted from initial exploration of the system concept through delivery of the system to users. User testing can be usefully conducted in a variety of ways, and the type of testing that is most appropriate depends on the question to be addressed and the project constraints. For some issues, laboratory experimentation may be the only or the most appropriate way to obtain useful information; for example, laboratory experimentation may be useful in determining the distinctiveness and pleasantness of auditory alerting patterns or the effectiveness of graphical icons in human-computer interfaces (e.g., Israelski et al., 1989). For other issues, it may be important to conduct user testing at the locations at which the systems will actually be used so that the context of system use can be appropriately understood and considered during design. User interface prototypes (from paper-and-pencil sketches to computer-based interactive simulations) are invaluable in supporting early and continual testing. A user interface prototype should be created early in the design process; the prototype is then shown to users, who are asked for their opinions and preferences. If the prototype or an early version of the system embodies enough functionality, users may be asked to do real work with the system, so that performance data (e.g., time to complete tasks, number of errors) can be collected. Based on the data obtained, the prototype is modified and elaborated throughout the development process, with user performance and opinion data collected after each major revision.

• Iterative Design: The design of the user interface is modified based on the results of user testing. This testing and modification process is repeated until the system clearly meets users' needs for functionality and usability. The testing should use prototypes when the "real system" is unavailable, but should use the actual system as soon as it has been implemented adequately for user testing. In addition to testing during development, user testing should be conducted after introduction of the system, so that data are available from extended use and from expert users to improve future releases of the system.

• Integrated Design: All components of the system that affect usability should evolve in parallel. This means that work on the human-computer interface (including the help system), user documentation, and training should be closely coordinated during design and development.
The process suggested by these principles has been used by many human factors specialists (and other team members) in many companies, yielding systems that are easy to learn, easy to use, contain the right functions for the tasks, and are well liked. Reports of the development of several well-known computer systems, including Xerox's Star system (Bewley et al., 1983; Smith et al., 1982) and Apple's LISA system (Williams, 1983), among others, have emphasized the use and value of following these principles.
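The testing-and-iteration part of this process can be pictured as a simple loop. The sketch below is schematic only: the function bodies are placeholders, the stopping condition stands in for a project's stated usability objectives, and none of the names come from Gould or from any particular development organization.

```python
# Schematic sketch of prototype -> user test -> revise, repeated until the
# usability objectives are met. All functions are placeholders for real work.

def build_prototype(requirements):
    """Create an early user interface prototype (paper sketch or simulation)."""
    return {"requirements": dict(requirements), "revision": 0}

def usability_test(prototype, users):
    """Collect opinion and performance data from representative users."""
    # Placeholder: a real test yields task times, error counts, and ratings.
    problems = ["labels unclear"] if prototype["revision"] < 2 else []
    return {"problems": problems, "objectives_met": not problems}

def revise(prototype, findings):
    """Modify the design to address the problems observed in testing."""
    prototype["revision"] += 1          # stands in for concrete design changes
    return prototype

requirements = {"tasks": ["obtain account report"]}
users = ["representative user 1", "representative user 2"]

prototype = build_prototype(requirements)
findings = usability_test(prototype, users)
while not findings["objectives_met"]:   # iterate until users' needs are met
    prototype = revise(prototype, findings)
    findings = usability_test(prototype, users)
print("Ready for implementation after", prototype["revision"], "revisions")
```

The essential point is that each revision is driven by data collected from users, not by the design team's own judgment alone.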
3.3 Information Relevant to User Interface Design

Two major sources of information about humans and human-computer interaction are used during design of the user interface to ensure that the system is usable: (1) the literature in psychology and human factors, especially that on human capabilities and human-computer interaction; and (2) the use of behavioral methodologies to obtain information about users (their characteristics, skills, motivation), their tasks, and their environments, and to obtain information about the usability of the system being designed.

3.3.1 The Literature

The relevant literature includes theory and data on human perceptual, cognitive, and physical capabilities and limitations, and on human-computer
interaction. It also includes user interface design standards and guidelines, which are often based on a combination of theory, data, practical experience, and common practice. The literature is often consulted when analyzing and evaluating an existing user interface or when creating the initial design of a new user interface. It is beyond the scope of this chapter to provide an overview of the literature in the behavioral sciences and in human-computer interaction that is relevant to user interface design; instead, a few pointers to the literature are provided. Information on basic human capabilities and limitations must be considered when designing human-computer systems. For example, in designing visual displays, characteristics of the human visual system must be understood. When designing the user-system dialogue of a computer interface, the users' learning, memory, and problem-solving processes must be considered. There are several general references on human performance, perception, and cognition and corresponding human factors design issues (e.g., Sanders and McCormick, 1987; Van Cott and Kincade, 1972; Woodson and Conover, 1964). However, it has often been difficult and/or time-consuming for user interface designers to access the detailed, current information that may be most relevant for their design problems (Van Cott, 1990; Rouse and Boff, 1987a; Boff, 1987). Over the past decade, a multiagency United States Government-supported project was undertaken to improve the accessibility and use of human factors data in system design. The project, called the Integrated Perceptual Information for Designers Project (IPID), has produced several major reference sources. The first reference source was the Handbook of Perception and Human Performance (Boff et al., 1986), which summarizes a broad range of knowledge about sensory, motor, and cognitive processes. The second product was the Engineering Data Compendium (Boff and Lincoln, 1988; Lincoln and Boff, 1988). In compiling the Compendium, special effort was made to ensure that the reference provides information relevant to design and that the material is as easy to use as possible. Van Cott (1990) provides an overview of this reference source. Work is currently underway to provide the compendium on a compact disc, with visualization tools that will provide actual experience with some of the human perception and performance phenomena (Boff et al., 1991). Literature in human-computer interaction has been growing extremely rapidly. Perhaps the best reference book is the Handbook of Human-Computer Interaction edited by Helander (1988). This book includes many specific chapters that together cover most of the scope of human-computer interaction. For example, headings of sets of chapters are: (a) Models and theories of human-computer interaction, (b) User interface design, (c) Individual differences and training, (d) Applications of computer technology, (e) Tools for design and evaluation, (f) Artificial intelligence, and (g) Psychological and organizational issues. Two recent, edited volumes that grew out of workshops provide a sampling of perspectives on software design (Karat, 1991) and theory in human-computer interaction (Carroll, 1991). Many journals cover literature relevant to the design of human-computer interfaces. They include Human Factors, Human-Computer Interaction, Ergonomics, Behaviour and Information Technology, International Journal of Man-Machine Studies, and SIGCHI Bulletin. The proceedings of the annual meetings of the Human Factors Society and the Association for Computing Machinery's Special Interest Group on Human-Computer Interaction (SIGCHI) also contain many relevant papers. Many guidelines and standards documents now exist to aid in user interface design. These are summarized in Section 7 on "Designing for User Interface Consistency." Although the human factors literature and the literature on human-computer interaction are extensive, it is rare that a designer can go to the literature and find precise, definitive answers to all the questions that must be answered to complete a design. Design questions are frequently context-specific, framed by the constraints of the users, the tasks, the environment, the technology, and even the development schedule. It is almost always necessary, during the design process, to conduct analytical and empirical studies to learn more about users and users' tasks and to assess the usability of designs.
Two recent, edited volumes that grew out of workshops provide a sampling of perspectives on software design (Karat, 1991) and theory in humancomputer interaction (Carroll, 1991). Many journals cover literature relevant to the design of human-computer interfaces. They include Human Factors, Human-Computer Interaction, Ergonomics, Behaviour and Information Technology, International Journal of Man-Machine Studies, and SIGCHI Bulletin. The proceedings of the annual meetings of the Human Factors Society and the Association of Computing Machinery’s Special Interest Group on Human-Computer Interaction (SIGCHI) also contain many relevant papers. Many guidelines and standards documents now exist to aid in user interface design. These are summarized in Section 7 on “Designing for User Interface Consistency.” Although the human factors literature and the literature on humancomputer interaction is extensive, it is rare that a designer can go to the literature and find precise, definitive answers to all the questions that must be answered to complete a design. Design questions are frequently contextspecific, framed by the constraints of the users, the tasks, the environment, the technology, and even the development schedule. It is almost always necessary, during the design process, to conduct analytical and empirical studies to learn more about users and users’ tasks and to assess the usability of designs.
3.3.2 Beha vioral M e tho dologies Behavioral methodologies include both analysis and evaluation techniques. The term behavioral is used to indicate that the human’s behavior (cognitive, perceptual, or physical) is the primary focal point. The methodologies of the social sciences, especially experimental psychology, serve as the foundation for many of the data collection techniques used by human factors specialists (e.g., Anderson and Olson, 1985; Cook and Campbell, 1979; Gould, 1988c; Karat, 1988; Landauer, 1988; Meister, 1985). These techniques include laboratory experimentation, field studies, questionnaires and interviews, verbal reports, focus groups, naturalistic observations, and performance testing. In addition to using these techniques when they are appropriate, human factors specialists have modified many of the methodologies to provide the
information needed for design in a more time- and cost-effective manner that meets the constraints of fast-paced design and development projects. The particular methodology that should be used at any one time depends on the specific question to be answered and, often, on the time available to obtain the answer. Discussions of the appropriate match between design question and behavioral methodology are presented in Anderson and Olson (1985), Karat (1988), Landauer (1988), and Meister (1985). Two categories of behavioral methodologies are especially useful for the human factors specialist involved in human-computer system design. One is a set of analytical methodologies that can be referred to generically as task analysis. These analytical tools enable the designer to understand the user's activities while performing a task, from the user's perspective. Task analysis is described in Section 6.1. The second category includes methods for testing and evaluating a system to determine if it is usable. These methods are described in Section 6.3.
4. Models of the Software Development Process
Most of the human factors activities described in this chapter take place in the context of the software development process. To be most effective, these activities must be appropriately coordinated with the activities of other project team members and be formally integrated into the project plan for the overall system. How that coordination occurs depends on the software development model that is being followed. A brief review of software development models is offered below. This review illustrates that models of software development have been changing in recent years in ways that better accommodate the type of design process that is required for interactive human-computer systems. Not coincidentally, the same time frame has seen the creation of design tools, i.e., user interface prototyping tools, that make the new models both possible and more effective; these will be discussed later in this chapter. Software development models are used to structure the design and development process; they specify the order of the stages involved in software development and the transition criteria for moving from one stage to the next (Boehm, 1988). The model that is perhaps the most widely used is the linear or "waterfall" model. This model assumes that software is developed in successive stages, beginning with system planning and then proceeding through various stages of requirements, high-level design, detailed design, coding, integration, deployment, and operations and maintenance. (See Boehm [1988] for an overview of this model and others.) Although there
may be feedback from one stage to the preceding stage, in general it is assumed that work will proceed sequentially from one stage to the next. This type of model is characterized as top-down: the system is specified at a general level first and specifications become increasingly detailed; when the design is sufficiently precise, it is implemented in code. This type of sequential, noniterative design process has been successful for noninteractive systems, but has been less successful with interactive human-computer systems (Boehm, 1988; Grudin, 1991; Hartson and Hix, 1989). A primary problem with the waterfall model is its requirement that detailed documents be completed (e.g., requirements and design documents) before proceeding to the next stage. These documents are typically text-based documents that describe detailed components of the system, including the user interface. However, it is difficult to write adequate detailed requirements for user interfaces for interactive systems without first developing prototypes of the user interfaces. A user interface designer needs to be able to see the design, to work through user scenarios step-by-step, and to look across various user scenarios to ensure that the design is coherent and appropriately consistent. In addition, the user interface designer needs to collect feedback on the design from users to determine if it meets their needs and is usable from their perspective. Text-based requirements can lead to the generation of a large quantity of code underlying a user interface that is found, after development, to be difficult to understand and to use. The consequence at that point is that either an inadequate product is delivered, or it takes longer to deliver the product because code must be modified to meet users' needs. In addition, studies of the process of designing interactive computer software have revealed that a waterfall model is not always followed, even if it is stated as the appropriate design process (e.g., Hannigan and Herring, 1987; Hartson and Hix, 1989; Johnson and Johnson, 1989). Developers have reported that there is no uniform way of designing software; the process differs from designer to designer, stages in the process are not always discrete and do not always occur in the specified order, and there are iterations between some of the stages, depending on the product, design team, and schedule. Coding is often begun before requirements have been completed, or, as Hannigan and Herring (1987) put it, specifications are "refined, updated, reviewed, changed, disobeyed, etc." as development proceeds. Models that are more iterative in nature and that involve all members of the design and development team working in parallel hold promise for delivering more usable human-computer systems, as well as for reducing development time and better supporting the actual work habits of designers and developers (e.g., Boehm, 1988; Hartson and Hix, 1989; Winner et al., 1988).
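The structural difference between the two views can be caricatured in a few lines of code. This is only a sketch: the waterfall stage names are taken from the description above, while the function names, the gating logic, and the toy usage example are assumptions made here for illustration, not a faithful rendering of any published model.

```python
# Caricature of the two process shapes discussed above (illustration only).

WATERFALL_STAGES = ["planning", "requirements", "high-level design",
                    "detailed design", "coding", "integration",
                    "deployment", "operations and maintenance"]

def waterfall(do_stage):
    """Sequential: each stage is completed once, in order, before the next begins."""
    artifacts = {}
    for stage in WATERFALL_STAGES:
        artifacts[stage] = do_stage(stage, artifacts)  # output of one stage feeds the next
    return artifacts

def iterative(build_and_test, objectives_met, max_cycles=10):
    """Iterative: a prototype is built, evaluated with users, and refined each cycle."""
    design = None
    for cycle in range(max_cycles):
        design = build_and_test(cycle, design)         # requirements and design revisited
        if objectives_met(design):
            break
    return design

# Toy usage: the "design" becomes acceptable to users only after a few cycles.
result = iterative(lambda cycle, prior: {"cycle": cycle, "usable": cycle >= 2},
                   lambda design: design["usable"])
print(result)   # {'cycle': 2, 'usable': True}
```

The contrast is deliberately crude, but it captures why text-based stage exit documents fit the first shape better than the second: in the iterative shape, evaluation with users is what closes each cycle.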
Winner et al. (1988) describe a general approach to development called concurrent engineering that addresses some of the problems of the waterfall model. Concurrent engineering is a systematic approach to the integrated design of products and their associated processes, including manufacture and support. Key components of concurrent engineering include:

1. Consideration, from the initial stage of planning, of all elements in the system's life cycle from conception through disposal;
2. Multidisciplinary teams that identify all issues in a timely manner and that evaluate the impact and risks of various design alternatives;
3. An early and continuously increasing understanding of user needs throughout the process; and
4. Ongoing dialogue regarding the trade-offs among user needs, cost, schedule, and quality.
The concurrent engineering model assumes that there will be iteration of the design throughout the system development process, with increasing closure as the understanding of all relevant parameters increases. For example, user interface requirements might initially be high-level and general; as communication among user interface designer, project team members, and potential users continues throughout the design process, the requirements will become more specific and detailed.

Boehm's (1988) spiral model of software development is consistent with the concurrent engineering approach. The spiral model includes prototyping, testing, and iteration as key concepts. There are separate cycles for each phase of product design and development. Boehm's phases are investigation of the concept, requirements, design, and detailed design. Within each cycle, the same sequence of steps is addressed. The steps involve:

1. Determining the objectives, alternatives, and constraints for the portion of the system being addressed (e.g., performance, functionality, user interface);
2. Evaluating the alternative approaches and identifying and resolving risks;
3. Doing the appropriate type of development and testing for this phase (e.g., developing and validating software requirements); and
4. Planning for the next phase.

For example, if designing a poor user interface is a risk, Boehm (1988) suggests the project team might use the "risk management techniques" of prototyping, task analysis, user scenarios, and user characterization to ensure that a usable user interface is designed. In each successive cycle,
the prototype becomes more detailed, and different specific risks might be identified, which could be resolved through user testing.

Hartson and Hix (1989) offer a model specifically for human-computer interface design that deviates even further from a sequential model. Their "star life cycle" places evaluation at the center of the other activities involved in user interface development. The model is called a star because evaluation is placed at the center of the star, with each of the other activities representing the points of the star. The other activities are:

(a) task and functional analysis,
(b) requirements and specifications,
(c) conceptual design and formal design representation,
(d) prototyping, and
(e) implementation.
According to the star model, these activities may occur in almost any order and alternation among them may be rapid. However, evaluation occurs between any move from one activity to another. For example, after modifying the prototype, evaluation would be conducted before detailing requirements based on the prototype. The evaluations might be minimal and informal (e.g., a peer review if only minor changes have been made to a prototype) or major and formal (e.g., usability testing with potential users if major changes have been made to a prototype).

Concurrent engineering, the spiral model, and the star model vary in their scope, with concurrent engineering pertaining to any type of engineering, the spiral model pertaining to software development, and the star model pertaining specifically to human-computer interface development. However, the models are consistent in emphasizing the importance of clarifying users' needs through iterative design and testing. Also, the star model of human-computer interface design is easily embedded within the spiral model, such that iterative prototyping and testing of the user interface occurs within any one general cycle of the spiral model; for example, the user interface prototype may be modified several times during initial planning, several times during the higher-level requirements phase, and several times during the design phases.

The growing awareness of the concurrent engineering and spiral models changes the design and development team's expectations about the value of iteration, making it easier for human factors specialists to integrate iterative prototyping and testing into the overall system development process, regardless of the specific model followed for a design and development project. Even within the outline of a waterfall model, rapid prototyping tools make it possible to design and test user interface prototypes iteratively during the
requirements phase, i.e., during the same period of time that systems engineers and other team members are creating detailed functional requirements. The resulting user interface prototype may then become part of the package of detailed system requirements that are given to the software developers who implement the requirements in code. (For this model to be successful, it is imperative that the human factors specialist/user interface designer communicate frequently and well with the software developers to ensure that the prototyped user interface can actually be implemented under the various development constraints.) Within the outline of a concurrent engineering and/or spiral model, the iterative user interface design process tends to be somewhat easier, because the need for iteration and clear, continuing communication is better understood by all project team members, who may also be engaging in iterative design of their own components of the system.
5. Human Factors Activities in Human-Computer System Design
General principles for designing usable human-computer systems, described in Section 3, are carried out within the software development process described in the last section (Section 4). As members of design and development teams, human factors specialists have four major roles to play; these are described in Section 5.1. Human factors specialists fulfill these roles by performing the activities described in Section 5.2. These activities cover the entire design and development process, from planning to deployment. If all or most of these activities are performed, the probability of providing usable systems that adequately meet users' needs is greatly increased.
5.1 Roles of the Human Factors Specialist on Software Development Teams

The human factors specialist should play four primary roles, all of which are interrelated, on human-computer system design teams. These roles are:

1. Designer of the human-computer interface;
2. Tester and evaluator of the user interface;
3. User advocate; and
4. Integral member of the design team.
Because design and evaluation should occur iteratively throughout the design process, these two roles are discussed under the same heading.
5.1.1 Designer and Evaluator of the User Interface

One primary responsibility of the human factors specialist should, ideally, be the design of the user interface (Chapanis, 1991). The human factors specialist is often the only member of the design team who brings a knowledge of human capabilities and human-computer interaction, an ability to locate additional information rapidly, and the skills in behavioral methodologies necessary to collect information from users during the design process. This behavioral information, combined with information from other team members, is essential for creating a user interface that is easy for people to learn and to use.

The claim that the human factors specialist should have a primary design responsibility is not meant to imply that the human factors specialist is the only team member with a role to play in design. Graphical designers may contribute significantly to the aesthetic appeal of graphical and multimedia user interfaces; software engineers and other engineers contribute information about software and hardware alternatives; technical writers may be responsible for creating instructional materials for use, installation, and maintenance; and training experts may develop training courses.

The design role involves both design and testing (or evaluation), conducted iteratively throughout the design process. Testing during design is sometimes referred to as formative evaluation, while testing of a completed design is called summative evaluation. The term usability testing is used in this chapter to refer to both formative and summative evaluation. Ideally, an initial design of a user interface is created, perhaps with a rapid prototyping tool; this design is then demonstrated to potential users and to project team members, whose feedback is used to modify the design. This process is repeated until the user interface design clearly meets users' needs. The user interface prototype then becomes a major segment of the user interface requirements for the system or, depending on the prototyping tool, is actually incorporated into the system. The process of iterative design and evaluation is widely regarded as critical for good user interface design, and will be emphasized throughout this article. (In Section 6.2, rapid prototyping of the user interface is described in more detail.)

As noted previously, the user interface includes not only computer hardware and software, but also instructional and technical support materials. Although the human factors specialist may assist in the design of instructional materials, the most typical human factors role is evaluation of the instructional and support materials to ensure their usability. Technical writers are often members of the project team, and are responsible for creating the instructional and support materials.

The role of user interface designer has only recently become common for human factors specialists. The roles most frequently played in the past were
those of consultant and tester/evaluator. As a consultant, the human factors specialist provides information about human capabilities and user interface design issues to system developers, who have responsibility for both designing and implementing the user interface. The information provided by the human factors specialist may come both from the literature and from application of the behavioral methodologies. As a tester and evaluator, the human factors specialist tests and evaluates the user interface during or after its design. This is a model that has been prevalent in the design of large human-machine systems for the military (Meister, 1987).

Unfortunately, human factors specialists operating as consultants and evaluators often find it difficult to influence design sufficiently to ensure usability, regardless of the size of the system being designed (Grudin and Poltrock, 1989; Meister, 1987; Meister and Farr, 1967). They are frequently brought in too late for both advice and evaluation. After decisions that severely constrain the user interface have already been made about the hardware and software platforms and after a significant portion of the system has already been coded, developers often find it impossible, given schedule constraints, to implement changes that are necessary to ensure that the system is usable.

The increasing availability of rapid prototyping tools for user interfaces has greatly enhanced the human factors specialist's ability to function as user interface designer. Such tools allow the user interface designer to design and test the user interface iteratively, and to specify (precisely) the look and feel of the user interface before implementation begins. In addition, some user interface prototyping tools generate code, and therefore the prototype can actually become the user interface of the final software.
5.1.2 User Advocate

Another primary function of the human factors specialist is that of user advocate. The human factors specialist must ensure that the needs of users of the system are given priority throughout its design. As a user advocate, the human factors specialist functions as a champion of "user-centered system design," i.e., design that is driven by the user's needs and that always considers the user's perspective (Norman and Draper, 1986). To ensure that users' needs are met, the human factors specialist must understand the users (their basic perceptual and cognitive capabilities and their skills that are relevant to the user interface and to the particular tasks), the tasks they will perform with the computer system, and the environments in which the computer system will be used to perform the tasks (Bennett, 1984; Shackel, 1984, 1988). The human factors specialist must ensure that, throughout the design process, users' needs and concerns are given priority; all the other issues that vie for attention in system design (e.g., cost, schedule,
performance) should be considered within the context of users’ needs for usability as well as functionality.
5.1.3 Integral Team Member

The human factors specialist should function as a full team member on the project team, and should be involved throughout the entire system design and development process, from the planning stages through use of the system by its intended users. (If the project is large, there may be multiple human factors specialists, all of whom should function as full team members.) The design and development of a human-computer system is a multidisciplinary activity that requires specialists with many different skills (Catterall et al., 1990; Fissel and Cecala, 1988; Kim, 1990; Laurel, 1990). A typical project team may include representatives from many organizations, including marketing, systems engineering, human factors, industrial design, graphic design, technical writing, training, development, and operations and maintenance. These representatives bring different areas of complementary expertise to the team. It is important that the relevant expertise be available when it is needed. The expertise of the human factors specialist (as well as that of many other team members) is needed throughout the process.

There are two major reasons the human factors specialist needs to be involved throughout the process. First, many design decisions affect the user interface, and information about the impact of these decisions on the interface must be provided in a timely fashion. Second, information from users should be collected throughout the design process, initially to identify their characteristics, skills, needs and tasks and later to assess the extent to which the evolving design meets their needs and is usable.
Timely Information. The user interface is affected by many team decisions made throughout the design and development process. Often the human factors specialist is the team member who is most knowledgeable about the effect of software and hardware decisions on the user interface; therefore, the human factors specialist should be involved during such decision-making to identify the decisions that will affect the user interface and contribute to the decision-making process. Figure 3 illustrates some of the information and constraints that influence the user interface. The human factors specialist must be able to provide the following types of information at the right time to affect system design:

- Information about users that supplements information provided by a marketing organization, such as information about users' tasks, users' task-relevant skills, and users' environments. This user information may influence decisions about system functionality and hardware and software platforms, in addition to the design of the user interface.
- Analyses of the impact on the user interface of potential decisions about other components of the system (such as software platform, hardware platform, and software architecture) and about components of the project plan (such as schedule, development costs, and overall system costs).
- Analyses of the advantages and disadvantages of various approaches to implementing the user interface, e.g., character-based versus graphical user interfaces.

FIG. 3. Information and constraints that shape the user interface. (The figure shows user information (users' environment, users' skills), human factors data (basic human capabilities, human factors and human-computer interaction), human factors standards/guidelines, and user feedback (prototypes, usability testing) shaping the user interface, together with constraints imposed by performance, cost, the software architecture, the software platform, the hardware platform, and the ongoing activities of other project team members.)
Information from Users. Second, feedback from users should be collected throughout the design process. Different information is needed from users at different phases in the design process. For example, information about users’ tasks and environments is required early in the process, but data on users’ responses to user interface prototypes should be obtained throughout design and development. Different behavioral methodologies are required depending on the type of information needed at each phase in the development process. The human factors specialist who fully participates in the project team will be present to provide essential information about users and the user interface at the right time.
A common problem for human factors specialists, and for the projects on which they work, is that they become involved in the development process too late. Grudin and Poltrock (1989) conducted a survey in seven large corporations to investigate the roles and activities of different professionals involved in user interface design. They found that 100% of the human factors respondents wanted to be involved in projects before implementation (i.e., coding) of the user interface was begun, but only 57% reported being involved in projects this early.* Furthermore, only 27% of the human factors specialists reported that their activities were "always or usually" successful when their involvement began after implementation was complete, i.e., when they were brought in to evaluate the final product. Twenty percent said they were "occasionally" successful, and 53% said they were "rarely or never" successful.

The software engineers were even more negative in their evaluations of the effect of the late involvement of human factors specialists. Only 10% of the software engineers said human factors specialists were "always or usually" successful if involved after implementation had begun; 25% said they were "occasionally" successful, and 66% said they were "rarely or never" successful. The evaluations of the respondents from marketing were similar to those of the software engineers.

However, respondents reported more successes when involvement began earlier in the development project. When asked how often the projects had successful outcomes when human factors activities began in the middle of development projects, 43% of the human factors specialists responded "always or usually," 40% responded "occasionally," and 17% reported "rarely or never." It might be expected (and has been the experience of many human factors specialists) that involvement at the beginning of a project is even more successful. The Grudin and Poltrock (1989) data confirm that involvement should begin early and be continuous to have maximum impact on the user interface and on usability. Early involvement requires that marketing, systems engineering, and development, as well as human factors specialists, recognize its importance.

* Technical writers, who are responsible for user documentation, a component of the user interface, would also benefit from being included earlier in the development process; 87% wanted to be involved before implementation, but only 28% actually were.
5.2 Human Factors Activities at Each Phase of Development

Carrying out the roles just described has different implications at each phase of the development process, when different tasks may be completed or different methodologies used. Five general phases of system design are shared by virtually all software development process models, although the
extent to which these phases occur in parallel or overlap varies among models. These phases are: planning, design, implementation, testing, and deployment. As is highlighted in Boehm's (1988) spiral model, each of the first four phases may occur iteratively, but at a greater level of detail, as system design and development progress. At each of these phases, there are important activities for the human factors specialist to perform in collecting information from users and in design of the user interface. The major activities are listed in Table II.

TABLE II
HUMAN FACTORS ACTIVITIES DURING SYSTEM DESIGN AND DEVELOPMENT
(The table cross-tabulates the phases of system development - planning, design, implementation, testing, and deployment - against three general human factors activities - user needs analysis/task analysis, user interface design/prototyping, and usability testing - and marks the phases during which each activity typically occurs.)

As this table highlights, the same general type of human factors activity (e.g., user needs analysis, user interface prototyping) may occur during several of the phases, although the particular methodology or the way it is employed will probably vary depending on many of the specific characteristics of the system and project (e.g., new system or later release, phase of system development, project schedule, homogeneity or heterogeneity of user population, prior knowledge of users, old or new technology, etc.). Because of the variety of system- and project-related factors that influence selection and use of methodology, it is impossible to provide one "recipe" for which specific methodology should be used when and in what manner. This is why skill and training in human factors are critical; one cannot simply "follow the rules," because every new development project requires some wisely considered exceptions to the rules. Thus, the human factors specialist should be familiar with the wide variety of methodologies and their use, so that the right one can be used at the appropriate time in the most useful manner. Each of the major human factors activities listed in Table II is described in more detail in Section 6 of this chapter.

The human factors specialist, as a primary user advocate, must ensure that data about the users and their needs are available as required throughout the process and must ensure that these data are appropriately translated into design. To be effective, the human factors specialist must not only consider the users' needs, but must also consider the trade-offs among user needs and
other project goals and constraints (such as performance, cost, and schedule). Thus, the human factors specialist must look both toward the user and toward the project team throughout the system development process. Below is a high-level (and unusually complete, relative to “real-world” practice) description of the human factors specialist’s activities at each phase of the system development process.
5.2.1 Planning

During the planning phase, the human factors specialist's primary activities are:

- To collect information about users' characteristics, tasks, environment, and needs;
- To incorporate this information into potential user models and high-level user scenarios; and
- To create preliminary user interface prototypes.

Table III outlines these activities, along with some of the behavioral methodologies that may be employed during this phase.

TABLE III
PLANNING PHASE - UNDERSTANDING USER NEEDS

Human factors activities

Looking toward the user:
- Collect information about users (their skills, tasks, and environment)
- Analyze user interfaces of competitive systems
- Create preliminary user models
- Create high-level user scenarios
- Create preliminary user interface prototypes
- Demonstrate the user interface prototypes to users

Looking toward the project team:
- Understand project goals (functionality, schedule, costs, etc.)
- Include human factors activities in the project plan
- Inform the project team of implications for users and for the user interface of hardware and software options
- Demonstrate the prototype to project team members

Behavioral information and methodologies:
Literature searches (human capabilities and limitations, human-computer interaction), task analysis, competitive usability analysis, user profiles, interviews, focus groups, naturalistic observations, rapid prototyping
While collecting information on user needs, the human factors specialist obtains information about the users' skills, the tasks they currently perform with their existing system and will perform with the new system, and their environment (other computer systems in use, work processes and procedures, etc.). This information can be used to create descriptions of the users' conceptions of the system and their tasks with the system (user models), descriptions of the users' tasks in relation to their capabilities (task analyses), and preliminary step-by-step descriptions of the way the new system might be used to perform tasks (user scenarios). With this information, along with preliminary information from the project team, the human factors specialist may create preliminary user interface prototypes. These prototypes may be shown to users to collect additional feedback.

Another activity that is sometimes performed by human factors specialists is analysis of the user interface of competitive systems. In some organizations, competitive analyses are performed by market researchers or by human factors specialists and market researchers working collaboratively. An analysis of the strengths and weaknesses of existing, competitive designs can be an excellent source of good ideas and can help to prevent serious design problems. Usability testing (described in Section 6.3) can be performed with competitive systems to help specify usability goals for the system being planned.

While obtaining information about users and competitive systems, the human factors specialist maintains close contact with the project team. He or she must understand the project goals and must ensure that human factors activities are included in the project plan. As various hardware and software options are discussed (such as the choice of hardware and software platforms), the human factors specialist seeks to understand their implications for users and the user interface, and informs the project team of the implications. The information obtained about the users is shared with the other team members to maintain a continuing focus on the user. The user interface prototypes are demonstrated to team members to improve communication and to ensure that there is a common vision of the system being designed.
5.2.2 Design

During the design phase, the human factors specialist continues most of the activities begun during the planning phase, but at a more detailed level (see Table IV). He or she continues to collect information on users' needs. This may involve collecting more specific information from users about the implications of design issues under consideration, and it may involve again consulting the literature on human capabilities and human-computer interaction. More detailed user models and user scenarios are developed to support the design of more detailed prototypes. User interface guidelines and standards are identified or created to support the design and development of a consistent and usable user interface. In addition, usability objectives for the system are specified, and work begins on a usability test plan that will become one component of the final system test plan.

TABLE IV
DESIGN PHASE - ITERATIVE PROTOTYPING AND USER FEEDBACK

Human factors activities

Looking toward the user:
- Develop user models and user scenarios in more detail
- Set usability objectives
- Specify usability test plans
- Refer to and/or create user interface standards and guidelines
- Create detailed user interface prototypes
- Collect feedback on the prototype from users
- Engage in iterative design and testing
- Conduct laboratory experiments where necessary

Looking toward the project team:
- Demonstrate prototype to project team members
- Understand project design constraints and provide information on their implications for user interface design
- Participate in determining trade-offs and in design problem solving
- Create user interface requirements
- Hold walkthroughs with project team members
- Add user scenarios to the system test plan
- Conduct expert reviews by other user interface designers
- Coordinate work on all aspects of the user interface

Behavioral information and methodologies:
Literature searches (human capabilities and limitations, human-computer interaction), task analysis, competitive usability analysis, user needs analysis, user interface standards and guidelines, demonstrations, usability testing, experimental tests, questionnaires, interviews, observations, thinking-aloud techniques, verbal reports

At this phase, the iterative process of design and collection of user feedback is critical. As the user interface prototype is developed it is demonstrated to users, and the prototype is then revised based on the user feedback. If the prototype or components of the prototype are functional (i.e., if the prototype can actually be used), users are asked to perform common or critical tasks with the prototype. This may include initial testing to determine if usability objectives are being met. User performance data, such as task
completion time and errors, are collected, along with users' opinions and preferences. Portions of the user instructional materials may also be tested.

Relevant external and company user interface standards are identified at the beginning of the design phase, and the user interface is designed to be consistent with the standards. Throughout the evolution of the design, compliance with the standards is assessed. If the standards are not sufficiently complete, it may be necessary to supplement them with additional standards that are specific to the product line. It may also be necessary to assess consistency with other systems that have been developed as part of the same "family" of systems or products, to ensure appropriately consistent user interfaces across systems.

Communication with other members of the project team remains critical. The evolving prototype is shown to marketing representatives on the team to ensure that the system being developed is what marketing had intended. The prototype is demonstrated to systems engineering, development, and system test members of the project team, both to facilitate communication and, if the prototype itself will not become part of the final system, to ensure that the design can actually be implemented. The human factors specialist works with other members of the project team on a continuing basis to understand hardware and software constraints and other project constraints, and to help in problem solving. Often the human factors specialist works with other project team members in addressing problems that cross the boundaries of functional specialties, as many problems do.

A primary deliverable of this phase is often a set of human-computer interface requirements that specify precisely how the human-computer interface of the final system will "look and feel" (i.e., the appearance and the user-system dialogue of the human-computer interface). The requirements may incorporate the prototype along with additional textual or graphical information about user-system dialogue or other information that cannot easily be conveyed explicitly with the prototype. If the rapid prototyping tool used is one that generates code that can be used in the final system, the prototype may be handed off to system developers to become integrated with other system components.

As the prototype, which will become part of the human-computer interface requirements, is being developed, reviews of the prototype are held with team members. If a human factors community exists, reviews or "walkthroughs" of the user interface are held with other human factors experts (e.g., Jeffries et al., 1991). In addition, there should be at least one formal review or walkthrough with other members of the project team, especially the system developers, to ensure that the requirements are understood by all and that no major implementation problems are expected.
During this phase, if not before, plans are made for designing all other aspects of the user interface, such as user instructions, training materials, and maintenance instructions. Work on all components of the user interface is coordinated to ensure consistency from the user’s perspective.
5.2.3 Implementation and Testing

While the system is being implemented and tested, the human factors specialist continues to work with both users and team members (see Table V). The iterative design process should not be considered complete; the user interface prototype or evolving final system should continue to be tested with users whenever reasonable. The cost of fixing problems identified early is much lower than the cost of fixing problems after development is complete (e.g., Mantei and Teorey, 1988). The human factors specialist works with developers to resolve the problems that inevitably surface during development. As a part of system test, usability testing is conducted to ensure that the final system meets the usability objectives that were set in the design phase. Usability testing should include tests of instructional materials as well as the human-computer interface.
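To make the idea of testing against usability objectives concrete, the following minimal sketch (expressed in Python purely for illustration; the metrics, targets, and measured values are invented and are not taken from any particular project) compares hypothetical usability test results against objectives of the kind set during the design phase.

# A minimal, hypothetical sketch of checking usability test results against
# usability objectives set during the design phase. The metrics, targets,
# and measured values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Objective:
    metric: str             # what is measured
    target: float           # the objective set during design
    higher_is_better: bool  # direction in which the measurement should move

def meets(objective, measured):
    # Return True if the measured value satisfies the objective.
    if objective.higher_is_better:
        return measured >= objective.target
    return measured <= objective.target

objectives = [
    Objective("mean task completion time (seconds)", 120.0, higher_is_better=False),
    Objective("errors per task", 1.0, higher_is_better=False),
    Objective("tasks completed without assistance (%)", 90.0, higher_is_better=True),
]

# Hypothetical results from a usability test of the prototype.
measurements = {
    "mean task completion time (seconds)": 135.0,
    "errors per task": 0.8,
    "tasks completed without assistance (%)": 92.0,
}

for obj in objectives:
    status = "met" if meets(obj, measurements[obj.metric]) else "NOT met"
    print(f"{obj.metric}: target {obj.target}, measured {measurements[obj.metric]} -> {status}")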
TABLE V
IMPLEMENTATION AND TESTING PHASES - ITERATIVE DESIGN AND USER TESTING

Human factors activities

Looking toward the user:
- Continue iterative design and testing as needed
- Conduct usability testing of all components of the user interface

Looking toward the project team:
- Work with software developers to resolve problems
- Participate in system test of the user interface, working through the user scenarios
- Ensure all components of the user interface are consistent

Behavioral information and methodologies:
Usability testing, questionnaires, interviews, observations, verbal reports

5.2.4 Deployment

When the system is introduced to the target users in "alpha" and/or "beta" tests (i.e., limited introductions before the system is made generally available), it is especially important that human factors specialists be
involved in the evaluations (see Table VI). No matter how exhaustive the iterative design process has been, unexpected problems occur when users begin to rely on the system to perform their tasks. Methods of collecting data may include observations of people using the system to perform their daily tasks, performance on a series of tasks selected to uncover hidden problems, and users' responses to interviews or questionnaires. If problems are identified at this stage that will significantly decrease usability, changes will be required before general deployment. (If the process of iterative design and testing has been followed, such unpleasant surprises should not be numerous, although some will undoubtedly occur.)

Even after the system has been in use for some time, additional data on the system's use and its usability are collected. After a system has been used extensively, users often identify additional or different problems than they noted when they first began to use the system. This information can be fed into the design process for new releases of the system or for new systems.

TABLE VI
DEPLOYMENT PHASE - USER FEEDBACK

Human factors activities

Looking toward the user:
- Conduct usability testing at the users' locations
- Observe users performing actual work with the system

Looking toward the project team:
- Provide results of user testing to project team members
- Help resolve any major problems identified at this phase
- Summarize data for use in next product or next release

Behavioral information and methodologies:
Usability testing, questionnaires, interviews, observations, verbal reports, task analysis
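One inexpensive way to supplement the observations, interviews, and questionnaires listed in Table VI is to have the deployed system itself record user actions for later review. The following sketch is a hypothetical illustration (in Python, purely for illustration); the event names, log format, and file name are invented and do not describe any system discussed in this chapter.

# A hypothetical sketch of logging user actions in a deployed system so that
# task durations and recurring problems can be reviewed after alpha/beta use.
import json
import time

LOG_FILE = "usage_log.jsonl"   # hypothetical log location

def log_event(user_id, event, detail=""):
    # Append one timestamped user-action record to the log file.
    record = {"time": time.time(), "user": user_id, "event": event, "detail": detail}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: events an application might record during a troubleshooting task.
log_event("agent01", "task_started", "troubleshoot alarm")
log_event("agent01", "screen_opened", "alarm detail")
log_event("agent01", "error_dialog", "invalid trouble-ticket number")
log_event("agent01", "task_completed", "troubleshoot alarm")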
5.3 Idealized, but Meeting a Real Need

The preceding description of human factors involvement in the system design and development process is idealized. This description captures what should happen rather than what typically does happen. Human factors specialists are often not integrally involved from system planning to deployment, just as few development projects proceed as they ideally should (Grudin, 1991; Grudin and Poltrock, 1989). Furthermore, even when human factors specialists are integral team members, as is increasingly the case in many organizations, time is frequently too short to incorporate all the user-centered design activities cited above. Iterative design processes, which would
prevent major surprises for the project team and frustration for users, are still not commonplace. And there is often inadequate recognition by management of the criticality of these processes. However, on the positive side, it is increasingly being recognized that human factors involvement is valuable in ensuring a usable system, and users are clamoring for usable systems. In addition, as noted previously, new models of the software development process reflect the need for more attention to user requirements and for iterative design. In many companies, human factors specialists and other champions of user-centered design are successfully incorporating more and more components of the scenarios previously described into their design and development processes (Bias and Alford, 1989; Fissel and Cecala, 1988; Flamm, 1989; Hawkins, 1989; Rideout et al., 1989; Riley and McConkie, 1989; Vorchheimer, 1989; Whiteside et al., 1988).
6. Human Factors Methodologies for Human-Computer System Design
In this section, a summary is given of several key methodologies and approaches used by human factors specialists during the software development process: task analysis, rapid prototyping of the user interface, and usability testing. These are not the only methods used in system development environments, but they are key methodologies used by the human factors specialist. Task analysis and usability testing are analysis and evaluation methodologies, respectively, that have their foundations in the behavioral sciences. Rapid prototyping involves the use of rapid prototyping tools to create early designs of user interfaces that can be tested with users and then modified. As mentioned in the previous section, many of these methodologies would be used more than once, in an iterative fashion, in an ideal design environment.
6.1 Task Analysis
One key design principle espoused by Gould and Lewis (1985) and others is an early and continual focus on users and their tasks. Task analysis is one method that has been used successfully to maintain this focus.
6.1.1 Definition and History

Task analysis refers to a class of methodologies used to understand the human component in a human-machine system. The basic goal of task
analysis is to examine task demands and compare these to human capabilities (Drury et al., 1987; Anderson, 1990). Different task analysis methodologies share some common elements. Most focus on a hierarchical description of the user's activities. The job of the user is decomposed into subcomponents or tasks. A task is defined as a meaningful unit of work performance and consists of a set of related work actions involving the user interacting with a system, other people, or the environment (Phillips et al., 1988). Task analysis proceeds from the decomposition of a user's job into a list of tasks to a more detailed cataloging of the actions required to complete each task. The final stage of task analysis relates these tasks and subtasks to the capabilities and limitations of the user.

Task analysis evolved in the 1950s from similar methodologies, such as time and motion studies, used by industrial engineers. For a good introduction and review of early work see Drury et al. (1987). The impetus for task analysis was the development of increasingly sophisticated systems that required human interaction. A better understanding of the capabilities and limitations of users was necessary to aid in system design and the design of training materials and job aids.

The basic data collection technique used in early task analysis was observation. A human factors specialist would observe someone performing a job and would write down all the details of the person's actions. This detailed list of steps required to complete each task, along with the relationship and order of the tasks, would be carefully diagramed. Along with the description of the tasks and subtasks, the human factors specialist would note the human capabilities or limitations that contributed to or inhibited performance at each step of each task.

For example, Drury et al. (1987) describe a task analysis for the task of aligning a lamp in the lamp holder used in a photocopying machine. Subtasks such as "insert lamp in holder," "adjust height," "adjust tilt," and "tighten lamp clamp screws" were identified and listed. Along with the listing of the subtasks, Drury et al. listed the problems that users encountered while performing the subtask. For example, to tighten the lamp clamp screws, the lamp holder had to be held in the left hand; however, there was no place to grasp the lamp, and the lamp housing was in the way of the screwdriver needed to tighten the screws. Through this detailed study of users, a clearer picture of the relationship between the human and the equipment could be obtained.

This methodology has worked particularly well for mechanical systems where the user's job is a set of observable actions. In these systems, tasks tend to be linear in nature and relatively easy to diagram. Application of this methodology to interactive computer applications, in which users' unobservable cognitive processes are often more important to understand and represent than their observable behavior, has been considerably more difficult.
The specific technique used to perform a task analysis depends to a large degree on its goals and on the stage of the project at which the analysis is done. Often, a task analysis is performed before the development of a system has begun as a way of collecting information about user needs and user capabilities to facilitate the design of a system that will be easy to use and will complement existing systems in the work environment. A human factors specialist might also use a task analysis after a system has been deployed to improve user training or to determine whether additional job aids are necessary. Examples of this latter type of task analysis are widely available in the literature (Drury et al., 1990; Redding and Lierman, 1990). For the remainder of this section, we will focus primarily on the first purpose of task analysis: that is, to obtain information about user needs and user environment before the development of a software system.
6.1.2 Task Analysis for Computer System User Interface Design

One of the major difficulties with applying task analysis methodologies to the study of modern computer applications is that many of the tasks users perform are cognitive in nature and have no observable components. In order to apply task analysis to human-computer interaction, it is important to know what the user is thinking as well as what the user is doing. Another problem with the application of traditional task analysis methods to human-computer interaction is that most usage scenarios are not single-path, linear progressions through observable subtasks. In computer systems, users are often faced with many choices of where to go next from any place in the user interface. The flexible, multipath user interfaces in many computer applications make use of the diagraming methods of traditional task analysis complicated and tedious. For these reasons, new methodologies are evolving for conducting task analyses for human-computer interaction.

Several formal methodologies, such as Task Analysis for Knowledge Descriptions (TAKD; Johnson et al., 1985) and GOMS (Goals, Operators, Methods, and Selection rules; Card et al., 1983), have been developed over the last decade. For example, GOMS analysis models the knowledge that a user must have in order to carry out tasks on a system. A goal is something the user tries to accomplish. Operators are actions the user executes, and methods are sequences of steps to accomplish a goal. Users employ selection rules to choose the method to accomplish the goal (Kieras, 1988). GOMS and other similar methodologies have highly structured, specific steps and formal systems for the representation of information.
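As a rough illustration of the kind of structured representation GOMS provides, the following sketch (in Python, purely for illustration) encodes one goal, two alternative methods composed of operators, and a selection rule. The text-editing task and the selection rule are hypothetical and are not taken from Card et al. (1983).

# A minimal, hypothetical sketch of a GOMS-style description: one goal,
# two methods (sequences of operators), and a selection rule that picks
# a method based on the user's current state.
GOAL = "delete a word"

METHODS = {
    "mouse method": [
        "locate the word on the screen",
        "double-click the word to select it",
        "press the DELETE key",
    ],
    "keyboard method": [
        "move the cursor to the start of the word with the arrow keys",
        "press CTRL+DELETE to delete the word",
    ],
}

def select_method(hands_on_keyboard):
    # Selection rule: stay on the keyboard if the hands are already there.
    return "keyboard method" if hands_on_keyboard else "mouse method"

def trace(goal, hands_on_keyboard):
    # Print the operators a user would execute to accomplish the goal.
    method = select_method(hands_on_keyboard)
    print(f"Goal: {goal} (method chosen: {method})")
    for step, operator in enumerate(METHODS[method], start=1):
        print(f"  {step}. {operator}")

trace(GOAL, hands_on_keyboard=True)
trace(GOAL, hands_on_keyboard=False)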
Some human factors practitioners have criticized these methodologies as too time-consuming for use in real development environments or for being appropriate only at certain stages of design for limited purposes (Carroll, 1990; Keene and Johnson, 1987; Phillips et al., 1988). However, the methods continue to be used and to evolve. New attempts to apply task analysis data directly to the design of user interfaces continue to appear (e.g., Dubrovsky, 1989). The next section provides an example of an attempt to use some traditional task analysis methods to gather information about users and their tasks, and to apply that information directly to user interface design of a new application. Some of the tradeoffs often required to use task analysis methods in a competitive product development environment are described.
6.1.3 Steps in a "Practical" Task Analysis

The task analysis described in this section was conducted at AT&T Bell Laboratories by R. M. Mulligan and S. J. Boyce in the context of a real development project, within tight constraints imposed by schedule and cost. Unlike textbook descriptions, this task analysis was not formal and complete. Rather than completing an exhaustive catalog of tasks, this practical analysis focused resources on understanding a subset of critical tasks and generalized from them wherever possible. In addition, an attempt was made to understand these tasks within the context of the users' capabilities and limitations, and the company's methods, procedures, and broader goals. The example demonstrates the kinds of information that can be obtained from task analysis. These include the details of the task domain, users' needs (i.e., the information, tools, etc. needed to complete tasks), users' environments, and users' capabilities and limitations.

The purpose of this task analysis was to provide input to the design of the second release of a large, customer premises-based network management system. The system was intended to be used by business customers to monitor and track problems in their private telecommunications networks. It was designed to consolidate the functions currently provided by several smaller systems. The task analysis was conducted with one company before they received the first release of the software. The goals of the task analysis were to: gain a better understanding of how the first release of the software would fit into the existing customer environment; and obtain information that would be useful for designing the next release of the network management application. The task analysis was divided into three major components: the collection of background information, interviews and observations, and data analysis. Each of these components is described in some detail as follows.
Step 1: Collection of Background Information. The goal of the initial step in the task analysis was to gather background information about the customer's operations and about the prospective users of the application, their tasks, and the context in which they work. In two meetings, the first at AT&T Bell Laboratories and the second at the customer site, the customer provided information about the number of employees involved in network management, how these employees are organized, and the general work environment. In addition, at this phase of the analysis, a high-level description of the systems and procedures that were currently used to perform network management tasks was obtained. This information was useful for preparing materials for the next stage of the task analysis and also for setting up the schedule for interviews and observations.

Step 2: Interviews and Observations. In the second step, the personnel responsible for network management were interviewed and observed performing their jobs at the customer's network management center. Two interviewers spent three days on the customer premises interviewing seven telecommunications agents who were the prospective users of the application. The interviews began with some questions designed to provide a better understanding of the training and experience of the telecommunications agents. The agents were asked how long they had worked in their current position and how long they had worked with the company. Next, they were asked to describe briefly their major work responsibilities and to rank order, in terms of frequency of use, the existing network management systems they used. To gain a better understanding of the job performed by the telecommunications agent, the agents were asked to estimate the proportion of time spent on a variety of activities, such as interacting with the network management systems, communicating on the phone with their customers, and consulting with other telecommunications agents. These general, background questions provided a thorough understanding of the work environment and the range of tasks the telecommunications agents were expected to perform.

In the second part of the interview, the telecommunications agents were asked to generate a list of all the different network management tasks that they typically perform. The agents described each task, including information about whether the task was a high priority (emergency) or a routine task, how often the task was performed, how long the task usually took to perform, and what network management systems were currently used to perform the task. Some of this information had been collected at the beginning of the interview while asking about major job responsibilities, but at this point in the interview many more details about the tasks were added.
After this preliminary task list had been generated, one or two of the tasks from the list were selected for a more detailed analysis. The interviewers selected these tasks based on four criteria: frequency, importance, relevance to the system being designed, and representativeness. Frequency of performing the task was an important criterion since providing a system that made performing the high frequency tasks easier would have a large effect on the user's satisfaction and efficiency. Tasks that were critically important to maintaining the customer's telecommunications network were also given high priority in the analysis. Tasks that involved use of features already implemented or planned for the network management system under development were also selected to see how well the current design would accommodate the user's approach to the task. Finally, the representativeness of the task was considered. That is, tasks that shared many characteristics with other tasks were good candidates for more detailed analysis.

The telecommunications agents were asked to provide a detailed description of each selected task. They were asked to keep a series of questions in mind while going through the step-by-step description of the subtasks, such as: What is the motivation for the task? What event triggers your action? What information do you need to complete the task? Is the information readily available? Does the task require you to make phone calls? Do you need to fill out forms? Do you hand the task off to someone else at some point? How do you know when the task has been completed?

One task chosen for this analysis was troubleshooting an alarm from a network management system referred to here as the FNET system. FNET was an existing network management tool that would be replaced by the system under design. Troubleshooting an FNET alarm required users to detect the alarm on the FNET system, investigate the alarm, call the appropriate vendor, and begin a detailed process of keeping in touch with the vendor and the affected customers. This task is described in greater detail in the next section on the data analysis phase.

The final part of the data collection phase of the task analysis was to observe the telecommunications agents actually performing the tasks. The interviewers spent a day watching the agents performing their jobs and recording these observations. The observation sessions were essentially passive. If the observers had questions, they wrote them down and asked them when they could do so without interrupting or interfering with task completion. The goal of these observations was to record any task-specific information that was overlooked in the interviews and to gain a better understanding of relationships between tasks.

Step 3: Data Analysis. The next phase of the task analysis involved summarizing and analyzing the data obtained from the interviews and
observations. First, a comprehensive list of tasks carried out by the telecommunications agents was compiled. In this list, the major categories of work activities were outlined, along with the discrete tasks within each category. For each task, the priority, frequency, duration, and network management systems used to carry it out were also stated.

The next step in the data analysis was to summarize and interpret the detailed data that had been collected about a subset of the tasks. The primary audience for these data was the human factors specialists and software developers who would design and build the next release of the network management application. However, several other possible consumers of the data were anticipated, including systems engineers, marketing personnel, product planners, and managers. Because no single mode of task description captured all the relevant aspects of a task for all interested parties, several different techniques were used to represent the details of each task. These techniques are briefly described as follows and illustrated for the "Troubleshooting an FNET Alarm" task in Figs. 4-7.

- First, a hierarchical task diagram was constructed (see Fig. 4). This diagram presents a list of subtasks in graphical form that provides a concise, high-level overview of the task. The left-to-right order of the subtasks in the diagram corresponds to the typical temporal order in which the task is executed. Also, required and optional subtasks are distinguished in the diagram; optional tasks are connected by dashed lines.
- Second, the data were represented in a hierarchical task description, a concise textual description of the task and its subtasks in outline form (see Fig. 5, and the sketch following this list). This level of description provides considerably more detail than the hierarchical diagram.
- Third, for a more detailed graphical representation, critical subtasks were described in flow-chart form (e.g., Fig. 6). These flow-charts were better than the textual hierarchical task descriptions at conveying complex, nonserial processes.
- Finally, a fourth method was used to capture important subtask parameters that were not included in the other three representations. The task parameters table (e.g., Fig. 7) included task attributes such as the starting conditions and/or the principal inputs to the subtask, the exit conditions and/or principal outputs, its approximate duration, and any human factors issues or other relevant comments.
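As a simple illustration of how such an outline-style description can be generated from structured task data, the following sketch (in Python, purely for illustration) holds the FNET troubleshooting subtasks of Fig. 5 in a small hierarchy and prints them as a textual task description. The code itself is an invented example, not a tool that was used on the project.

# A hypothetical sketch of holding subtask data in a simple hierarchy and
# printing it in outline form (compare the hierarchical task description
# of Fig. 5). Labels and names follow the FNET example in the text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    label: str                         # e.g., "1.2.1"
    name: str                          # e.g., "Detect alarm"
    required: bool = True
    subtasks: List["Task"] = field(default_factory=list)

def print_description(task, indent=0):
    # Print the task and its subtasks as an indented outline.
    marker = "" if task.required else " (optional)"
    print("  " * indent + f"{task.label} {task.name}{marker}")
    for sub in task.subtasks:
        print_description(sub, indent + 1)

troubleshoot = Task("1.2", "Troubleshoot an FNET alarm", subtasks=[
    Task("1.2.1", "Detect alarm"),
    Task("1.2.2", "Localize alarm"),
    Task("1.2.3", "Verify alarm"),
    Task("1.2.4", "Investigate problem", subtasks=[
        Task("1.2.4.1", "Sectionalize problem"),
        Task("1.2.4.2", "Check for presence of known causes of typical problems"),
    ]),
    Task("1.2.5", "Call technician"),
])

print_description(troubleshoot)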
Representing the subtask information in these different ways provided a clear understanding of the tasks performed by the telecommunications agents and a better understanding of the work environment within which these tasks are performed.
FIG. 4. Hierarchical task diagram for Task D2.2, Troubleshoot an FNET Alarm. The diagram lists the subtasks (e.g., D2.2.1 Detect Alarm, D2.2.2 Localize Alarm, D2.2.5 Call Technician) from left to right in their typical temporal order; required and optional subtasks are distinguished, with optional subtasks connected by dashed lines.
1.2 Task D2.2: Troubleshoot an FNET alarm
    1.2.1 Detect alarm
        By audible alarm
        By printer noise
        By visual cue on FNET alarm screen
        By customer report (rare)
    1.2.2 Localize alarm
        Determine from visual display which node is in alarm state
    1.2.3 Verify alarm
        Wait for another cycle to make sure alarm is not transient
    1.2.4 Investigate problem
        1.2.4.1 Sectionalize problem (figure out what is affected)
        1.2.4.2 Check for presence of known causes of typical problems
    1.2.5 Call technician
        1.2.5.1 Report problem and any diagnostic information

FIG. 5. Hierarchical task description.
For examples of other data representation methods, see McGrew (1991) and Galliers (1985).

All the representations of the user’s tasks were then reviewed in light of knowledge of human cognitive capabilities. Various types of problems and mismatches between task demands and user capabilities were identified. Specifically, task flows were analyzed for information bottlenecks (i.e., places where the next stage of actions could not be completed because the user was waiting for the information necessary to continue), unnecessary redundancies, situations where users were overloaded with information, error-prone operations, and operations currently performed manually that could easily be automated. Examples of this last type of problem are tasks in which users were required to type the same information into two different systems at different times. Having the systems automatically share this information makes the task faster, less tedious and less error-prone for the users. The task descriptions were also reviewed for points at which errors seemed likely because of a breakdown in communication between critical users.
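As one illustration of how part of such a review might be made systematic, the following hypothetical sketch flags subtasks whose information a user must re-enter by hand into a second system; these are the duplicate-entry operations identified above as candidates for automation. The record format, task IDs, and system names are invented for illustration and do not come from the original study.

```python
# Hypothetical helper: flag duplicate manual data entry across systems.
# Each record notes which system a subtask uses and what information the
# user must type into it; the field names and values are illustrative.
from typing import Dict, List, Tuple

def automation_candidates(entries: List[Dict[str, str]]) -> List[Tuple[str, str]]:
    """Return pairs of subtask IDs in which the same information is typed
    manually into two different systems, i.e., candidates for having the
    systems share the data automatically."""
    candidates = []
    for i, a in enumerate(entries):
        for b in entries[i + 1:]:
            if a["info"] == b["info"] and a["system"] != b["system"]:
                candidates.append((a["task_id"], b["task_id"]))
    return candidates

# Invented example records in the style of the interview notes.
entries = [
    {"task_id": "D2.2.5", "system": "FNET",           "info": "trouble ticket number"},
    {"task_id": "D2.3.1", "system": "billing system",  "info": "trouble ticket number"},
]
print(automation_candidates(entries))   # -> [('D2.2.5', 'D2.3.1')]
```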
Recommendations: Task Analysis Input to User Interface Design. Several different kinds of recommendations were made to the development team for the next release of the software as a result of the task analysis.
FIG. 6. Flow chart for a monitor-escalate cycle. The chart traces steps such as waiting for a call from the technician, checking for a change in status (entering status mode, checking status, and gathering more information), calling the technician for status, escalating one level (subtask D2.2.9), and updating the trouble ticket.
Several new features were suggested that had not previously been considered. Changes to the user interface were recommended to enhance the consistency between the new software and other software systems that the telecommunications agents would continue to use. In addition, several recommendations were made to the customer about how to staff the new software system and how best to integrate the new system with the existing systems.
6.1.4 Is Task Analysis Worth It?

Task analytic techniques supply information about the work that needs to be done, about the skills, capabilities and needs of users, and about the work environment in which tasks will be performed.
Task Parameters Table

D2.2.1 Detect alarm (Req'd: Y). Input/start condition: audible alarm, printer noise, visual cue, or customer call. Output/exit condition: move to FNET system. Duration: few seconds (longer for customer call). Comments: could use new Audible Alarm feature to advantage.

D2.2.2 Localize problem (Req'd: Y).

D2.2.2.1 Move to FNET (Req'd: Y). Input/start condition: auditory detection. Output/exit condition: Alarm Screen displayed. Duration: few seconds. Comments: already done if visually detected.

D2.2.2.2 Determine location (Req'd: Y). Input/start condition: FNET alarm screen data. Output/exit condition: location of alarmed node. Duration: few seconds.

D2.2.3 Verify problem (Req'd: N). Input/start condition: FNET alarm screen data. Output/exit condition: alarm still present. Duration: about 5 min. Comments: wait another cycle to make sure alarm is not transient.

D2.2.4 Investigate problem (Req'd: N). Comments: gather more info to understand problem and to report to technician.

FIG. 7. Task parameters table.
The method is recognized as an important step in the design of the user interface (e.g., Adams, 1989; Drury et al., 1987; Gould, 1988c; Phillips et al., 1988; Rubinstein and Hersh, 1984). However, task analysis can be time-consuming and costly to conduct, and there is concern among some human factors specialists that information gathered in a task analysis is not easily used in design (Carroll, 1990; Grudin et al., 1987). It is important to reassess continually the costs and benefits of task analysis for user interface design.

No single task analysis methodology will be appropriate for every situation. Task analyses can be performed with varying degrees of formality, effort, and cost. Different ways of representing the data are being developed for use in the design of human-computer systems and may be required for the multiple uses of the data. However, the bottom line is that information about users’ needs, environments, and capabilities collected early in the product development cycle is essential for user interface design, and human factors specialists (and others on the project team) should ensure that such information is available.
6.2 Rapid Prototyping of the User Interface
A user interface prototype is a model that simulates the “look and feel” of the user interface of a computer system. It demonstrates the appearance of the user interface and how the user will interact with the system. (In this section, the term user interface is used to refer primarily to the software-based interface, rather than to paper-based documentation and training materials.) The term prototype is used to refer to different types of simulations of the real system. Some prototypes are developed to test the basic functionality of the software, while others are developed for performance testing (Wilson and Rosenberg, 1988). User interface prototypes, which are the focus in this chapter, do not have to include all the functionality planned for the final system to be useful. The degree to which the user interface matches the final system depends on the purpose for creating the user interface prototype, the time available, and the kinds of prototyping tools available. Throughout this section, when the term prototype is used, it refers to a user interface prototype.

Rapid prototyping refers to developing a prototype that is independent of the system code. Since the prototype is not linked to the system code, it can be created early during the planning stages of the design process (Miller-Jacobs, 1991). In addition, rapid prototyping usually refers to reliance on a set of rapid prototyping tools that support the easy and fast construction of user interface prototypes. These tools will be discussed in greater detail in a
later section. Human factors specialists serving as user interface designers use information from the task analysis and other sources as input to the development of the prototype.
6.2.1 Reasons for Prototyping the User Interface

One primary reason for prototyping the user interface is to collect feedback from prospective users (Benimoff and Whitten, 1989; Diaper, 1990). A user interface prototype can be demonstrated to users to elicit their feedback about the functionality of the system and about the user interface design. User interface prototypes can be created so that end users can actually use the prototype as they would the final system. Data on the usability of the design (time to complete a task, number and type of errors made, etc.) can be collected before the actual system has been built. Prototypes that are incomplete or that don’t match the final system in every way can still be used for the collection of user feedback. Demonstrations of the prototype performed for users by the human factors specialist can elicit many good ideas from users and can uncover potential usability problems. A variety of usability testing methodologies are discussed in a later section.

Some human factors specialists have taken the solicitation of user feedback a step further and actively involve users in the design of the user interface. This concept, often called participatory design, began in Denmark and Sweden and is becoming more prevalent in the United States. For overviews of this area, see Greenbaum and Kyng (1991), Bodker and Gronbaek (1991), and Lanning (1991). The basic premise of participatory design is that users should not only evaluate the user interface, but should play an active role in the creative process of designing the user interface. The major reason to involve users to this extent is that users may add a different perspective and new information that the human factors specialist may not realize is missing. Use of this methodology, however, may require extensive training for the prospective user, and the added cost and time may decrease the cost-effectiveness of this technique. Like many other techniques, the value of participatory design must be measured within the broader context of the particular project.

In addition to their use in collecting feedback from end users, user interface prototypes are an effective tool to improve communication among project team members. Human-computer system development in a commercial setting often involves the cooperation of representatives from product management, marketing, human factors, engineering, computer programming, documentation, and training. Often each of these team members brings his
or her own perspective and language to project discussions. Prototyping can ensure that the team is designing and building what product managers and marketing think will sell in the marketplace. Because the prototype is a concrete simulation of the system, it can help avoid misunderstandings because of ambiguities and incompleteness in verbal or written descriptions. Misunderstandings can be caught early in the development process when change is relatively inexpensive. To improve team communication, a prototype need not be as well developed or as complete as one used to collect user feedback. In the extreme case, a prototype of the user interface that consists of paper and pencil drawings or storyboards may be sufficient to increase mutual understanding among team members and improve the efficiency of project meetings. User interface prototypes can also be used as a way of communicating or supplementing requirements. Experts on development processes have noted that there are often major discrepancies between written requirements and the final, coded system; one reason for these discrepancies is that written requirements can be ambiguous (Boehm et al., 1984; Tavolato and Vincena, 1984). When the user interface prototype is used as an official part of the requirements, many ambiguities that exist in a textual document can be eliminated. Even when a complete prototype is unavailable, pictures of key screens accompanied by detailed descriptions of the user-system dialog can be used as user interface requirements that are much more specific than textual requirements, and that result in final systems that match the requirements much better than systems built from text-only requirements. In addition, the user interface prototype gives system testers unambiguous requirements to compare with the developed system to determine whether it looks and works as intended. Another advantage of prototyping the user interface is that it gives the designer an opportunity to try out various alternative designs. Competing designs can be prototyped and then either tested with prospective users or shown to the project team for feedback (Benimoff and Whitten, 1989). Because creation of the prototype is inexpensive relative to the cost of producing the actual system, it is possible to use prototypes to make comparisons that would have been difficult in the past. The design alternative that is easiest to use and best meets the users’ needs can be chosen and implemented in the final system. User interface prototyping helps to ensure consistency in user interface design. When the user interface for a new computer system works in a way that is already familiar to users from their experiences with other computer systems, users find it much easier to learn to use the new system (Polson, 1988). Also, with large software systems it is important that all the features of the system work in a consistent manner so that the user only needs to
learn a single set of user interface conventions to operate the entire system. Rapid prototyping is an important tool for ensuring both of these kinds of consistency by making the user interface explicit early in the design process and by providing a concrete means for communication among team members who are designing the user interface for different parts of the same system. Through careful evaluation of the prototypes, designers are able to catch inconsistencies before they become a part of the system code. In a later section, the importance of consistency and various methodologies for ensuring it are discussed in more detail.

Rapid prototyping reduces cycle time and reduces project cost. Tavolato and Vincena (1984) report findings that 67% of development effort is directed toward late-stage activities such as correcting errors that exist in code and adapting the software to meet new requirements. In addition, they found that about half of the errors that are discovered late in development can be traced to failures in the requirements phase. Users’ requirements are better understood and are communicated to the software developers more efficiently through rapid prototyping. This, in turn, reduces the number of errors in the code and the number of new requirements introduced late in the product cycle. Thus, the relatively small cost of investing in rapid prototyping in the beginning of the development process can result in large savings at the end of the development process. Rapid prototyping encourages the iteration, expansion of ideas, and risk analysis that are characteristic of new software development models (Boehm, 1988). In a study conducted by Boehm et al. (1984), it was determined that the use of a prototyping approach resulted in 45% less development time than an approach that relied on specifying the design only through requirements and specification documents. These data (and our own experiences) point to significant cycle-time improvements when rapid prototyping is used.

In addition to the savings previously reported, rapid prototyping improves the quality of the first system that is delivered to the users. Without prototyping, all too often the first release of a software system is expected to be a trial run of the software. Complaints from customers are then compiled and the second release of the system corrects these problems. This way of operating is costly, since it means making changes to the software product when changes are most expensive, i.e., after development. It is also costly in terms of customer satisfaction and users’ perceptions. The software market is extremely competitive and what often distinguishes among competitors’ products is the users’ perception of the quality of the software, which is due in part to the quality of the user interface. Rapid prototyping is a critical methodology that can help ensure a high-quality user interface in the first release, because feedback from customers can be obtained before the product is coded and released.
6.2.2 Rapid Prototyping Tools

There are a wide variety of software tools available to aid in the development of user interface prototypes. Prototyping tools have been created for prototyping a wide variety of user interface types, such as graphical, character-based, voice or telephony, and hardware user interfaces. The discussion here is focused primarily on tools for the creation of prototypes for graphical and character-based user interfaces. New tools are being released so frequently that any review of the existing products would soon be outdated. However, the currently available tools can be divided into two categories: tools that generate usable application code and those that do not. The tools described as follows can be used by the human factors specialist who may not have extensive knowledge of a particular programming language. Also, because the tools do not involve “traditional” programming, they allow the human factors specialist to quickly create and modify the user interface.

Rapid Prototyping Tools: No Code Generation. Prototypes produced with these tools do not become a part of the user interface of the final system and are discarded after they have been used for demonstration, testing, and requirements purposes. One class of these tools is simple slide-show tools, which usually consist of a drawing package that allows the tool user to draw the screens of the system being prototyped and provide some mechanism for changing from one screen to the next in a prespecified order. In the final prototype, the user interface designer can flip from one screen to the next easily using a mouse click or a key press.

Multipath slide-show tools are a second class of tools that do not generate code, but these tools provide more flexibility than the simple slide-show tools. Many multipath slide-show tools also use a drawing package to design screens. These tools differ from simple slide-show tools, however, by providing some way for the user interface designer to create a branching slide show. That is, the tool includes some capability for linking screens together so that a mouse click or key press in one field or region of the screen produces a different outcome than a mouse click or key press in a different region of the screen. Through this mechanism, complex user interfaces can be simulated. Using tools such as these, user interface designers can capture not only the look of the screens, but also the user-system dialogue, i.e., how the user interacts with the system.

There are several advantages to these kinds of prototyping tools. First, these packages are usually easy to learn and to use. Little or no knowledge of “traditional” programming is required to assemble the prototype. Second, the prototype can be changed quickly by swapping out one screen for
another or by making quick changes to the screens using the drawing tool. This makes these tools very useful for quickly creating and comparing alternative designs. Third, since the tools are based primarily on a drawing program, anything the designer can draw can become a part of the user interface being prototyped. This allows the designer to try out new user interface concepts and prototype user interfaces that would be difficult or impossible to develop. Thus, these tools may be used to identify new user interface concepts that should be incorporated into development tools. These rapid prototyping tools also provide the quickest way to construct a simple prototype for communication of the information a user needs, the information display format, and the user-system dialogue during performance of a task.

The primary disadvantage of this class of prototyping tool is that the prototype never actually becomes a part of the final system. The effort spent creating the prototype is not transferred into direct savings in time to code the user interface. However, there is still evidence for a significant advantage in overall cost savings of prototyping compared with not prototyping (Wilson and Rosenberg, 1988). Any mistake made and corrected in a prototype does not have to be corrected in code.

Another disadvantage may stem from one of the advantages of some prototypes: their realism. Depending on the capabilities of the tool chosen, a very realistic prototype can be developed. The designer then must be certain to caution users and product management that the existence of the prototype does not indicate existence of the final product. Many human factors specialists have reported situations where users have wanted to purchase a product immediately after seeing a realistic user interface prototype, not realizing that much of the development work (indeed, all of it) must be done before the product is available.
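To make the branching mechanism of multipath slide-show tools concrete, here is a minimal, hypothetical sketch of such a prototype engine. The screen names, image files, and hotspot regions are invented for illustration and do not correspond to any particular commercial tool.

```python
# Minimal sketch of a multipath slide-show prototype: each screen is a named
# drawing plus a set of hotspot regions, and clicking inside a region jumps
# to the screen it names.  All names and coordinates below are invented.
from typing import Dict, List, Tuple

Rect = Tuple[int, int, int, int]                    # x, y, width, height

class Screen:
    def __init__(self, name: str, image_file: str) -> None:
        self.name = name
        self.image_file = image_file                # drawing produced in the paint tool
        self.hotspots: List[Tuple[Rect, str]] = []  # (region, target screen name)

    def link(self, region: Rect, target: str) -> None:
        self.hotspots.append((region, target))

    def next_screen(self, x: int, y: int) -> str:
        """Return the target screen for a click at (x, y), or stay on this screen."""
        for (rx, ry, rw, rh), target in self.hotspots:
            if rx <= x < rx + rw and ry <= y < ry + rh:
                return target
        return self.name

# Two invented screens: clicking the "Alarms" button on the main menu
# branches to the alarm list; clicking anywhere else does nothing.
screens: Dict[str, Screen] = {
    "main_menu": Screen("main_menu", "main_menu.png"),
    "alarm_list": Screen("alarm_list", "alarm_list.png"),
}
screens["main_menu"].link((20, 40, 120, 30), "alarm_list")

current = "main_menu"
current = screens[current].next_screen(30, 50)      # simulated mouse click
print(current)                                      # -> alarm_list
```

Because the screens are just drawings and the links are simple lookups, a prototype of this kind captures the user-system dialogue without any of the functional code of the real system.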
Rapid Prototyping Tools: Code-Generating. The second category of rapid prototyping tool, which includes User Interface Management Systems (UIMSs), interface builders and toolkits, is significantly different from the category described previously, in that the tools in this category allow the user to build interactively the user interface prototype which then becomes the user interface of the production system. For a detailed discussion of these tools and a review of their history, see Hix (1990) and Hartson and Hix (1989). Code-generating tools enable the separation of the user interface from the functional code, which has many advantages for the reuse of code and for iterative design (Gould et al., 1991). The most usable of these tools have a direct manipulation, graphical user interface for building the prototype. One clear advantage of these tools over tools that do not generate code is that the prototype can actually become the user interface for the final
system. Resources spent creating the prototype can be translated into direct savings in time to code the user interface. Because the user interface code makes up a substantial portion of the total system code, efforts to make the generation of user interface code easier reduce overall development costs. In addition, this reuse of the prototype means that there will be fewer errors translating from the requirements phase to the final system. These tools allow the user interface designer to have almost complete control over the design and implementation of the user interface.

One disadvantage of code-generating tools is that, in general, they are more difficult to use than those that do not produce code. Since the code-generating tools are not only used to prototype the user interface, but also to implement the user interface, they have an added layer of complexity that is absent in non-code-generating prototyping tools; therefore, the user interface prototypes take longer to design. This added layer of complexity may also make the final system run more slowly. Another disadvantage is that code-producing tools come with some constraints in terms of the building blocks of the interface that are supported. For example, some code-producing tools produce windowing systems that comply with a single industry standard, such as OPEN LOOK (a trademark of the Open Software Foundation Inc.); therefore, user interfaces that follow other standards cannot be designed with these tools. Certain types of displays, such as complicated graphs or maps, might be difficult or impossible to prototype using the currently available tools. New code-generating rapid prototyping tools are now introduced frequently, and with each new generation the tools become more flexible and usable. It seems likely that in the future, tools will be available to provide both the speed and flexibility of slide-show prototyping tools and the reuse potential of code-generating tools.

Summary. Rapid prototyping has revolutionized the process of user interface design and development. Through the availability of easy-to-use tools, human factors specialists are able to simulate quickly the user interface of a system that can then be shown to users and other team members and quickly refined and redemonstrated. This methodology provides a way to uncover problems in the user interface early in development, when the cost of making changes is low. The development of code-generating prototyping tools has made it possible for the human factors specialist to specify completely the user interface for the system.
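The separation of user interface code from functional code that code-generating tools encourage can be illustrated with a small, hypothetical sketch. The division of responsibilities shown here is an assumption for illustration only, not a description of any particular UIMS or toolkit.

```python
# Hypothetical sketch of separating the functional core from the user interface.
# The core knows nothing about widgets; any interface (a throwaway prototype,
# a generated production GUI, or this text stub) drives it through its methods.

class AlarmCore:
    """Functional core: application logic only (invented example)."""
    def __init__(self) -> None:
        self.alarms = ["node 7: loss of signal"]

    def acknowledge(self, index: int) -> str:
        return f"acknowledged: {self.alarms.pop(index)}"

class TextInterface:
    """One interchangeable user interface layer over the same core."""
    def __init__(self, core: AlarmCore) -> None:
        self.core = core

    def run_once(self) -> None:
        for i, alarm in enumerate(self.core.alarms):
            print(f"[{i}] {alarm}")
        print(self.core.acknowledge(0))

TextInterface(AlarmCore()).run_once()
```

With this kind of separation, the interface layer can be replaced or regenerated repeatedly during iterative design without touching the functional code, which is one reason such tools support reuse and iteration.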
6.3 Usability Testing
As noted previously, a system’s usability is “the capability to be used by humans easily and effectively” (Shackel, 1984). Usability testing is a set of methodologies employed to determine whether a system (or a prototype of a system) is usable.
Dumas (1989) points out that most usability testing methodologies share three characteristics:

1. The test is carried out with representatives of the user population as participants;
2. The test requires that these participants perform typical or critical tasks with the system under study; and
3. Data are collected from these participants.

The data collected from usability tests can be used to improve system design, to ensure that predetermined usability goals are met, and to aid in the development of documentation and training materials for the new system. Usability testing should occur at several stages during the software development process, as recommended in the principle of iterative design outlined by Gould (1988c) and others (Bennett, 1984; Karat, 1988).
6.3.1 Reasons for Conducting a Usability Test

Early User Feedback. One reason for conducting a usability test is to determine areas for improvement in the design of the user interface (Booth and Marshall, 1989). For this information to be beneficial to the project, the usability test must be conducted early enough in the product development process for suggested changes to be implemented (Carroll and Rosson, 1985). With the advent of rapid prototyping tools, it has become easier to collect user feedback about a design early in the development process. Usability tests conducted at an early stage of system development are ideal for testing specific design questions, such as, “Are these command names optimal?” and “Can users construct queries to the database easily?” These kinds of questions can lead to concrete changes in the user interface. Also, most users do not read user manuals cover-to-cover before using the system. Conducting a usability test allows the human factors specialist to see how far users can get using their pre-existing experience and knowledge. This allows the human factors specialist to collect information about what is and is not intuitive to the user, so the design can be adapted to take advantage of the users’ expectations.

Comparisons between Alternative Designs. A second reason for conducting a usability test is to do a comparison between alternative designs of the user interface. As mentioned in the previous section on rapid prototyping, these kinds of comparisons are becoming easier to do, since rapid prototyping tools have greatly reduced the cost of developing multiple prototypes.
In a usability test designed to compare alternative designs, users might be asked to perform the same set of tasks on more than one prototype while various measures of performance are taken. In this way, the best alternative can be determined so that it can be implemented in the final production system.
Measure against Usability Goals. A third reason for carrying out a usability test is to determine whether a set of predetermined usability goals has been attained. Usability goals are targets for how easily or quickly users should be able to perform specific tasks with the system (Whiteside et al., 1988). When precise usability goals are specified, they can be treated as requirements that are as significant as other requirements for the system. Usability goals should be stated in terms of what level of performance must be achieved by a specified range of users, given specified training and support, while performing a specified range of tasks within a specified range of environments (Shackel, 1984, 1988). For example, a new word processing package may have as one of its usability goals that the “typical” user should be able to load the software in under 10 minutes or that the user should be able to create and print out a document making fewer than two errors. The purpose of this kind of test is to determine whether the goals have been met and to uncover any usability problems that exist. Usability tests should be conducted as early as possible in the development process, so that any problems uncovered can be addressed easily and with minimum cost.

Establishing the appropriate usability goals is the key to this form of usability testing. If the system under design is a second release of an existing system, then the job of setting these goals is straightforward. For example, if it took users 15 minutes to carry out a particular task in the first release, then a redesign of that feature of the system should allow users to complete the same task in significantly less time, perhaps 10 minutes. Similarly, if the to-be-tested system has a competing system already in the marketplace, the usability goals could be set relative to the competition’s system.

Setting usability goals becomes somewhat more complicated when it is done for a new system that has no obvious competitors in the marketplace. In this case, there are various sources from which one can obtain information relevant to establishing the goals. First, it is possible that during a task analysis or collection of user needs, certain goals were specified by the users. This is particularly true when designing a computer system that is intended to replace an existing manual process (for example, a system to track inventory) that was previously done using paper forms. The new system should make the tracking of inventory easier, faster, and less error-prone than the old system. Second, some usability goals may be the same across systems that were designed for different functions. For example, the amount of time
it should take to complete a database query may be the same for two different systems. For a more detailed discussion of establishing usability goals, see Whiteside et al. (1988).
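Stated this precisely, usability goals can be recorded and checked like any other requirement. The following is a minimal, hypothetical sketch of that idea; the goal values echo the illustrative word processing example above, and the field names are assumptions rather than an established format.

```python
# Hypothetical usability goals recorded so that test results can be scored
# against them.  The numbers echo the word processing example in the text.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsabilityGoal:
    task: str
    user_profile: str                    # e.g., "typical" home user
    max_minutes: Optional[float] = None  # target upper bound on task time, if any
    max_errors: Optional[int] = None     # target upper bound on errors, if any

    def met(self, minutes: float, errors: int) -> bool:
        ok_time = self.max_minutes is None or minutes <= self.max_minutes
        ok_err = self.max_errors is None or errors <= self.max_errors
        return ok_time and ok_err

goals = [
    UsabilityGoal("load the software", "typical user", max_minutes=10),
    UsabilityGoal("create and print a document", "typical user", max_errors=1),
]

print(goals[0].met(minutes=8.5, errors=0))    # -> True
print(goals[1].met(minutes=25.0, errors=3))   # -> False
```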
6.3.2 Data Collection Methodologies

There are a variety of methods that can be used to collect data for determining the usability of a system. Three main categories will be discussed here: verbal reports from users, objective measures of users’ performance, and users’ responses to questionnaires. Verbal reports and questionnaires can be used to collect information about the user’s satisfaction with the system and preferences for design alternatives, as well as information about usability. Which data collection method is appropriate depends on the reason for conducting the usability test. In this section, several data collection methodologies are reviewed, with suggestions for when each methodology might be most appropriately used.
Verbal Reports. The verbal report methodology, when used to uncover usability problems, can be loosely defined as having the user talk about how he or she uses the system. As the user discusses system use, problems with usability are uncovered. The most frequently used approach is to have users perform a task with the system while simultaneously telling the human factors specialist what they are thinking. This methodology is often referred to as “thinking aloud” (Ericsson and Simon, 1980, 1984; Jorgensen, 1989). For example, if the system under study is a word processor and the task is to append a paragraph to an existing file, the user might say, “I will select this menu item to retrieve the existing file. Now I am using the F8 function key to move to the end of the file. . .” while he or she performs these actions. There are many variants on this methodology. One that has been successful is to test users in pairs. Two users perform a task on the system together, and the users talk to each other while performing the task. Since the users explain what they are doing and express opinions about the system, this can be a rich source of data for the human factors specialist. Also, this methodology has the advantage of being somewhat more natural for users than the “thinking aloud” alone method. Sometimes it is useful to collect verbal reports after the user has performed a series of tasks with the new system. This may be accomplished by videotaping the user performing tasks, and then playing the videotape back to the user to elicit comments and explanations of the user’s actions. One advantage of this approach is that data about how long it takes to complete the tasks, the number of errors committed, etc., can be collected uninterrupted by the user’s explanations. The disadvantage of this technique is that
the user’s interpretations of the videotaped activities may be more affected by forgetting or elaboration than in other techniques.

Another way to collect verbal reports from users is to conduct a structured interview. The human factors specialist asks the user a series of questions about use of the system. These questions are designed to encourage the user to talk about the system’s usability. For example, the user might be asked, “What was the hardest aspect of the system to learn?” or “What one thing do you find yourself always having to look up in the manual?” Answers to these questions often indicate where there are usability problems with the system. Some researchers (e.g., Coleman et al., 1985) recommend that more specific interview questions, such as “When is the second mouse button used?” are likely to yield better results than the more open-ended examples given previously. Some combination approach, using both open-ended questions and specific questions, is probably best. It is useful to ask users open-ended questions about usability because, if they respond, their comments may point out unexpected usability problems. In addition, if the human factors specialist has prior concerns about certain aspects of the user interface, specific questions, such as the one just given about mouse buttons, can be useful in confirming or refuting the specialist’s concerns.

If the interviewer is not skilled in interviewing techniques, this technique may produce misleading results. Also, a skilled human factors specialist with training in protocol analysis (Ericsson and Simon, 1984) is needed to analyze the volumes of videotape or audiotape that can be produced using verbal report techniques.
Performance Measures. Performance measures are objective measures, such as time to complete a task and number of errors, that are taken while users perform tasks with the system under study. (See Bennett, [1984], for a detailed discussion of different measures.) The purpose of collecting these objective measures is to compare them with similar measures collected from users using a different system, or to evaluate them with respect to usability goals. Typically, these kinds of empirical usability studies are carried out using a group of people who are as similar to the prospective users as possible in important characteristics. For example, if the product being tested is a word processing package primarily for the home computer market, participants in the study should be people who have home computer systems and have a need to do word processing at home. These users may perform very differently on the tasks than professional word processors, or than people who have never used a personal computer. Hence, one critical aspect of doing this kind of usability test is determining the correct characteristics of the user population that must be represented in the sample of participants tested.
Determining the correct tasks and the correct measures is also important to ensuring a good usability test. The tasks that the user will perform should be either critical tasks (i.e., ones that are essential for the proper operation of the system) and/or typical tasks (i.e., ones that will be performed frequently by the user and share many similarities with other tasks). A good source of tasks for usability tests is the set of user scenarios created after a task analysis (Bennett, 1984). Whether the critical measures of interest are time to complete the task, number of errors made, both, or other measures, will depend on the context. For example, if the task is to assemble the hardware of the system and certain errors can result in damage to the hardware, then using number and type of errors as the primary criteria for usability makes sense. For other tasks, time to complete the task may be the most important variable.

In addition to simply identifying and tabulating errors, much can be learned from doing a more detailed analysis of the reasons why the errors occurred. This type of analysis, termed failure analysis, is described in more detail by Landauer (1988).
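A minimal sketch of how such performance measures might be tabulated from raw usability-test logs follows; the record format and the sample values are invented for illustration, not data from any actual test.

```python
# Invented usability-test log: one record per participant per task, with the
# two objective measures discussed above (task time and error count).
from statistics import mean

log = [
    {"participant": "P1", "task": "create document", "seconds": 310, "errors": 2},
    {"participant": "P2", "task": "create document", "seconds": 255, "errors": 0},
    {"participant": "P1", "task": "print document",  "seconds": 95,  "errors": 1},
    {"participant": "P2", "task": "print document",  "seconds": 120, "errors": 0},
]

def summarize(records):
    """Print mean completion time and total errors per task."""
    tasks = sorted({r["task"] for r in records})
    for task in tasks:
        rows = [r for r in records if r["task"] == task]
        print(f"{task}: mean time {mean(r['seconds'] for r in rows):.0f} s, "
              f"errors {sum(r['errors'] for r in rows)}")

summarize(log)
```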
Written Questionnaires and Surveys. Surveys and questionnaires constitute a third methodology employed for usability testing. Often the format of questionnaires and surveys is such that users either choose from a list of multiple choice answers or mark a box indicating the strength of their agreement or disagreement with a statement. Questionnaires are inexpensive and easy to administer to a large group of users, so data can be collected from more prospective users with this methodology than with verbal reports or performance measures. In addition, many items from questionnaires can be used over and over again in different usability tests, thus allowing for cross-product comparisons and the refinement of the questionnaire items themselves.

Questionnaire data, however, are limited in the scope of information they provide. Often, questionnaires can be used to find out if a usability problem exists, but the data don’t provide information about the cause of the usability problem or about how to fix it. For example, the type of information often obtained with a questionnaire is something like ratings on the item “The system is easy to learn,” where “1” means “strongly disagree” and “7” means “strongly agree.” A low rating indicates that improvements to the system are necessary, but provides little guidance concerning what changes should actually be made. This more specific information about improvements can be difficult to collect with a questionnaire, because it requires that the human factors specialist anticipate the usability problems and include items targeted to the solutions on the questionnaire; this means that the questionnaire can become long and tedious for the user to complete. Also, users completing a questionnaire may find it difficult to recall specific usability problems. For detailed information about usability problems, structured interview techniques (described in Section 6.3.2) are more useful.
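As a small illustration, agreement ratings of this kind can be summarized to flag items with low mean scores. The items and ratings below are invented; the 7-point scale follows the example in the text.

```python
# Invented questionnaire results on a 7-point agreement scale (1 = strongly
# disagree, 7 = strongly agree); low-scoring items flag areas needing further
# investigation, though not the cause of the underlying problem.
from statistics import mean

responses = {
    "The system is easy to learn":       [2, 3, 2, 4, 3],
    "Error messages are understandable": [5, 6, 6, 5, 7],
}

for item, ratings in responses.items():
    m = mean(ratings)
    flag = "  <- investigate further" if m < 4 else ""
    print(f"{item}: mean {m:.1f}{flag}")
```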
Expert Reviews. Detailed review, or “walkthrough,” of the user interface by human factors specialists is another method for uncovering usability problems. When this method is used, a group of human factors specialists is shown a new user interface and is asked to evaluate the interface for usability. The evaluators conduct a detailed analysis of the interface, recording all aspects of the user interface they think are likely to cause usability problems, given their experience with user interface design, user interface design principles, and usability testing. Often one component of the walkthrough involves proceeding step-by-step through task scenarios.

In recent years, several different types of walkthrough methods have been described (e.g., Bias, 1991; Desurvire et al., 1991; Jeffries et al., 1991; Lewis et al., 1990a; Nielsen and Molich, 1990; Rowley and Rhoades, 1992; Wharton et al., 1992). These methods differ in substantive ways that are likely to influence their effectiveness, such as the expertise of the evaluators, the use of user interface guidelines, the use of prescribed tasks versus self-guided exploration, and individual versus team evaluation. No studies have been conducted that vary all the relevant parameters to determine what approaches are most useful and cost-effective. However, two studies do point out that having human factors experts conduct the reviews may be especially important.

Jeffries et al. (1991) compared expert review (which was called heuristic evaluation in their article) with several other techniques for assessing usability: a review using software guidelines, cognitive walkthroughs, and usability testing. In all cases a new user interface was evaluated to identify usability problems. In the expert review condition, four human factors specialists reviewed the interface individually during whatever time they had available during a two-week period. In the software guidelines condition, a team of three software engineers evaluated the interface using a set of 62 guidelines. Three software engineers also worked as a team using the cognitive walkthrough methodology (Lewis et al., 1990); this technique required that the action and feedback of the interface be compared with the user’s goals and knowledge to identify discrepancies between the user’s expectations and the steps required by the interface. In the usability testing condition, a human factors specialist identified usability problems by collecting measures of performance from six users.

More than three times as many usability problems were identified with the expert review method than with the other three methods. The expert reviewers found both more problems and more severe problems than were found through use of the other three methods. Benefit/cost ratios based on
the number and severity of problems found per person-hour indicated that expert review had a four-to-one advantage over the other methods. That is, when both the time to conduct the review and the number and severity of usability problems were considered, the expert review method identified the largest number of usability problems for the resources expended.

The major disadvantage of the expert review method is that, to be most effective, the evaluations must be carried out by skilled human factors specialists. The four evaluators in the Jeffries et al. (1991) study had advanced degrees in the behavioral sciences and years of experience in evaluating interfaces. In many development settings, there is not a large enough group of highly skilled human factors specialists to make this method feasible. Some researchers (Nielsen, 1989c; Nielsen and Molich, 1990) have argued that similar walkthrough techniques can be used by developers, unskilled in human factors or user interface design, who are given a small number of principles for good user interface design. However, a recent study (Karat et al., 1992) suggests that usability testing is more effective than walkthroughs when walkthroughs are conducted by people unskilled in human factors and user interface design.

Karat et al. (1992) compared the effectiveness of walkthrough methods using individuals or two-member teams with that of empirical usability testing for identifying usability problems. The two usability walkthrough procedures were designed to maximize their effectiveness; they included separate segments for self-guided exploration of the user interface and for use of prescribed scenarios, and usability guidelines were provided to the participants. Walkthroughs were conducted by individuals or by two-member teams. In the empirical usability test, there were also segments for self-guided exploration of the interface and for working through prescribed task scenarios. During the usability tests, human factors specialists observed the participant performing the tests and logged user comments, user problems, time on tasks, and task success or failure. The evaluators who participated in the walkthroughs and the usability tests were primarily users and developers of graphical user interface systems. Two different graphical user interfaces were evaluated to determine if the results were consistent across systems. The two commercially available packages had integrated text, spreadsheet, and graphics applications; however, they differed in their interface style and the office metaphor that was used.

For both graphical systems, the largest number of usability problems were found by empirical usability testing, followed by team walkthrough and then individual walkthrough. The number of problem types found by empirical testing was about twice the number found by team walkthroughs, and three times the number found by individual walkthroughs. Karat et al. (1992) also
found that empirical testing identified the largest number of unique problems, i.e., problems that were found by only one method. Only about one-third of the problems were found by all three of the methods. When Karat et al. calculated the cost-effectiveness of each method, they found that empirical testing required only about half as much time as the walkthroughs to find each type of usability problem.

The differences in the effectiveness of the walkthrough methods in the Jeffries et al. (1991) study and the Karat et al. (1992) study may be based on differences in user interface expertise and/or the specific procedures used. In the Jeffries et al. study, experienced human factors specialists conducted the walkthroughs on and off over a two-week period; in the Karat et al. study, the evaluators were not human factors specialists, and the walkthrough took place during a three-hour time period. In spite of the inconsistencies, Karat et al. note that both studies reveal the value of human factors expertise, since the basis of the empirical usability testing results in their study was the expertise required to design and conduct the usability test and to recognize and interpret the usability problems encountered by the users, and the basis of the walkthrough results in the Jeffries et al. study may have been the expertise of the evaluators.

The Jeffries et al. (1991) and Karat et al. (1992) studies indicate the value both of walkthroughs by human factors experts and of well-designed and appropriately interpreted empirical usability studies. Each of these methods is valuable, and should probably be used at different phases in product development. Walkthroughs are useful early in development when alternative designs are being compared, and are invaluable throughout design and development for ensuring consistency across different components of a user interface. Empirical usability testing with performance measures is useful for baselining performance, for testing against usability objectives, and for ensuring that the user interface is easy to learn and easy to use.
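The benefit/cost comparisons cited above can be expressed as a simple severity-weighted ratio. The formula below is a plausible reading of that idea rather than the exact computation used in either study, and the weights and problem counts are illustrative assumptions.

```python
# Hypothetical severity-weighted benefit/cost ratio for a usability evaluation:
# problems found, weighted by severity, divided by the person-hours spent.
severity_weight = {"low": 1, "medium": 2, "high": 3}    # assumed weighting

def benefit_cost_ratio(problems, person_hours):
    """problems: list of severity labels for the problems a method found."""
    benefit = sum(severity_weight[s] for s in problems)
    return benefit / person_hours

# Invented example: two methods evaluated on the same interface.
expert_review = benefit_cost_ratio(["high", "high", "medium", "low"] * 3, person_hours=20)
guideline_review = benefit_cost_ratio(["medium", "low", "low"], person_hours=24)
print(f"expert review:    {expert_review:.2f} weighted problems per person-hour")
print(f"guideline review: {guideline_review:.2f} weighted problems per person-hour")
```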
7. Designing for User Interface Consistency

User interface consistency is widely recognized by human factors specialists as a key component of usability (e.g., Kellogg, 1989; Nielsen, 1989a; Polson, 1988; Rubinstein and Hersh, 1984; Smith and Mosier, 1986; Smith et al., 1982). In general terms, a consistent user interface is one in which a single set of rules for operation (or user-system dialogue) can be applied to perform all user tasks. A consistent user interface has been defined by Blake (1986) as having the following characteristics:

• Predictable: users anticipate what the system will do;
• Dependable: the system fulfills the users’ expectations;
• Habit-forming: the system encourages development of behavior patterns;
• Transferable: habits developed in one context apply in new situations; and
• Natural: the interface is consistent with the users’ world knowledge.
A consistent user interface is consistent with regard to both the look and the feel of the system. That is, consistency in visual and semantic aspects as well as consistency in the syntactic aspects of the system are important. For example, Nielsen (1989b) points out a small visual inconsistency with the Macintosh interface that can be a problem for new users. The Macintosh menu bar consists of a series of pull-down menus that are represented by names across the menu bar. In addition, another menu can be accessed from the menu bar through the Apple logo on the far left. Novice users tend to overlook the Apple logo as a source of a menu, since it is visually different from the words that mark the locations of other menus. Other types of inconsistencies can exist in the syntax of the system. For example, there should be consistent use of the sequence in which commands must be issued, such as whether they are “object then action,” or “action then object” in their format. User interface consistency is important not only within a given system or application, but also across systems and applications. Much of the success of the Macintosh can be traced to the fact that different applications all adhere to a common set of user interface conventions and thereby enhance usability and customer satisfaction. Most users of computer systems use a wide variety of software applications or systems. Many users come into contact with multiple systems and user interfaces within a given day. With the advent of windowing user interfaces, many different applications can appear simultaneously on a user’s workstation. For these reasons, consistency of user interfaces across products can be as critical as user interface consistency within a product.
7.1 Advantages of Consistency

The most frequently cited reason for maintaining user interface consistency is that it reduces training time and costs for new systems. Indeed, a series of studies (summarized in Polson, 1988) demonstrates that user interface consistency, both within a given system and across systems, leads to significant reductions in training time. With systems that are internally consistent, users can be taught a single set of user interface rules that can then be applied
throughout the interface to accomplish all tasks. Consistency between systems results in reduced training time because of the transfer of similar rules from an existing system to a new system.

Consistency is not only helpful for the novice user, however. Having only a single set of user interface conventions to remember over time is much easier than trying to map different rules onto different systems, even for expert users. For example, most computer users have experienced the frustration of switching between two keyboards that are slightly different in their key placement. An expert touch typist will incur some decrease in speed and increase in errors when moved to a keyboard that has the delete key in a different location, for example. In addition, expert users become frustrated when moving between two systems (each with windowing environments) where the menu option “Close” in one system reduces the window to an icon, and the menu option “Close” in the other system deletes the window and kills the process.

Finally, consistency both within and between systems leads users to feel more in control of the system. This strengthens the user’s feelings of self-confidence and mastery of the system (Nielsen, 1989b), which in turn enhances the user’s satisfaction with the software.
7.2 Difficulties in Achieving Consistency
If user interface consistency is so important to the usability of the system, why is it so difficult to find examples of consistent systems in the marketplace today? There are several possible answers: sometimes it is difficult to determine what aspects of a system should be internally consistent; sometimes it is difficult to decide what other systems a new system should be consistent with; and consistency can be expensive to achieve.

Grudin (1989), in his article titled “The Case Against User Interface Consistency,” gives a good example of a seemingly inconsistent design that benefits users. The user interface design issue in question is how to decide the best scheme for menu-item defaults. There are at least three ways in which the menu default could be established: (1) the first menu item could always be the default; (2) the most recently selected menu item could be the default; or (3) the most frequently selected menu item could be the default. Grudin provides a context where, within one application, each of these different default schemes might make sense. For example, if the application were a text processor and the user wanted to go through a document and italicize specific words, the user might select the option “Italics” from a menu when he or she is on the first occurrence of a word in need of italics. Then, when the user has moved to the next word to be italicized, it would be best for the option “Italics” to be the default on the menu; that is, the
most recently selected item becomes the default. Elsewhere in the same text processing application when the user wants to cut and paste a section of text, after the user has selected the option “Cut” from a menu it might be best to have the option “Paste” be the menu default, since this would be the most frequently performed action after the cut operation. Finally, suppose the user wanted to change globally a characteristic of the document, such as right justifying the text, but such an action is irreversible once performed. In this case, the option to “Apply Globally” should never serve as the default since accidental invoking of this command could have negative consequences. From this example (and others) Grudin concludes that user interface consistency may be bad for the design of usable computer systems.

This example simply highlights the fact that it can be complicated to determine which aspects of a system should follow a set of rules, and which aspects should not. In the case of the word processor example given previously, deciding that all menu defaults work in a consistent manner, across contexts, may have been bad for the usability of the system. However, with a different application, with a different set of contexts, a single rule for menu defaults may make sense, and the predictability of the scheme may have advantages for the user. Grudin’s example shows that achieving consistency is more than deciding on a simple set of rules to follow. Deciding which dimensions of the user interface to hold constant across different tasks and which ones to let vary requires a detailed understanding of the user and the user’s tasks if it is to be done in a way that increases usability.

Sometimes it is difficult to determine with which existing systems the system under design should be consistent. This is often the case when the user population already is familiar with multiple user interface conventions, or the user community is familiar with a user interface that is nonoptimal. In some cases, users are highly familiar with an old user interface that was not developed with ease of use as a goal and that is difficult for new users to learn. For example, many products adopted the Control-K sequences from the Wordstar word processing package because of their familiarity and not because they were easy to learn or easy to use. The designer in this case is faced with a dilemma: make the new system consistent with the old to keep training costs down for the existing users, or make the new system inconsistent, but easier to learn and use, realizing there will be a one-time cost involved in retraining old users. Whether to be consistent with existing interfaces is a complex decision requiring the consideration of many factors. The collection of performance measures from a usability test can help answer some of these questions.

Finally, since it can be difficult to determine which dimensions of a user interface should be consistent and which systems should be consistent with
one another, achieving the proper user interface consistency can be expensive. However, as the usability of the user interface becomes more important to users and purchasers of computer systems, investment in consistency will become the prudent choice for manufacturers of computer systems.
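Grudin’s menu-default example above can be restated as a choice among selectable default policies, decided per menu context rather than imposed globally. The sketch below is a hypothetical illustration of that idea only, not a recommendation for any particular application; the menu items and policies are invented.

```python
# Hypothetical menu-default policies from the discussion of Grudin's example:
# first item, most recently selected, or most frequently selected, with the
# policy (including "no default") chosen per menu context.
from collections import Counter
from typing import List, Optional

class Menu:
    def __init__(self, items: List[str], policy: str = "first") -> None:
        self.items = items
        self.policy = policy            # "first", "recent", "frequent", or "none"
        self.last: Optional[str] = None
        self.counts: Counter = Counter()

    def select(self, item: str) -> None:
        self.last = item
        self.counts[item] += 1

    def default(self) -> Optional[str]:
        if self.policy == "first":
            return self.items[0]
        if self.policy == "recent":
            return self.last or self.items[0]
        if self.policy == "frequent" and self.counts:
            return self.counts.most_common(1)[0][0]
        return None                     # e.g., irreversible commands: never default

format_menu = Menu(["Plain", "Italics", "Bold"], policy="recent")
format_menu.select("Italics")
print(format_menu.default())            # -> Italics

global_menu = Menu(["Apply Globally", "Cancel"], policy="none")
print(global_menu.default())            # -> None
```

The point of the sketch is the one made in the text: the useful regularity is not a single default rule applied everywhere, but a deliberate, task-informed choice of rule for each context.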
7.3 Methods for Enhancing Consistency
User interface consistency can be enhanced in several ways: by following industry, national and international standards on user interface design, by following published guidelines, through expert reviews of rapid prototypes, and by using UIMSs and development toolkits that foster consistent user interface design.
7.3.1 Standards

Several national, international, and private standards bodies exist to create standards for user interface design for computer systems. Below is a listing of these standards bodies and a brief description of the issues each addresses.

National standards are organized through the American National Standards Institute (ANSI), which is a voluntary, consensus-based organization. It is nonprofit and nongovernment, although it does interact with the United States government on some standards issues. ANSI does not develop the standards itself but manages private sector standards activities by accrediting Standards Development Organizations. The ANSI Standards Development Organizations involved with computer system user interface standards are as follows:

• X3V1.9. This group is managed by the Computer and Business Equipment Manufacturers Association. Among the issues covered by this group are icons used in graphical user interfaces, standardized terms and definitions for objects and actions used in text and office systems, symbols on equipment, and standards for keyboards and keypads.
• HFSHCI. This group is sponsored by the Human Factors Society and deals with issues of screen design, menu design, graphical user interface window control, and command languages.
• P1201.2. This IEEE-sponsored group is working on issues of interoperability between X-Window Graphical User Interface Systems.
• T1M1.5. This is an ANSI-sponsored standards body devoted to standards in the telecommunications industry. This subgroup is responsible
In addition to the standards activities at the national level, there are also international standards organized by ISO, the International Standards Organization. ISO is a cross-disciplinary industrial standards organization. Membership is voluntary, and members are usually representatives from national standards bodies within each country. The committees working on issues relevant to user interface design are as follows:

• TC159/SC4. This committee works on both hardware and software user interface issues. The hardware subcommittees have standards on video display terminal (VDT) design and other hardware standards for office environments. The software subcommittee works on issues of screen design and dialog design for both character-based and graphical user interfaces. The HFSHCI committee provides input to this body from the United States.
• JTC1 SC18/WG9. This is the international parallel to the X3V1.9 committee referred to previously and, as such, this committee also works on issues of icons, objects and actions, and keyboards and keypads.
In addition to these standards bodies, there are several trade associations within the United States that have developed their own standards independent of ANSI or ISO. The best known of these are UNIX International and the Open Software Foundation (OSF), both of which have developed standards for an open, standard UNIX operating environment and standard graphical user interfaces (OPEN LOOK and OSF/Motif, respectively). (UNIX is a trademark of Unix System Laboratories; OSF/Motif is a trademark of the Open Software Foundation.)

This listing demonstrates that finding the right standard to follow for a given project might not be easy. The standards bodies have tried to minimize the degree of overlap of the topics they cover by having members with cross membership in several of the organizations. However, there seems to be no systematic way in which the different organizations have divided up the topics, so finding a standard on a particular issue requires that all the standards be searched. Also, in most of the committees, the standards are still being developed, so no standards document may yet be available for a given design problem.

Reliance on the existing standards, although a valuable start, is not enough to ensure that the user interface of a system under design will be consistent with existing systems. Many of these standards bodies are relatively new
and the standards, by necessity, are written at a level of detail that cannot completely specify the design of a new system. Other methodologies for ensuring consistency must be used in addition to published standards.
7.3.2 Guidelines

A number of sets of guidelines for user interface design have been published that are useful resources for ensuring consistency. Many of these are compendiums of good user interface design principles that have been established either through empirical studies or through the consensus of experts (e.g., Brown, 1988; Smith and Mosier, 1986; Galitz, 1989; Shneiderman, 1987; Heckel, 1991; Rubinstein and Hersh, 1984; Gardiner and Christie, 1987). Although these documents are not considered standards because they have not been sanctioned by the standards organizations, they are in wide use in military and industrial computer system design environments and therefore are often regarded as de facto standards.

In addition to these general guidelines, it is often necessary to develop specific guidelines for the user interface of a large software system where many designers are working in parallel. In these cases, usually a standards or guidelines document is selected and then adapted to make it more specific to the product under development. This document then serves to guide future design decisions on the software system. Large companies often develop their own internal standards and guidelines documents to ensure that all products offered by that company have a "signature" look and feel. Sometimes, as for the Apple Human Interface Guidelines (1987), these internal standards become industry standards.
7.3.3 Expert Review of User Interface Prototypes

The best way to ensure user interface consistency within an application is to construct a user interface prototype and have experts review the prototype for consistency and compliance with the standards. At AT&T Bell Laboratories, on a large software development project with many human factors specialists, each responsible for a different aspect of the user interface, an iterative design approach was used. The user interface for a portion of the total interface was prototyped and then put through an expert review, with the rest of the user interface designers serving as the review panel. The prototype was then revised and evaluated for internal and external consistency again. The review panel was knowledgeable about both the relevant standards and the entire user interface for the system. Through this process of iteration, the team was able to ensure consistency on critical aspects of the user interface across the work of different designers. This same iterative
design of a prototype could be used with other usability testing methodologies to ensure consistency.
7.3.4 UIMSs and Development Toolkits

A relatively new way of ensuring consistency within and across user interfaces is to use UIMSs and toolkits that produce user interfaces conforming to a given standard. Many such tools exist. Developer's Guide, an interface builder developed by Sun Microsystems, for example, produces user interfaces that comply with the OPEN LOOK standard (which comes from the UNIX International standards body mentioned previously).

Although these tools are beneficial for ensuring that aspects of the standards are enforced in the user interface design, they cannot be relied on by themselves to produce consistent user interfaces. For the same reason that the standards alone are not enough, these tools in and of themselves will not ensure that the user interface is both internally and externally consistent. Instead, these tools can aid in the design of a user interface that is compliant with the standards. The resulting interface should be tested with users or a panel of experts, or checked against existing guidelines, to ensure that the proper aspects of consistency are maintained throughout the interface.
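As a small illustration of how a toolkit can foster consistency, the following sketch is hypothetical: Python's tkinter/ttk toolkit stands in for the UIMSs and interface builders named above, and the dialog layout rules are invented. The idea is that every dialog in an application is built through one shared routine, so padding, button placement, and labeling stay uniform no matter which designer implements the dialog body.

# Hypothetical illustration of toolkit-enforced consistency. tkinter/ttk is
# only a stand-in for the UIMSs and interface builders discussed above; the
# layout conventions below are invented for the example.

import tkinter as tk
from tkinter import ttk

def make_dialog(parent, title, body_builder):
    """Standard dialog frame: title, padded body area, and an OK/Cancel row."""
    win = tk.Toplevel(parent)
    win.title(title)
    body = ttk.Frame(win, padding=12)
    body.pack(fill="both", expand=True)
    body_builder(body)                       # dialog-specific content goes here
    buttons = ttk.Frame(win, padding=(12, 0, 12, 12))
    buttons.pack(fill="x")
    ttk.Button(buttons, text="OK", command=win.destroy).pack(side="right")
    ttk.Button(buttons, text="Cancel", command=win.destroy).pack(side="right", padx=6)
    return win

if __name__ == "__main__":
    root = tk.Tk()
    make_dialog(root, "Print",
                lambda body: ttk.Checkbutton(body, text="Collate copies").pack(anchor="w"))
    root.mainloop()

Centralizing layout decisions in this way enforces internal consistency automatically, but, as noted above, it cannot guarantee that the chosen conventions are right for the users' tasks; the result still needs expert review or user testing.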
8. An Example of Human Factors Activities during Product Development

In this section, a now classic example of the use of human factors methodologies during the development of a service is described. The goal is to show how human factors fits into the overall development cycle and the kind of value human factors activities add to the final service. The example, the Olympic Message System, was a service developed by IBM.

Gould et al. (1987) provide a detailed explanation of their human factors activities during the design of the Olympic Message System (OMS). This system was designed to allow Olympic athletes to send and receive voice messages among themselves. Additionally, the service allowed people from around the world to send voice messages to the athletes. The system had a touch-tone user interface for the athletes and also had an operator interface for users calling from rotary dial phones. The operator interface was necessary because most of the world does not have access to touch-tone phones.

To retrieve a message, an Olympian would dial the phone number of the service and then enter on the phone keypad his or her country code (USA, for example), his or her last name, and a password. If a new message was present, it would be played. After listening to the message, the Olympian
could replay the message or send a message to someone. A friend or family member who wanted to send a message to an athlete would call the National Olympic Committee office in Los Angeles (the location of the Olympics). A staff member there, using a touch-tone phone, would connect the caller to the OMS, enter the required information, and then allow the caller to leave a voice message. The touch-tone prompts were programmed in 12 different languages. Kiosks with touch-tone phones and directions for use were placed around the Olympic villages.

Predevelopment User Scenarios Identify Functional Conflicts. At the beginning of the project, the human factors specialists designed some detailed user scenarios. The following scenario depicts an Olympian retrieving a message:

Olympian: (Dials 740-4560)
OMS: Please keypress your three-letter Olympic country code.
Olympian: U S A
OMS: United States. Please keypress your last name.
Olympian: J O N E S
OMS: John Jones. Please keypress your password.
Olympian: 4 0 5
OMS: New messages sent by Message Center. "John, good luck in your race. Dad." End Message. Press 1, listen again; 2, leave a message; 3, hang up.
Olympian: 3
OMS: Good-bye.

Scenarios such as this one were the first level of specification for the system. These scenarios included a definition of the functionality required (the ability for athletes to send and receive messages within a single phone call, for example), as well as the definition of the user interface. The scenarios helped to identify conflicts in the design of the system and allowed for comments and criticism by the design team. This was beneficial early in the project, since nothing had yet been built. During this early phase of design, the designers decided to exclude a message verification function based on feedback received on the user scenarios. This feature would have allowed message senders to learn if and when their messages had been heard by the recipients. The feature was deemed too complicated and unnecessary based on reviews of the user scenarios.
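To show how a scenario like the one above can serve as a machine-readable first level of specification, here is a minimal Python sketch. It is an assumption offered purely for illustration, not part of Gould et al.'s work; the data structure and function names are invented. The same scenario file could be circulated for design review and later replayed against a prototype to check that prompts and options have not drifted.

# Illustrative sketch only: a user scenario captured as structured data.
# Prompt wording is taken from the scenario above; the field names and the
# helper function are invented for this example.

RETRIEVE_MESSAGE_SCENARIO = [
    ("user", "dials 740-4560"),
    ("system", "Please keypress your three-letter Olympic country code."),
    ("user", "U S A"),
    ("system", "United States. Please keypress your last name."),
    ("user", "J O N E S"),
    ("system", "John Jones. Please keypress your password."),
    ("user", "4 0 5"),
    ("system", "New messages sent by Message Center. "
               '"John, good luck in your race. Dad." End Message. '
               "Press 1, listen again; 2, leave a message; 3, hang up."),
    ("user", "3"),
    ("system", "Good-bye."),
]

def print_scenario(scenario):
    """Render the scenario as a turn-by-turn transcript for design review."""
    for speaker, line in scenario:
        label = "Olympian" if speaker == "user" else "OMS"
        print(f"{label:>8}: {line}")

if __name__ == "__main__":
    print_scenario(RETRIEVE_MESSAGE_SCENARIO)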
User Guides Become System Requirements. Shortly after the development of the user scenarios, user guides were written. One user guide was written for the athletes, and a second was written for friends and relatives. Each of these user guides had to be short, about a single page, to be practical. The user guides were reviewed, tested by many people, and rewritten. This process was iterated to produce user guides that were easy to use yet brief enough for the nature of the application. The user guides became the definitive documents for the messaging system over the course of the project.

Prototypes Used to Refine the Design through Testing and Iteration. A few weeks after the project started, an early prototype of the system was completed. This prototype was used to collect information about the usability of the system. Laboratory personnel and visitors to the laboratory served as participants in the usability study. These users were appropriate for collecting the type of information needed at this early stage of design. During this phase, rough spots in the prototype were worked out, and major problems with the user interface design were identified and reworked. The designers learned, for example, that four alternatives on an audio prompt were too many; this had major implications for the organization of the first user interface. Testing with this early prototype was also useful for beginning to define the help system. When participants were confused or stuck, they were asked what they wished they knew at that point to complete the task.

After implementing these changes, the next step was to demonstrate the prototype system to a diverse group of people, focusing on users outside the United States. Since many of the system's users would have no computer experience at all, it was important to test the user interface with participants who more closely matched the user population. Through these tests the designers learned to keep the functionality to a minimum to keep the system simple. For example, they realized it was necessary to eliminate the capability for friends and family to review and edit their voice messages before they were sent.

Another source of input to the design was the Olympic athletes themselves. By including an athlete on the design team and by interviewing many others, the designers could ensure that the final system would fit into the athletes' environment during the games. For example, the designers received insights into how the athletes spent their time while at the games; this information was useful in determining where to place the kiosks. The designers also learned about the content of typical messages the athletes were likely to receive, which was useful for estimating the length of messages.
Various other tests were conducted with the system. These included tests to ensure that changes to the user interface were beneficial, tests designed to stress the system's capacity, and additional tests to uncover problems with the design. Through this process of test and redesign the system became more and more usable.

Successful System Is the Result. The final system was considered a success. Over 40% of the athletes used the system, and over half of the messages the athletes received came from friends and family who had not come to the Olympics. During the design of the Olympic Message System, many human factors methodologies were used, including the creation of user scenarios, prototyping, and usability testing through the collection of verbal reports and performance measures. The iterative approach and the continual focus on the end users helped to ensure that the final user interface was as usable and useful as possible.
9. Why Is a Human Factors Specialist Needed?

In the preceding sections, the role and activities of a human factors specialist on a project team were described, along with some of the primary methodologies such specialists use. However, in the real world of system design, the need for a human factors specialist is often not recognized. Sometimes the concept of user-centered design is foreign, and involving potential users of the system during the design process is not given priority. Fortunately, this situation is becoming less common. More frequently, the need to consider the system's users is recognized, but other members of the project team (e.g., systems engineers and developers) feel they can adequately represent the users and design the user interface; therefore, the reasoning goes, the project team does not need a human factors specialist.
9.1 The Process without Human Factors Expertise

Only limited data are available about the design process in software development environments. However, the available data show that systems programmers and developers, working as both designers and implementors of the user interface, typically do not engage in the user-centered design activities that are necessary to ensure a usable user interface, even when they recognize the importance of these activities. Their use of data and guidelines on human factors and human-computer interaction is limited, their
knowledge of users and users' tasks is limited, and iterative design and user testing are infrequent.

For example, Hannigan and Herring (1987) studied the experience of designers of office systems across five major manufacturers in six European countries. They found that, although many designers described man-machine (sic) interface design as a major part of their jobs, the designers made little use of human factors data or user interface design information. Hannigan and Herring also found that the knowledge available about users' tasks was not adequate; it was often second- or third-hand information gleaned from informal contact with sales or support staff, or it was based on the designers' experience with the product during development and their guesses about its use. The designers knew that users' needs should be considered during the design process, but they did not understand exactly what type of information to collect or how to collect it; thus users' needs did not receive the same type of formal consideration given to technical issues. Usability tended to be considered after development rather than during development, and it was often assessed on an ad hoc basis through individual designer experience or casual contacts.

The findings of Hannigan and Herring are consistent with those of other researchers. Johnson and Johnson (1989) interviewed three system designers and developers in depth about their design experiences. They found that the designers had little involvement with users, did not know how users carried out their tasks, and sometimes did not even know what the users' tasks would be. Two system designers said explicitly that task and user information were regarded as an adjunct to the design process rather than a starting point. However, all felt that information about how users performed their tasks would be useful.

Rosson et al. (1988) interviewed 22 designers, 17 working on projects in IBM and 5 working on projects in other organizations. They found that most designers reported some form of user testing, but the type and timing of testing varied among projects. For most projects, testing occurred either during design or after design, but not both. Although many of the projects used an incremental design process that would accommodate collecting user feedback early and continuously, user feedback was not collected on an ongoing basis.

These reports of design practice are consistent with the findings of an earlier, frequently cited study of designers' reports of the major steps of design (Gould and Lewis, 1985). Gould and Lewis asked 447 designers to list the five or so major steps that should be included in developing and evaluating a new computer system for "end users." Even when liberal scoring criteria were used, only 40% of the respondents mentioned empirical measurement (i.e., user testing) and only 20% mentioned iterative design.
Thus, principles of good user interface design are not widely practiced by system designers and developers, even when they report that the use of such principles would be valuable in helping them select among design alternatives and in ensuring that users’ needs are met.
9.2 Constraints Overcome with Human Factors Expertise

There are several reasons why it is unrealistic to expect systems analysts and software developers who are not trained in human factors and human-computer interaction to give priority to the relevant behavioral literature and to empirical behavioral methods.
9.2.1 Primary Responsibilities
When a software developer is responsible for both the design and the development of the user interface, the developer typically views his or her primary task as implementation. That is, the developer's main task is to implement the system in code and to ensure that it meets performance, reliability, and schedule objectives. The developer is evaluated for accomplishing these primary objectives, and so the developer naturally concentrates on them. Rarely are precise usability objectives stated; correspondingly, rarely does usability receive the same degree of attention that is awarded to the other objectives.

In contrast to the software developer, the primary task of the human factors specialist, functioning as user interface designer, is to ensure that the interface is usable. The human factors specialist thus serves as a champion for user-centered design activities and as a user advocate on the project team. The constraints of schedule, hardware and software platform, and cost are well known to members of the development team. The human factors specialist highlights the constraints imposed by the goal of designing a usable system. The human factors specialist is likely to set specific, measurable usability goals at the beginning of the project, thus making usability an objective to be attained along with all the others.
9.2.2 Conceptual Model of the System

Second, because the developer's primary task is to implement the software, the developer must have a conceptual model of the system that is based on knowledge of the system's software and its architecture (Gaines and Shaw, 1986; Gentner and Grudin, 1990; Gillan et al., 1992). This engineering conceptual model must reflect the underlying mechanisms through which the system's functionality is provided. Gentner and Grudin (1990) have
argued that often the ideal interface, from a developer's perspective, is one that offers direct access to the control points of the mechanism, i.e., an interface that reflects the underlying structure of the software. They offer an example of a package of graphical tools that omitted some useful features from the interface. The developer explained that these features had initially been planned for the interface, but the existing interface had been useful for debugging, and the developers had therefore decided it was acceptable for the final product. Thus, an interface useful for the developers was deemed to be appropriate also for users who would be performing different tasks. While this might seem unusual, comparable experiences are frequently reported by human factors specialists working with developers on software development projects.

From the user's perspective, the ideal interface is one that is easily understood and that allows the user to perform tasks in the most straightforward manner. The typical user has little interest in the system's underlying mechanism. The user's conceptual model of the system is likely to be based on a knowledge of the task to be performed, some general notions of computer systems based on past experience, and the "system image" that is provided through the user interface itself. The more the user interface is based on the user's tasks and the user's previous experience, the easier it will be for the user to learn and use it.

Thus, the perspective of the developer is likely to be quite different from that of the user. Even if the developer attempts to understand and design the system from the user's perspective, elements of the engineering model may creep into the user interface simply because the developer is so knowledgeable about the underlying system.

In contrast, the human factors specialist is further removed from the underlying software. The human factors specialist's major role is to understand how the user will use the system, and then to design a user interface that supports task performance. The human factors specialist focuses on how the user will view the system (i.e., on what the user's conceptual model of the system will be) and attempts to ensure that the user interface provides an easily understood model, perhaps by using familiar metaphors such as the desktop or chalkboard (Carroll et al., 1988). The human factors specialist is aided in understanding the user by employing behavioral methodologies to collect information and feedback from the users.

Given the differences in the roles and experiences of the software developer and the human factors specialist, it might be expected that their conceptual models would differ. This difference in models was confirmed by Gillan et al. (1992), who experimentally investigated differences between the cognitive models of the human-computer interface held by software development experts and human factors experts. Software experts tended to organize
their concepts of the human-computer interface on dimensions related to technology and implementation, as well as user characteristics, whereas human factors experts organized their concepts more consistently according to user characteristics. These differences reflect the fact that, by job definition, software developers must be concerned with technology and implementation, whereas, by job definition, the human factors specialist can focus more directly on the user. Gillan et al. (1992) noted that the developer's cognitive model may represent a compromise between knowledge about the way the human-computer interface will be coded and knowledge about its functionality, which is why it may be difficult for a developer who is also responsible for design to keep the engineering model entirely out of the user interface.
9.2.3 Use of the Literature

Third, it is not easy for people who are not human factors specialists to access and use the relevant human factors literature. Studies of the design process have shown that designers often do the best they can with the information they have immediately available, the information they can rapidly acquire, or the information they perceive to be of high value (Boff, 1987; Meister, 1987; Meister and Farr, 1967). Boff (1987) listed several "chokepoints" in the acquisition of existing, relevant information for system design. The chokepoints included:
1. Ignorance of the existence of useful technical data;
2. Not understanding the value (i.e., importance) of specific technical information to the design problem;
3. Difficulty in accessing the relevant information; and
4. Not knowing how to apply the information to the design problem.

Evidence for the strength of these chokepoints comes from Allen's (1977) finding that 92% of the technical information used by engineers is already available in their personal files or with colleagues at the time the information is needed. If the technical data in question are data relevant to human-computer interaction and if the designers are not trained in human factors, they are likely to encounter each of the chokepoints. They will not know what useful data exist, will not understand the value of the data, will not know how to access the data easily and rapidly, and will not know how to apply them to design problems.

In contrast, the human factors specialist has expertise in user interface design issues and in the relevant behavioral literature (i.e., the literature in human factors and in human-computer interaction). He or she recognizes the value of behavioral sciences information, knows how to access it, and knows how to apply it. Thus, there are fewer chokepoints in the identification
and use of such knowledge for the human factors specialist than for others on the project team.
9.2.4 Knowledge and Use of Behavioral Methodologies

Fourth, designers who have not been trained in behavioral methodologies are unlikely to regard them as critical tools in their work, especially when the use of such methods is not patently supportive of their primary goals. Furthermore, there are a large number of behavioral methodologies, and the specific question, the users and their context, and the development context determine which technique is most appropriate for each individual situation. Choosing a methodology that is adequate yet cost-effective requires much skill, and developers, who are experts in other areas, are unlikely to know which specific methodology should be used for which questions. In sum, they are less likely to use the behavioral methods, and, if they do, they are less likely to use them most effectively.

The human factors specialist has skills in the behavioral, empirical methods necessary for collecting and interpreting information about users and users' tasks and for collecting feedback from users throughout system design and development. The specialist knows what methods to use, when to use them, and how to use them in the most time- and cost-effective manner.
9.2.5 The Development Environment

Finally, software development environments, in general, are not yet supportive of user-centered design and development processes (Grudin, 1990a, 1990b; Poltrock and Grudin, in preparation). The critical importance of the user interface may not be recognized, or if it is recognized, it may not receive the necessary formal attention in the design and development process (e.g., setting specific usability targets, collecting user feedback). In addition, the development processes of the organization may not readily support rapid user interface prototyping and iterative design based on user feedback. Although rapid prototyping tools support fast iterations, many system developers, their supervisors, and their top management are unfamiliar with these tools and their value.

In addition, there are multiple barriers between users and developers that prevent obtaining the necessary user feedback. For example, marketing representatives may not know who the actual users of a product will be; contacts may be easier to obtain with "customers" who are managers or information system specialists than with potential users; and marketing specialists may be reluctant to let members of the design and development team meet with users (Grudin, 1991).
It is more difficult for a human factors specialist to overcome these types of organizational constraints than to overcome some of the other problems. However, an important step is having a champion for the user, for user-centered design, and for a usable user interface on the project team. An even more significant step would be convincing top managers to become champions as well (Perrow, 1983; Riley and McConkie, 1989). Having a human factors specialist on the team will certainly not immediately surmount all the barriers to user-centered design cited previously. However, it does confront three of the problems head-on, by satisfying the need for a project team member who has usability as a primary objective, who can access and rapidly understand the relevant behavioral literature, and who knows and can effectively use behavioral methodologies. By enlisting others on the project team as champions for the user as well, the human factors specialist and all other team members can work together to change the organizational culture so that it is easier to create systems that meet users' needs.
10. Cost Justification for Human Factors

Managers of human-computer system development projects must make decisions about which activities are important enough to be included in a system development cycle that is increasingly pressured by competition to be shorter and less costly, while better meeting users' needs. The argument for inclusion of human factors activities is strong. The major points of the argument are the following:

• The code devoted to user interfaces is becoming an increasingly large percentage of the total code; therefore, techniques that speed the design and development of this code may significantly decrease the overall time and costs required for system design and development.
• The use of human factors techniques can reduce both development costs and support and maintenance costs. These are major benefits for the company developing and marketing the system.
• The use of human factors techniques in system design can increase system usability, which reduces training costs, increases productivity, and increases user satisfaction. These are major benefits for the users and companies who purchase the system.
• The usability of a product plays a critical role in users' acceptance of the system and therefore in the system's success or failure in the marketplace.
Each of these points is elaborated in the following sections.
10.1 Increasing Percentage of User Interface Code

User interface code is becoming an increasingly large percentage of the total code for many human-computer systems. Smith and Mosier (1985) conducted a survey of 201 people concerned with the design of information systems. Their data indicated that about 30-35% of the total lines of code were devoted to the user interface. Rosenberg (1989) analyzed the percentage of software dollars devoted to designing, implementing, and testing user interfaces built in different technologies. He estimated that approximately 30% of software dollars were spent on the user interfaces of software products that were mainly text-based but delivered on workstations with user dialogues based on icon and mouse manipulation. He also noted that the development costs for the user interface increase proportionally as the user interface design incorporates more direct-manipulation dialogue and graphical displays. The increase in user interface code for graphical, direct-manipulation interfaces is documented in MacIntyre et al.'s (1990) calculation of the percentage of code needed to implement basic Macintosh user interfaces in two different programs; it was 47% for one program and 60% for the other. In a more recent study of user interface programmers, Myers and Rosson (1992) found that an average of 48% of the code in software applications is devoted to the user interface. In addition, a large proportion of time throughout system design and development is allocated to the user interface: 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase.

In sum, user interface development is costly, and changing the user interface code after initial implementation adds even greater costs. Techniques that reduce the costs of designing and developing the user interface have the potential to significantly reduce the overall development costs of a system.
10.2 Lower Development and Support Costs for Vendors

The use of human factors methodologies can reduce development costs, and can reduce the support and maintenance costs incurred when serious problems are discovered in the usability of the system after its introduction to the public. Development costs are reduced through better identification of users' needs early in system development, through iterative prototyping and testing to ensure that the user interface meets users' needs, and through avoiding late, costly changes to user interface code (Boehm, 1988; Mantei and Teorey, 1988; Tavolato and Vincena, 1984).

Boehm (1988) has generated a "top-ten" list of prioritized software risks. On his list, developing the wrong software functions is third and developing
the wrong user interface is fourth. The results of Myers and Rosson's (1992) survey of user interface programmers are consistent with Boehm's ranking of software risks. In response to a survey question about the most difficult aspects of developing an application's user interface, programmers cited issues of user interface design most frequently. These issues included getting information from users about what they want, predicting users' requirements, designing for both the naive and the experienced user, achieving consistency, and understanding and conforming to guidelines. Typical human factors methodologies can reduce the difficulty and the risk. Boehm cites task analysis, mission analysis, user surveys, user scenarios, and prototyping as critical "risk management techniques" that support a designer in obtaining the information necessary to avoid developing the wrong user interface, as well as the wrong functionality.

Mantei and Teorey (1988) have attempted to provide a realistic estimate of the amount of system maintenance time saved by designing a system "to match the thinking behavior and limitations of the users" (p. 435). They estimate that design changes made to a functioning prototype before system release will cost one-fourth of what the same changes would cost if made to a released system. Therefore, they offer the rule of thumb that, to avoid the heavy costs of changes after release, a prototype should always be used when the cost of the prototype is less than one-fourth of the total project cost. Note that the costs of making changes to a user interface prototype created with a rapid prototyping tool are even less than the costs of making changes to a functional prototype that has already been coded.

In addition to reducing initial development costs, human factors techniques may prevent the need to provide extensive support after system deployment to users who find systems difficult to use. For example, many companies offer help lines to users of their products. Calls prompted by usability problems may be significantly reduced by designing the system to be easier to learn and to use. Mantei and Teorey (1988) also note that employees may attempt to sabotage a system that is difficult to use and causes them extreme frustration; such sabotage may involve entering inaccurate data or reporting false system failures. These problems can prove costly to the company that developed the system, as well as to the company that uses the system.
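Mantei and Teorey's rule of thumb above can be restated as a simple calculation. The Python sketch below is only an illustration (the function names and the dollar figures in the example are invented); it encodes the two estimates from the text: changes caught in a prototype cost roughly one-fourth of the same changes made after release, and prototyping is worthwhile whenever the prototype costs less than one-fourth of the total project.

# Illustrative restatement of the one-fourth rule of thumb described above.
# Dollar figures in the example are invented for demonstration only.

def prototype_is_worthwhile(prototype_cost, total_project_cost):
    """Prototype pays off when it costs less than one-fourth of the project."""
    return prototype_cost < 0.25 * total_project_cost

def expected_change_cost(change_cost_after_release, caught_in_prototype=True):
    """Estimated cost of a design change, before vs. after release."""
    return change_cost_after_release * (0.25 if caught_in_prototype else 1.0)

if __name__ == "__main__":
    print(prototype_is_worthwhile(prototype_cost=60_000,
                                  total_project_cost=400_000))      # True
    print(expected_change_cost(80_000, caught_in_prototype=True))   # 20000.0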
10.3 Greater Usability and Increased Productivity for Users

The application of behavioral principles to system design increases the usability of the system, as has been documented in many human factors reports (e.g., Bennett, 1984; Good et al., 1984; Gould et al., 1987; Landauer, 1988). The usability improvements result in reduced learning time for users,
fewer errors, and faster use. The benefits for the Olympic Message System (Gould et al., 1987) were described previously in this chapter. Another example is that of the SuperBook text browser (SuperBook is a trademark of Bell Communications Research Inc.), designed to support the rapid location of information. Three short, empirical design-and-evaluation cycles reduced average information retrieval times by over 50% and increased success rates by about 25% (Landauer, 1991).

The costs of learning time and errors (Karat, 1990a, 1990b; Mayhew, 1990; Mantei and Teorey, 1988) can be calculated fairly simply. These calculations show that the cost savings of a well-designed system accumulate rapidly when there are many users of the system. Even if the number of users is not large, when critical functions are performed by a system (such as in air traffic control centers, nuclear power plants, or communications networks), a single error can result in major financial, as well as human, losses.

Mantei and Teorey (1988) offer the following example of the cost savings of a decrease in learning time. Assume that the learning time for a new system is cut by one-fourth when it is well designed. Assume that 50 employees per year are trained on the system, and that learning time is typically two weeks of classes. For hourly employees who earn $15.00 an hour, the savings in education costs to the business would be $15,000 (Savings = 50 employees x 20 hours of training time saved x $15 per hour = $15,000). If salaried employees who earn $40.00 an hour required training, the cost savings would be $40,000.

Large cost savings may also be realized by decreasing the time it takes users to perform tasks, i.e., by increasing their productivity through enhanced system usability. On two separate application projects, Karat (1990a, 1990b) documented a decrease in user time, and therefore company costs, as a result of iterative prototyping and testing. One software application was a small development project, but the application is used on a daily basis by 23,000 IBM marketing personnel (Karat, 1989a). The second application was a larger development project that supports a business task performed occasionally by 240,000 IBM employees. For the first application, three usability tests were conducted: the first was a field prototype test, the second a laboratory prototype test, and the third a laboratory test of production code on a test system. For the second application, an observational field test of the current process was conducted, and then three usability prototype tests were conducted in a laboratory.

Karat calculated the costs of the prototyping and usability testing. She then calculated the time savings in task performance that resulted from the usability improvements from the first prototype to the final design. The dollar value of the savings was
calculated for the number of people who would use the application and the frequency of use. For example (without going into detail), the usability improvements resulted in a savings of 4.67 minutes in task time for the first application. This time savings, multiplied by the user population and their hourly costs for only the first three uses of the application, resulted in a total savings of $41,700. A similar procedure was used for the second application. The cost of the usability work was then compared with the cost savings. There was a 2:1 dollar savings-to-cost ratio for the first application and a 100:1 savings-to-cost ratio for the second application. Karat notes that these calculations provide conservative estimates of the benefits of the behavioral techniques for improving usability.

Documented case histories like Karat's are still relatively few, but data in the human factors literature that show differences in performance time for different design alternatives are rapidly increasing (Mayhew, 1990). These data can be used to help select the best (i.e., fastest to use and easiest to learn) user interface designs, and to lend concrete support for the value of such designs.
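The arithmetic in the two examples above is simple enough to capture in a few lines. The Python sketch below is illustrative only: the 50-employee, 20-hour, $15/$40 figures are Mantei and Teorey's from the text, while the usability cost used to reproduce Karat's 2:1 ratio is inferred from the reported $41,700 savings and is therefore an assumption, not a reported figure.

# Illustrative restatement of the cost-savings arithmetic discussed above.

def training_savings(employees, hours_saved_per_employee, hourly_cost):
    """Annual training-cost savings from reduced learning time."""
    return employees * hours_saved_per_employee * hourly_cost

def savings_to_cost_ratio(usability_savings, usability_cost):
    """Ratio used to compare usability benefits with the cost of obtaining them."""
    return usability_savings / usability_cost

if __name__ == "__main__":
    print(training_savings(50, 20, 15.00))        # 15000.0 (hourly employees)
    print(training_savings(50, 20, 40.00))        # 40000.0 (salaried employees)
    print(savings_to_cost_ratio(41_700, 20_850))  # 2.0 (cost figure inferred, not reported)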
10.4 Greater User Acceptance and Marketplace Success

As noted earlier in this article, usability is becoming a critical issue for users, one that influences their purchasing decisions and their acceptance and use of a product. The usability of a product plays a large role in the value of the product's functions, since functions that cannot be easily used may not be used at all, or may be used only with frustration. When only one product offers a particular functionality, users may be willing to purchase the product solely for the function and to overlook usability problems. However, when more than one product offers the same functionality, purchasing decisions may be made on the basis of usability.

A case study from Xerox offers a striking example of the importance of usability for success in the marketplace, and of the way a focus on usability can turn a struggling product line into a successful one. Wasserman (1989) details how a Xerox design strategy based on "operability" (i.e., usability) was essential to reversing a 50% loss of market share suffered by Xerox between 1976 and 1982. In the 1970s, Xerox had adopted a market strategy of implementing the complex features characteristic of high-performance, production machines in the copying machines used by casual users. However, the casual users knew neither how to operate nor how to service the machines, and they rejected them.

Wasserman describes the adoption of a "user-oriented design" strategy led by human factors specialists and industrial designers. This strategy embodied many of the key behavioral design principles used by human factors specialists: starting from the needs of the user and the user's conceptual model, analyzing the tasks to be performed
by the user, adapting the machine to the user and not vice versa, iterative design and user testing with prototypes, assisting the user in constructing a mental model of the machine and task, and so forth. The strategy was highly successful. In 1983, a product of the strategy, the 1075, was introduced and became one of the most successful products that Xerox had ever designed. The impact on market share was dramatic: Xerox recovered from a 42% share in 1982 to about 55% of the worldwide market by 1986.

In summary, the benefits of including human factors in the software lifecycle are becoming better documented and more obvious. Short-term benefits include a reduction in development costs and time, as iterative prototyping and testing with users ensure that appropriate, usable functionality is implemented. Longer-term benefits for the vendor company include increased sales or revenue and decreased service and maintenance costs. Longer-term benefits for users and their companies include decreased training and support costs, fewer errors, and increased productivity.

The responsibility for including human factors activities appropriately in the lifecycle falls on the human factors specialist, other team members, and their management. The human factors specialist must consider the costs of the various behavioral methodologies that might be used, and choose the most cost-effective approach to obtaining useful behavioral information. The project manager and team members must support the use of human factors methodologies when they are truly value-added, and assist the human factors specialist in incorporating the methods into the overall software development process.
11. Conclusions

Human factors specialists are important contributors to the design and development of human-computer systems. The primary technical foundation for this contribution is knowledge of human capabilities and limitations (and the ability to obtain such information rapidly) and skill in behavioral analysis and evaluation methodologies. This body of knowledge and skills is applied as human factors specialists function as user interface designer and evaluator, user advocate, and integral project team member. (Some of these roles may be shared with others on the design team, especially as multidisciplinary teams adopt an iterative, user-centered design process.)

The human factors specialist tries to ensure that the human component of human-computer systems is considered throughout the design process, i.e., that the users' characteristics, tasks, and environments remain in focus throughout the design process. The final product should then be a tool that
extends and enhances users' effectiveness, that builds on users' existing skills, and that is designed so that the user's attention can remain fixed on the task and not on the computer-based tool.

As technology changes, so too will the focus and methodologies of the human factors specialist. Multimedia, virtual reality, natural language systems, and computer-supported cooperative work (to name just a few) are changing the content and nature of work and requiring the adaptation of behavioral methodologies, just as did the movement from the design of hardware to that of software. In addition, human factors will continue to be a multidisciplinary field, and new disciplines may enter to make contributions; for example, specialists in theatre and narrative are already becoming involved as interfaces become more realistic and interactive. But regardless of the discipline of origin, the starting point of the human factors specialist is the human, and the focus is on ensuring that the user interface provides the functionality of the tool in the easiest to learn, easiest to use, safest, and most satisfying manner.

The effective integration of human factors specialists into the design and development process does not occur without planning and a supportive context. Some of the factors that are important for the best use of human factors expertise are the following:

• Management understanding of the user-centered design process and support for the involvement of human factors specialists.
• Integration of human factors activities into the project plan and into the design and development process.
• Research conducted by human factors specialists (and others) on the best ways to use new technologies. This research should be done outside of the main product development process because, during the development of a specific product, there is rarely time for the appropriate studies. It should, however, be directly relevant to anticipated system development so that the research findings can be incorporated rapidly during the development process.
• Tools that enable rapid prototyping of the human-computer interface and iterative design and testing cycles, and, when the prototyping tools permit, incorporation of the prototype into the final system.
• Human factors specialists who have a solid foundation in behavioral methodologies and are willing and able to use and adapt techniques that are appropriate for the particular system development context.
Much progress has been made in each of these areas, but much work remains. As users expect and demand more usable systems and as companies see the competitive advantage of usable systems, there will be greater
emphasis on ensuring that the human user is not left on the periphery (or at the end of the process as the "end user") during the design and development process. The benefits of including people trained to deal with the human component of human-computer systems may well become as obvious as the need to include people trained in the computer component of such systems.

ACKNOWLEDGEMENTS

We would like to thank a number of our colleagues who carefully read the drafts of this manuscript and offered useful suggestions: Mark Altom, Nick Benimoff, Helen Fairbrother, Walter Hawkins, Richard Jordan, Ginny Ju, Sandra McNabb, Bob Mulligan, Paul Newland, and Kevin Stone. Harry Blanchard deserves special thanks for his detailed comments on the first draft of the manuscript, Bob Mulligan for his contribution to the section on task analysis, and Paul Newland for his help with the references (as well as for his support throughout this project). Thanks also to our many colleagues in the User Interface Planning and Design Department with whom we have struggled and developed in our understanding of how to be effective as user advocates and team members in the design of human-computer systems. We would also like to thank Michael Gravelle of CSERIAC (Crew System Ergonomics Information and Analysis Center), Wright-Patterson Air Force Base, Ohio, who provided a literature search when we began work on this project.
REFERENCES

Adams, J. A. (1989). "Human Factors Engineering." Macmillan, New York.
Allen, T. J. (1977). "Managing the Flow of Technology: Technology Transfer and the Dissemination of Technological Information within the R & D Organization." MIT Press, Cambridge, Massachusetts.
Anderson, N. S., and Olson, J. R., eds. (1985). "Methods for Designing Software to Fit Human Needs and Capabilities." National Academy Press, Washington, D.C.
Anderson, R. I. (1990). Task Analysis: The Oft Missing Step in the Development of Computer-Human Interfaces; its Desirable Nature, Value, and Role. In "Human-Computer Interaction-Interact '90" (D. Diaper, D. Gilmore, G. Cockton, and B. Shackel, eds.), pp. 1051-1054. Elsevier Science Publishers, New York.
Andriole, S. J. (1990). Command and Control Information Systems Engineering: Progress and Prospects. In "Advances in Computers" (M. C. Yovits, ed.), pp. 1-98. Academic Press, San Diego, California.
Apple Computer, Inc. (1987). "Apple Human Interface Guidelines: The Apple Desktop Interface." Addison-Wesley, Reading, Massachusetts.
Asahi, T., and Miyai, H. (1990). Usability Testing Method Employing the 'Trouble Model'. Proc. of the Human Factors Society 34th Annual Meeting, Vol. 2, pp. 1233-1237. Human Factors Society, Santa Monica, California.
Aucella, A. F., and Ehrlich, S. F. (1986). Voice Messaging: Enhancing the User Interface Based on Field Performance. Proc. CHI '86 Human Factors in Computing Systems, pp. 156-161. ACM, New York.
Baecker, R. M., and Buxton, W. A. S. (1987). A Historical and Intellectual Perspective. "Readings in Human-Computer Interaction: A Multidisciplinary Approach." Morgan Kaufmann, Los Altos, California.
Baker, C., Eike, D. R., Malone, T. B., and Peterson, L. (1988). Update of DOD-HDBK-761: Human Engineering Guidelines for Management Information Systems. Proc. of the Human Factors Society 32nd Annual Meeting, Vol. 1, pp. 335-339. Human Factors Society, Santa Monica, California.
Benimoff, N. I., and Whitten, W. B., II (1989). Human Factors Approaches to Prototyping and Evaluating User Interfaces. AT&T Tech. J. 68(5), 44-55.
Bennett, J. L. (1984). Managing to Meet Usability Requirements: Establishing and Meeting Software Development Goals. In "Visual Display Terminals: Usability Issues and Health Concerns" (J. Bennett, D. Case, J. Sandelin, and M. Smith, eds.), pp. 161-184. Prentice-Hall, Englewood Cliffs, New Jersey.
Bennett, J. L., Conklin, P., Guevara, K., Mackay, W., and Sancha, T. (1990). HCI Seen From the Perspective of Software Developers. In "Human-Computer Interaction-Interact '90" (D. Diaper, D. Gilmore, G. Cockton, and B. Shackel, eds.), pp. 1039-1042. Elsevier Science Publishers, New York.
Bertalanffy, L. von (1968). "General System Theory: Foundations, Development, Application." Braziller, New York.
Bewley, W. L., Roberts, T. L., Schroit, D., and Verplank, W. L. (1983). Human Factors Testing in the Design of Xerox's 8010 'Star' Office Workstation. Proc. of the CHI '83 Conference on Human Factors in Computing Systems, pp. 72-77. ACM, New York.
Bias, R. G. (1990). Cost-Justifying Human Factors Support: Pay Me Now or Pay Me Later, But How Much? Proc. of the Human Factors Society 34th Annual Meeting, Vol. 2, pp. 832-833. Human Factors Society, Santa Monica, California.
Bias, R. G. (1991). Walkthroughs: Efficient Collaborative Testing. IEEE Software 8(5), 94-95.
Bias, R. G., and Alford, J. A. (1989). Factoring Human Factors in IBM. 1989 IEEE Int'l Conference on Systems, Man and Cybernetics, Vol. 3, pp. 1296-1300.
Blake, T. (1986). "Introduction to the Art and Science of User Interface Design." Intuitive Software and Interactive Systems, California.
Boar, B. H. (1983). "Application Prototyping: A Requirements Definition Strategy for the 80s." Wiley, New York.
Bodker, S., and Gronbaek, K. (1991). Cooperative Prototyping: Users and Designers in Mutual Activity. Int. J. Man-Mach. Stud. 34(3), 453-478.
Boehm, B. (1988). A Spiral Model of Software Development and Enhancement. IEEE Comp. 21(3), 61-72.
Boehm, B. W., Gray, T. E., and Seewaldt, T. (1984). Prototyping Versus Specifying: a Multiproject Experiment. IEEE Trans. Softw. Eng. SE-10(3), 290-303.
Boff, K. R. (1987). The Tower of Babel Revisited: On Cross-Disciplinary Chokepoints in System Design. In "System Design: Behavioral Perspectives on Designers, Tools, and Organizations" (W. B. Rouse and K. R. Boff, eds.), pp. 83-96. Elsevier Science Publishers, New York.
Boff, K. R. (1988). The Value of Research is in the Eye of the Beholder. Human Factors Soc. Bull. 31(6), 1-4.
Boff, K. R. (1990). Integrating Ergonomics into System Design. CSERIAC Gateway 1(2), 1-3.
Boff, K. R., and Lincoln, J. E., eds. (1988). "Engineering Data Compendium: Human Perception and Performance." (4 volumes) Armstrong Aerospace Medical Research Laboratory, Wright-Patterson Air Force Base, Dayton, Ohio.
Boff, K. R., Kaufman, L., and Thomas, J. (1986). "Handbook of Perception and Human Performance, Volumes 1-2." Wiley, New York.
Boff, K. R., Monk, D. L., Swierenga, S. J., Brown, C. E., and Cody, W. J. (1991). Computer-Aided Human Factors for Systems Designers. Proc. of the Human Factors Society 35th Annual Meeting, Vol. 1, pp. 332-336. Human Factors Society, Santa Monica, California.
Booth, P., and Marshall, C. J. (1989). Usability in Human-Computer Interaction. In "An Introduction to Human-Computer Interaction" (P. Booth, ed.), pp. 103-136. Lawrence Erlbaum Associates, Hillsdale, New Jersey.
Brown, C. M. (1988). "Human-Computer Interface Design Guidelines." Ablex, Norwood, New Jersey.
Bury, K. F. (1985). The Interactive Development of Usable Computer Interfaces. In "Human-Computer Interaction-Interact '84" (B. Shackel, ed.), pp. 343-348. Elsevier Science Publishers, New York.
Butler, K. A. (1985). Connecting Theory and Practice: A Case Study of Achieving Usability Goals. Proc. CHI '85 Human Factors in Computing Systems, pp. 85-88. ACM, New York.
Card, S., Moran, T., and Newell, A. (1983). "The Psychology of Human-Computer Interaction." Lawrence Erlbaum Associates, Hillsdale, New Jersey.
Carroll, J. M., ed. (1987). "Interfacing Thought: Cognitive Aspects of Human-Computer Interaction." Bradford Books/MIT Press, Cambridge, Massachusetts.
Carroll, J. M. (1989). Evaluation, Description, and Invention: Paradigms for Human-Computer Interaction. In "Advances in Computers," Vol. 29 (M. C. Yovits, ed.), pp. 47-77. Academic Press, New York.
Carroll, J. M. (1990). Task-Analysis: The Oft Missing Step in the Development of Computer-Human Interfaces. In "Human-Computer Interaction-Interact '90" (D. Diaper, D. Gilmore, G. Cockton, and B. Shackel, eds.), pp. 1051-1054. Elsevier Science Publishers, New York.
Carroll, J. M., ed. (1991). "Designing Interaction: Psychology at the Human-Computer Interface." Cambridge University Press, New York.
Carroll, J. M., and Campbell, R. L. (1986). Softening Up Hard Science: Reply to Newell and Card. Human-Computer Interaction 2(3), 227-294.
Carroll, J. M., and Rosson, M. B. (1985). Usability Specifications as a Tool in Iterative Development. In "Advances in Human-Computer Interaction," Vol. 1 (H. R. Hartson, ed.), pp. 1-28. Ablex, Norwood, New Jersey.
Carroll, J. M., Mack, R. L., and Kellogg, W. A. (1988). Interface Metaphors and User Interface Design. In "Handbook of Human-Computer Interaction" (M. Helander, ed.), pp. 67-85. Elsevier Science Publishers, New York.
Catterall, B. J., Harker, S., Klein, G., Notess, M., and Tang, J. C. (1990). Group HCI Design: Problems and Prospects. SIGCHI Bulletin 22(2), 37-41.
Chapanis, A. (1976). Engineering Psychology. In "Handbook of Industrial and Organizational Psychology" (M. D. Dunnette, ed.). Rand-McNally, Chicago.
Chapanis, A. (1986). A Psychology for our Technological Society: or A Tale of Two Laboratories. In "One Hundred Years of Psychological Research in America" (S. H. Hulse and B. F. Green, Jr., eds.), pp. 52-70. Johns Hopkins University Press, Baltimore, Maryland.
Chapanis, A. (1990). The International Ergonomics Association: Its First 30 Years. Ergonomics 33(3), 275-282.
Chapanis, A. (1991). To Communicate the Human Factors Message, You Have to Know What the Message Is and How to Communicate It. Part 1. Human Factors Soc. Bull. 34(11), 1-4.
Chapanis, A. (1992). To Communicate the Human Factors Message, You Have to Know What the Message Is and How to Communicate It. Part 2. Human Factors Soc. Bull. 35(1), 3-6.
Chapanis, A., Garner, W. R., and Morgan, C. T. (1949). "Applied Experimental Psychology: Human Factors in Engineering Design." Wiley, New York.
Chignell, M. H., and Waterworth, J. A. (1991). WIMPS and NERDS: An Extended View of the User Interface. SIGCHI Bulletin 23(2), 15-21.
Christensen, J. M. (1987). The Human Factors Profession. In "Handbook of Human Factors" (G. Salvendy, ed.), pp. 3-16. Wiley, New York.