ARTICLE IN PRESS
Int. J. Human-Computer Studies 60 (2004) 101–115
Using task analysis to improve usability of fatigue modelling software

Michael Paradowski*, Adam Fletcher

The Centre for Sleep Research, The University of South Australia, Level 5, The Basil Hetzel Institute, The Queen Elizabeth Hospital, Woodville Road, Woodville SA 5011, Australia

Received 16 September 2003; accepted 17 September 2003
Abstract

The design of any interactive computer system requires consideration of both humans and machines. Software usability is one aspect of human–computer interaction that can benefit from knowledge of the user and their tasks. One set of methods for determining whether an application enables users to achieve their predetermined goals effectively and efficiently is task analysis. In the present study, a task analysis was applied to the graphical user interface of fatigue modelling software used in industry. The task analysis procedure allowed areas of usability improvement to be identified and then addressed with alternate interface prototypes. The present method of task analysis illustrates a practical and efficient way for software designers to improve software usability, user effectiveness and satisfaction, by involving users in the design process.
© 2003 Elsevier Ltd. All rights reserved.
1. Introduction

The design of computer-based systems that are used by people is a complex endeavour (Williges, 1987). Systems that cannot be used intuitively often lead to an increase in the rate of error and a decrease in user acceptance (Johnson et al., 2000). Therefore, the field of human–computer interaction (HCI) has emerged to address the need for computers to meet, and indeed be designed to meet, user needs (Diaper et al., 1998).

*Centre for Applied Behavioural Science, The University of South Australia, Level 5, The Basil Hetzel Institute, The Queen Elizabeth Hospital, Woodville Road, Woodville SA 5011, Australia. Tel.: +61-8-82226624; fax: +61-8-82226623.
E-mail address: [email protected] (M. Paradowski).

1071-5819/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/j.ijhcs.2003.09.004
A commonly accepted criterion for assessing whether an application can be used intuitively is software usability. This rather broad concept encompasses areas such as efficiency of attaining solutions, rate of learning how to use the application, and the likelihood of error (Johnson et al., 2000). It is generally regarded as unproductive to present a user with a very powerful program (high utility) that is very difficult to use (low usability). High usability improves user acceptance and facilitates productive use of the software, regardless of a program's utility.

There is a strong case that the workflow of a software application should be designed with the user and their task in mind (Card et al., 1983; Nielsen, 1993). Indeed, the design of software should take into consideration the overall environment, or system, of which the software is one component (Diaper et al., 1998). This system is composed of the people who use the software as well as the sequence and nature of task demands. Furthermore, it is thought that better software systems may be designed with the information requirements of the human operator in mind (Richardson et al., 1998). To this effect, considerable literature has focused on the need to provide the software designer with techniques and methodologies that facilitate better graphical user interface (GUI) design (Shepherd, 1998).

A broad category of such techniques is that of task analysis. Task analyses focus on the cognitive activities, as well as the behaviours, that occur within human–computer systems (Militello and Hutton, 1998). They provide a means of assessing how well a GUI meets the cognitive, information and other requirements of the user (Ainsworth and Marshall, 1998). As such, the method provides a way to ensure that a system fully supports users and enables them to achieve their predetermined goals effectively and efficiently (Ainsworth and Marshall, 1998).
Some commentators have emphasized that, inevitably, the design or appraisal of any interface for a complex system will involve some form of task analysis (Ainsworth and Marshall, 1998). Despite considerable investigation of task analysis techniques within the research community, there is some evidence that they are not widely used in applied settings (Richardson et al., 1998). This may be due to the perception that task analyses are resource intensive and add an economic burden at the outset of the design process (Militello and Hutton, 1998). However, incorporating a task analysis in the software design process has demonstrated proven overall benefits in terms of user acceptance and speed of application use (Mantei and Teorey, 1988). Consequently, many variations of task analysis have emerged in an attempt to transfer the potential benefits of these techniques from the research community to the operational community (Militello and Hutton, 1998). Work in this area has focused considerably on providing analysts and designers with tools aimed at improving the usability, and also the utility, of software (Militello and Hutton, 1998; Shepherd, 1998). Indeed, texts that concentrate on usability "success stories", such as Wiklund's (1994) 'Usability in Practice', make much reference to the use of task analysis. A specific example used by Wiklund (1994) is the application of task analysis to an existing cruise booking software system; the procedure highlighted tasks that users deemed important but that designers had neglected (James, 1994).
In this way, the nature of the specific tasks that users can, and want to, perform strongly influences the utility of the application. Thus, it is fundamental that a usability analysis be based on, or arrived at in light of, an assessment of the utility or intended utility. Shackel (1991, p. 24) incorporates this notion in his definition of usability as the '…capability (in human functional terms) to be used easily and effectively by the specified range of users, given specified training and user support, to fulfil the specified range of tasks, within the specified range of environmental scenarios'.

Intuitively, the design of a software system that takes into account the existing tasks that users are faced with will be superior to a design that does not. For example, it is valuable to know how users have gone about achieving tasks in the past (in the absence of software support) and what information is most important to them in the process. Consequently, various task analysis techniques have been developed to determine user requirements. In an assessment of the effectiveness of human-factors techniques in system design processes, Mantei and Teorey (1988) found that task analyses can lead to a reduction in training costs and also to reduced errors. Such task analyses may be employed at the outset of what may be termed a user-centred design process (Diaper et al., 1998). Alternatively, a task analysis is often applied to an existing software–user system to quantify shortcomings and inefficiencies.

This study focuses on the second of these applications. Specifically, the present paper details an application of task analysis to an existing software program with the aim of determining areas for improvement. The version of task analysis employed in the study represents a feasible and accessible method for software developers to gauge an application for sources of potential improvement. As such, it is aimed at a technical audience that may have paid little attention to human factors.
The technique employed has been referred to as a Cognitive Walkthrough (Lewis et al., 1990; Abowd, 1995). Its main purpose is to identify the steps that a user takes to achieve a certain goal and to identify any inefficiency in this process. Potential areas of improvement may emerge in terms of the usability, and rate of learning, of an application (Johnson et al., 2000).

The software application that was the focus of the usability analysis was Fatigue Audit InterDyne (FAID™), which assesses patterns of work hours in order to predict levels of fatigue. The fatigue model incorporated in FAID™ is the result of extensive developmental and validation studies on the effects of work-related fatigue (see Fletcher and Dawson, 1997; Dawson and Fletcher, 2001; Fletcher and Dawson, 2001; http://faid.interdynamics.com). The software has been adopted by numerous industries and government regulators as one component of overall fatigue management systems. Specifically, operational planners have started to use FAID™ as an integrated part of their existing roster systems. Significantly, it was recognized that this type of user possesses significant task domain expertise and that the process of using the software should be integrated into their existing workflow.

The task analysis in the reported study was based on interviews with existing users. The aims were to quantify the tasks that users undertake when using the application, as well as the exact steps they take. It was performed in order to better understand
the specific aspects of the software that received frequent use and, correspondingly, which areas were underdeveloped or difficult to use and navigate.

The results of the task analysis were interpreted in terms of three abstracted levels of interface design. This framework, illustrated in Fig. 1, is adapted from Moran's (1981) Command Language Grammar framework for examining interaction within human–computer systems. Following such a framework would facilitate concrete formulation of recommendations for GUI improvement at each of the three levels. Analysis at the level of syntax was considered of primary importance. As seen in Fig. 1, syntax refers to the basic rules of communication. Consequently, communicated in the syntax of a GUI are things such as which parts of the interface the user may click on to have an effect, which parts of the GUI require a keyboard entry to be made, and how to move through the various 'stages' of analysis. Efficient syntax will ensure that communication is error free and that the user understands at all times what is happening within the application. Faulty syntax will almost certainly annoy and frustrate users and reduce the application's
[Fig. 1 appears here. It depicts three levels: the fatigue management task (goal level), where the solution is the use of software; the software application (semantic level), covering the information displayed (what, how, where and when), the user interacting via the interface, and information output; and the user interface/GUI (syntactic level), covering user actions (key press, mouse click, data entry), system response and feedback.]

Fig. 1. Levels of human–computer interaction applied to interface design. Based on Moran's levels of interaction (1981).
productivity (Williges et al., 1987). In providing a discrete description of the steps involved when a user works toward a desired goal, a task analysis can provide systematic evidence as to whether each task performed on the interface is being completed effectively or not (Ainsworth and Marshall, 1998). The task analysis employed in the reported study was an analysis of the syntactic level of the current system. The deficiencies at this syntactic level of interaction were then mapped upward through the levels of abstraction, to improve the fit between the users' existing conceptual model of the system and the GUI design at the semantic and, ultimately, the broad task level.

An added motivation for the present study is found in the applied nature of the software application FAID™. For example, in Australia there have been strong recommendations for the use of such modelling software to test certain work rosters for fatigue (House of Representatives Standing Committee on Communication, Transport and The Arts, 1999). This is in response to the perceived benefits to worker safety arising from fatigue management practices that assess hours of work for their contribution to fatigue. Given that FAID™ is in use within industry, and that it could play a significant role in accident prevention, it was deemed important to minimize user error as well as to overcome any resistance to the use of the software in the workplace.
2. Method

2.1. The software application

The 'stand-alone' application Fatigue Audit InterDyne (FAID™) is capable of predicting fatigue scores based on hours of work. The application was created by the Australian company InterDynamics. It incorporates a unique model of fatigue, developed in recent years (Fletcher and Dawson, 1997; Dawson and Fletcher, 2001; Fletcher and Dawson, 2001) and licensed from the University of South Australia. The application calculates a Fatigue Score, along with other performance indicators, for employees based on several inputs, the primary one being the hours of work. The software has been designed with one main task in mind (the analysis of work hours for levels of work-related fatigue), which made it a particularly attractive subject for a usability study. FAID™ users are all trying to reach the same goal; therefore the path to that goal should be made as streamlined as possible.

2.2. Participants

Four organizations participated in the study. BAE Systems (Flight Training section) and Airlines of South Australia (ASA) were two organizations from the Australian national aviation sector. BAE Systems has an international workforce of over 100,000 employees, while ASA employs approximately 50 people in South Australia. National Rail Corporation, one of Australia's largest rail-based freight carriers, employs over 1200 people, while Australian Railroad Group, Australia's largest private rail operator, employs over 1000 people throughout the nation.
The primary FAID™ user at each organization's Adelaide branch was consulted. Of the people consulted, all but one were observed using FAID™ in the course of normal operations; the other was consulted via written correspondence. All interviewees were male and occupied senior roster planning or managerial positions. All users were considered to possess intermediate computer proficiency, defined as (at least) a working knowledge of the Microsoft Office suite of products. All interviewees were found to use FAID™ at least once per week, for approximately 30 minutes each use.
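Section 2.1 notes that FAID™ calculates a Fatigue Score from hours of work. The licensed FAID model itself (Fletcher and Dawson, 1997) is not reproduced here; the sketch below is a deliberately simplified toy illustration of the general kind of calculation such software performs, weighting recent and night-time work hours. The 7-day window, the night-work multiplier and all other constants are chosen purely for illustration and bear no relation to the actual model.

```python
from datetime import datetime, timedelta

def toy_fatigue_score(shifts, at):
    """Toy fatigue indicator: sum work hours in the 7 days before `at`,
    weighting more recent hours and night-time hours more heavily.
    NOT the FAID model; all weights are illustrative only."""
    score = 0.0
    window = timedelta(days=7)
    window_start = at - window
    for start, end in shifts:
        # Clip each shift to the 7-day look-back window.
        s, e = max(start, window_start), min(end, at)
        if e <= s:
            continue
        hours = (e - s).total_seconds() / 3600.0
        # Linearly decaying recency weight: 1.0 for a shift ending now.
        recency = 1.0 - (at - e).total_seconds() / window.total_seconds()
        # Hypothetical penalty for shifts starting at night (22:00-06:00).
        night = 1.5 if s.hour >= 22 or s.hour < 6 else 1.0
        score += hours * recency * night
    return round(score, 1)

shifts = [
    (datetime(2003, 9, 1, 22, 0), datetime(2003, 9, 2, 6, 0)),  # night shift
    (datetime(2003, 9, 2, 22, 0), datetime(2003, 9, 3, 6, 0)),  # night shift
]
print(toy_fatigue_score(shifts, datetime(2003, 9, 3, 6, 0)))
```

A real model would, of course, rest on the validated sleep-science parameters cited above rather than these ad hoc weights.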
3. Procedure and instrument

Task analyses were performed on user data gathered during the user interviews. The task analysis began with a statement of the user-defined task. A complete list of actions needed to perform the task was then produced. Users were asked first to explain their primary goal when using the software. Users were then asked to go about using the software in their usual way. Furthermore, they were asked to 'think aloud' and verbalize what it was they were doing at each stage of use. The experimenter observed and interjected only when engaged directly by the participant or when clarification was deemed absolutely necessary. These interviews lasted between 30 and 50 minutes; they adhered to an interview pro-forma that was designed to maximize the amount and quality of information obtained (presented in Appendix A). In this way, notes were taken as to the steps (successful or unsuccessful) taken by typical users of the software in their organizational setting.

The task analysis involved carefully quantifying the number and type of steps the user employed in communicating via the interface in order to accomplish a specified task. The steps employed by each user were compared with those of all other users. The purpose was to arrive at a prototypical (generalized) task procedure. Steps common to the majority of users were included in the prototypical procedure. This was deemed an adequate method to employ, given the highly stereotyped nature of the software; that is, FAID™ supports one primary task that users can attempt to achieve in more, or less, effective ways. This prototypical task could be described in concrete steps based on the most common steps that users made during the walkthrough component of the interview. An analysis of the steps of the prototypical procedure then followed. This involved asking four questions of each step:
(1) Is the user trying to produce whatever effect the action actually has?
(2) Can the user notice that the correct action is currently available?
(3) Once the user finds a perceived correct action at the interface, will they know it is the right one for the effect they are trying to produce?
(4) After the action is taken, does the user get adequate feedback assuring them that the goal has been achieved? (Abowd, 1995).
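Applied mechanically, the four questions amount to a per-step checklist from which a problem report sheet can be generated. The sketch below illustrates this bookkeeping in Python; the step data and the yes/no answers are hypothetical stand-ins for illustration, not the study's actual findings.

```python
# The four Cognitive Walkthrough questions (after Abowd, 1995),
# paraphrased as pass/fail criteria for each step of the procedure.
QUESTIONS = [
    "User is trying to produce the effect the action has",
    "User can notice the correct action is available",
    "User knows the perceived action is the right one",
    "User gets adequate feedback that the goal was achieved",
]

def problem_report(steps):
    """Return a report sheet: one entry per step that fails any question.
    `steps` is a list of (step_number, action, [bool, bool, bool, bool])."""
    report = []
    for number, action, answers in steps:
        failed = [q for q, ok in zip(QUESTIONS, answers) if not ok]
        if failed:
            report.append({"step": number, "action": action, "failed": failed})
    return report

# Hypothetical answers for three steps of the FAID procedure:
steps = [
    (1, "Click <R1>",          [False, False, True, True]),  # function unclear
    (2, "Click <Paste>",       [True, True, True, True]),    # no problems
    (5, "Enter [Start Date]",  [True, True, False, True]),   # ambiguous meaning
]
for entry in problem_report(steps):
    print(entry["step"], entry["action"], "->", entry["failed"])
```

Steps answering every question in the affirmative (step 2 here) generate no report entry, mirroring the procedure described above.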
The first question was addressed based on user response to each action that they made. For example, at times users reversed an action immediately after performing it. The second question is related to the first; answers to it were effectively inferred based on the time taken between actions and the number of reversals. The third question is related to indecision regarding a desired action. The fact that subjects spoke aloud regarding their intentions and perceptions made it possible to answer this question. For the final question, user reaction and vocalization helped to uncover the effectiveness of the communication provided by the GUI feedback. A problem report sheet was generated for all steps that failed to answer a question in the affirmative.
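The inferences described above (long pauses before an action, or an action reversed immediately after being taken) can be sketched as a single pass over a timestamped action log. The log format, the `reversed` flag and the 10-second pause threshold below are assumptions for illustration, not part of the reported method.

```python
def flag_hesitations(log, pause_threshold=10.0):
    """Flag actions preceded by a long pause or reversed immediately after.
    `log` is a list of (timestamp_seconds, action, was_reversed) tuples,
    ordered by time; the format is an assumed one for this sketch."""
    flagged = []
    prev_t = None
    for t, action, was_reversed in log:
        pause = None if prev_t is None else t - prev_t
        if was_reversed or (pause is not None and pause > pause_threshold):
            flagged.append((action, pause, was_reversed))
        prev_t = t
    return flagged

# Hypothetical fragment of an observed session:
log = [
    (0.0,  "Click <R1>", True),           # reversed at once: question 1 fails
    (3.0,  "Click <Paste>", False),        # quick, confident action
    (25.0, "Click <Start Date>", False),   # 22 s pause: hard to find (question 2)
]
for action, pause, rev in flag_hesitations(log):
    print(action, pause, rev)
```

In practice the experimenters recorded these observations by hand; the sketch simply shows how the same two signals translate into flagged steps.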
4. Results

4.1. User-defined tasks

Users were asked what their primary goals are when using the FAID™ application. This 'organizational goal' information was recorded. Furthermore, in conjunction with observation of, and discussion with, users, an operationalized task was defined. The stated goals of participants when using FAID™ differed only marginally. Common responses were "to determine which potential rosters produce satisfactory levels of fatigue" and "to ensure that a potential roster does not include any shifts with dangerously high fatigue levels". Operationalized tasks were formulated as "to visually inspect Fatigue Score Plots for abnormalities and Fatigue Scores over 80" and "to produce hard copies of Fatigue Score Plots to be kept on record".

4.2. Verbalized problems

All users expressed some degree of difficulty when using FAID™. The existing method for the (essential) function of inputting roster data was deemed unsatisfactory by all users. Specific comments concerning the usability issues identified were as follows:
(a) the GUI was "not as intuitive" as most spreadsheets;
(b) the method for date and time entry was seen as "cumbersome";
(c) users failed repeatedly at using the 'add' input box to append shift times to the roster (see Fig. 2);
(d) it was deemed that the 'print' function did not yield adequate hard-copy output;
(e) a desire to be able to save the completed fatigue audit was expressed.

4.3. Prototypical task

As expected, the task analysis provided a means of quantifying how users go about analysing rosters for fatigue using FAID™. The prototypical FAID™ task was deduced from the interviews and observations of users. This step-by-step task procedure is presented in Table 1.
Fig. 2. The method for appending shifts to the roster was seen as difficult.
Task analyses were carried out on this task procedure. A problem report sheet was generated for the prototypical task procedure based on the results of the task analysis. Steps that produced negative answers to any of the four questions of the Cognitive Walkthrough are summarized in Table 2. The issues identified in Table 2 relate primarily to syntactic and semantic problems with the user interface. An example of failure to generate affirmative answers to the Cognitive Walkthrough questions is provided in Fig. 3. In the figure, users click 〈OK〉 when prompted in (a), and the next screen they see is (b). The second (output) screen provides no information as such, only a selection of apparently ambiguous icons. Users made several selections before finding the effect on the interface that they were looking for (the Fatigue Plot screen, accessed via the icon immediately to the right of the word 'Individuals').

4.4. GUI improvements

Based on the usability issues identified, alternate interface prototypes were designed. The method employed was to use a common development language to build mock-up interfaces. These interfaces illustrate changes to the interface design that would alleviate a particular usability problem that had been identified. Fig. 4 presents an example of such a GUI prototype. This alternate design incorporates a change to the method for input of data, which had been identified as an issue. This design formed the
Table 1
Step-by-step prototypical task

Prototypical task: To perform a fatigue audit of a work roster in order to confirm adequate fatigue levels.

Procedure steps:
Start FAID

[Input screen]
1. Click on 〈R1〉
2. Click on 〈Paste〉 to copy a roster (from Excel or roster creation software) into the FAID table
3. Click 〈Analysis〉 tab

[Analysis screen]
4. Click 〈Start Date〉 field
5. Enter a [Start Date] using the keyboard
6. Click 〈Period〉 field
7. Enter a [Period] of time for the analysis (in weeks)
8. Click 〈Analyse〉 icon

[Output screen]
9. Click 〈Fatigue Score Plot〉 icon to view graphical output
10. Click 〈View Plot Full Screen〉 to enlarge the output
11. Click 〈File〉 in the upper menu bar
12. Click 〈Print〉 to send the screen output to the printer
13. Click 〈Back〉 to return to examining the fatigue score plots

Steps 10–12 would be repeated for each employee in the roster.

Table 2
GUI issues uncovered by the task analysis (step number, action, problem)

Step 1, 〈R1〉: Function is not immediately clear. Item is superfluous and should be replaced.
Step 4, 〈Start Date〉: It is not obvious how to gain access to this field. An example of lack of consistency in the GUI syntax.
Step 5, [Start Date]: Ambiguous, because FAID needs 7 days of work history before actual analysis begins. A potential source of error.
Step 9, 〈Fatigue Score Plot View〉: Should be shown automatically as primary output. Is not differentiated from a range of similar icons despite its importance.
Step 11, 〈Print〉: No icon. The menu allows a 'screen print' which displays irrelevant information. Hard-copy output is a vital goal. A prominent print icon that yields detailed information is needed.
basis for an actual re-design of the software interface (seen in Fig. 4b), which received positive feedback from users. Further changes to the interface and functionality of FAID™ were also made by the developers of the software, in line with the results of the usability study.
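As an illustration of the kind of data-entry redesign discussed, pasted spreadsheet rosters (step 2 of Table 1) could be parsed directly into validated shift records, reporting errors line by line rather than failing silently. The tab-separated two-column layout and the date format below are assumptions made for this sketch, not InterDynamics' actual implementation.

```python
from datetime import datetime

def parse_roster(clipboard_text, fmt="%d/%m/%Y %H:%M"):
    """Parse tab-separated 'start<TAB>end' rows (as pasted from a
    spreadsheet) into (start, end) datetime pairs. Returns the list of
    valid shifts and a list of (line_number, message) errors.
    The column layout and date format are assumed for illustration."""
    shifts, errors = [], []
    for lineno, row in enumerate(clipboard_text.strip().splitlines(), start=1):
        fields = row.split("\t")
        if len(fields) != 2:
            errors.append((lineno, "expected 2 columns"))
            continue
        try:
            start, end = (datetime.strptime(f.strip(), fmt) for f in fields)
        except ValueError as exc:
            errors.append((lineno, str(exc)))
            continue
        if end <= start:
            errors.append((lineno, "shift ends before it starts"))
            continue
        shifts.append((start, end))
    return shifts, errors

# One valid row and one row with a malformed end time:
pasted = "01/09/2003 22:00\t02/09/2003 06:00\n02/09/2003 06:00\tbad value"
shifts, errors = parse_roster(pasted)
print(len(shifts), len(errors))
```

Collecting errors per pasted line, rather than rejecting the whole paste, is one plausible way to address the "cumbersome" date and time entry that users reported.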
Fig. 3. A GUI screen sequence, (a) is followed by (b) when the user clicks OK. The interface screen at (b) failed the Cognitive Walkthrough questions.
Fig. 4. GUI Prototype mock-up design addressing data input issues uncovered (a), and subsequent release version of the software (b).
5. Discussion

As expected, the task analysis and Cognitive Walkthrough identified concrete recommendations for improvement of the FAID™ user interface at the syntactic level. As such, this case study has demonstrated one systematic approach to improving software usability. The task analysis, directed at the semantic level of Moran's (1981) framework, allowed areas of improvement in the application's syntactic structure to be identified. Furthermore, in terms of this framework as represented in Fig. 1, improvements at the syntactic level may indeed result in improved or strengthened semantic content. This, in turn, ultimately leads to improvements in functionality and overall workflow as represented at the task level.

Through improvement of syntax, the application will be easier for new users to learn and will perform consistently as expected. By ensuring that the rules of interaction are internally consistent, as well as consistent with other software applications, a maximally productive human–computer interaction is made possible (Williges et al., 1987). Specifically, examination of the task analysis and use of the Cognitive Walkthrough led to the identification of areas of inconsistent GUI syntax. Important aspects of the interface, such as the opening screen and the first presentation of outputs, were found to possess shortcomings. It is intended that, by adjusting the design of the user interface based on the results of the task analysis problem report sheet, user error and frustration will be reduced. Additionally, through improved consistency, the application learning time can also be reduced, thereby promoting adoption of the FAID™ software. This is seen as an important goal, especially in light of the current work-safety regulatory climate in Australia, which may extend internationally.
In addition to the focus areas for usability improvement identified on the task analysis problem report sheet, examination of the prototypical task can lead to improvements in interface design. For example, given the finding that producing printed output was a significant priority for users, subsequent designs may aim at improving this process. In fact, the design of FAID™ has since incorporated a print icon and a new section of the GUI featuring sophisticated printing options. Furthermore, the discovery, arising from user consultations, that appending to the roster was too 'cumbersome' has seen a redesign of this method.

The present usability study was able to make concrete suggestions for improving the usability of a popular fatigue modelling software program. Several changes to the software have been made based directly on the findings and prototypes generated from this study. However, in terms of methodology, it would appear that the four questions asked in the Cognitive Walkthrough had a high degree of overlap in the type of problem they uncovered. Future work might consider a revision of these questions based on principles of effective communication and feedback. Furthermore, in cases where software might feature broader task support and, consequently, much more distinct user tasks with little in common, the formulation
of a prototypical task procedure becomes difficult. Each individual task would become the subject of a Cognitive Walkthrough, with the results of the analyses then requiring formulation into a concise set of recommendations. The task analysis employed within this study has demonstrated an effective method of software usability analysis. Such a low-cost, non-resource-intensive method may be employed at any stage of the software development cycle. This represents a step toward the human-factors-centred design of software applications.
Acknowledgements Michael Paradowski would like to thank Professor Drew Dawson and in particular Dr. Adam Fletcher for their advice and assistance. He would also like to thank the entire staff at InterDynamics Adelaide for their cooperation and support. Thanks also go to Dr. Doug Seeley for his valuable guidance. Support: The research that forms the basis for this paper was conducted as part of a scholarship awarded to M. Paradowski by The TQEH Research Foundation, NWAHS, South Australia.
Appendix A

The user interview guide is shown in Table 3.

Table 3
User interview guide

FAID information
- Version no.
- How long have you been using FAID™?
- How often do you use the application?
- How have you found learning the application?

Organizational needs
- Why does your company use FAID™?

FAID goal
- What is your aim when using FAID™?

User
- Who is using the software?
- Do you use the help screen? How often? Are instructions ok?
- Do they need to refresh their skills each time they use the application?
- How quick is their job turnover (and therefore training) rate?

Interface
- Can you walk through a typical use of the program? [Actions performed are recorded in order on separate form]
- What do you like about FAID™? Dislike? Annoying?
Table 3 (continued)
- Which buttons do you use most often?
- Which functions are used most often in combination?
- Do you know what the clear icon is/looks like? Do you use the clipboard icon?
- Do you enter input data easily? Often? Using which functions?

Outputs
- How do you sort your output? Which view is best for your purposes?
- What is the end output? Printed page? Do you read the changes off the screen? Print out altered roster?
- Does the graphic/textual output provide all that you need?
- Do you alter Risk Targets? Do you implement these changes in work practice?
- Do you use FAID™ to increase minimal fatigue scores as well as eliminate risk levels?
References

Abowd, G., 1995. Performing a cognitive walkthrough. In: Lecture notes for CS 3302, Introduction to Software Engineering. Georgia Tech College of Computing, Atlanta, GA. URL: http://www.cc.gatech.edu/computing/classes/cs3302/documents/cog.walk
Ainsworth, L., Marshall, E., 1998. Issues of quality and practicability in task analysis: preliminary results from two surveys. Ergonomics 41 (11), 1607–1617.
Card, S.K., Moran, T.P., Newell, A., 1983. The Psychology of Human–Computer Interaction: Applying Psychology to Design. Lawrence Erlbaum Associates, New Jersey, pp. 403–424 (Chapter 12).
Dawson, D., Fletcher, A., 2001. A quantitative model of work-related fatigue: background and definition. Ergonomics 44 (2), 144–163.
Diaper, D., McKearney, S., Hurne, J., 1998. Integrating task and data flow analyses using the pentanalysis technique. Ergonomics 41 (11), 1533–1582.
Fletcher, A., Dawson, D., 1997. A predictive model of work-related fatigue based on hours of work. Journal of Occupational Health and Safety—Australia and New Zealand 13 (5), 471–485.
Fletcher, A., Dawson, D., 2001. A quantitative model of work-related fatigue: empirical evaluations. Ergonomics 44 (5), 475–488.
House of Representatives Standing Committee on Communication, Transport and The Arts, 1999. Beyond the Midnight Oil: An Inquiry into Managing Fatigue in Transport. Commonwealth of Australia, Canberra.
James, J.S., 1994. American Airlines. In: Wiklund, M.E. (Ed.), Usability in Practice: How Companies Develop User-friendly Products. AP Professional, Boston, MA, pp. 359–388 (Chapter 12).
Johnson, C.M., Johnson, T., Zhang, J.J., 2000. Increasing productivity and reducing errors through usability analysis: a case study and recommendations. Proceedings of the AMIA Symposium, pp. 394–398.
Lewis, C., Polson, P., Wharton, C., Rieman, J., 1990. Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proceedings of the ACM CHI'90 Conference, Seattle, WA, 1–5 April, pp. 235–241.
Mantei, M.M., Teorey, T.J., 1988. Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM 31 (4), 428–439.
Militello, L.G., Hutton, R.J.B., 1998. Applied cognitive task analysis (ACTA): a practitioner's toolkit for understanding cognitive task demands. Ergonomics 41 (11), 1618–1641.
Moran, T.P., 1981. The command language grammar: a representation for the user interface of interactive computer systems. International Journal of Man–Machine Studies 15, 3–50.
Nielsen, J., 1993. Usability Engineering. Academic Press, San Diego, CA.
Richardson, J., Ormerod, T.C., Shepherd, A., 1998. The role of task analysis in capturing requirements for interface design. Interacting with Computers 9, 367–384.
Shackel, B., 1991. Usability—context, framework, definition, design and evaluation. In: Shackel, B., Richardson, S. (Eds.), Human Factors for Informatics Usability. Cambridge University Press, Cambridge, UK, pp. 21–37.
Shepherd, A., 1998. HTA as a framework for task analysis. Ergonomics 41 (11), 1537–1552.
Wiklund, M.E., 1994. Usability in Practice: How Companies Develop User-friendly Products. AP Professional, Boston, MA.
Williges, R.C., 1987. The use of models in human–computer interface design. Ergonomics 30 (3), 491–502.
Williges, R.C., Williges, B.H., Elkerton, J., 1987. Software interface design. In: Salvendy, G. (Ed.), Handbook of Human Factors. Wiley, New York, pp. 1416–1449.