The Journal of Systems and Software 45 (1999) 233–240

Usability inspection in contract-based systems development – A contextual assessment

Torbjörn Näslund a,1, Jonas Löwgren b,*

a Astra Arcus, Södertälje, Sweden
b School of Art and Communication, Malmö University College, Sweden

Received 1 April 1998; accepted 22 October 1998

* Corresponding author. Stallberget 3, 57595 Eksjö, Sweden. E-mail: [email protected].
1 E-mail: [email protected].
1. Introduction

With the proliferation of computers and computing, there is an increasing interest in computers that are easy to learn, easy to use and, more generally, serve as appropriate tools for the tasks they are applied to. In other words, usability is expected. This is the domain of human-computer interaction (HCI), a rapidly growing field in terms of scientific research as well as commercial markets. HCI started out as a largely experimental discipline, growing out of experimental psychology and oriented towards describing and understanding the interaction between human and computer. However, more and more effort was directed at bringing this knowledge to bear in developing systems that would be better for the user. Usability-oriented systems development is now one of the main directions within HCI. Methods and techniques are the main vehicles for disseminating HCI knowledge to systems development practice and thus facilitating the development of more usable systems.

A fundamental assumption in our work is that methods are best understood in context. Assessing a normative method description is of rather limited interest compared to studying how the method is in fact used in real-life systems development projects. This means that we rely on field research and, more importantly, that our studies address the use of methods in their context, i.e., industrial systems development. It also means that our observations and conclusions are not confined to refinements of a method per se, but also concern how the method is used by the actors in the systems development process. However, the method
provides the focal point for the study. Our findings and suggested improvements are most relevant for contexts where the particular method is used.

2. Usability inspection

There is a wide variety of usability-oriented systems development methods and techniques, based on different perspectives of usability (see Löwgren (1995) for a survey). One common trait of most methods is the recognition that iterative development processes are necessary. On a very general level, the ``heart'' of usability-oriented systems development can practically always be described as a design-evaluation cycle. A concept or a prototype is designed, evaluated formatively with respect to usability, and the results from the evaluation are used to improve the design (Fig. 1).

Fig. 1. Prototyping cycle.
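The cycle can be rendered schematically as a loop. The sketch below is our own illustration in Python; evaluate, redesign and acceptable are placeholders for human design activities, not implementable routines. It makes the structure explicit, including the assumption, revisited below, that the loop eventually terminates in an acceptable design.

```python
def prototyping_cycle(initial_design, evaluate, redesign, acceptable):
    """Schematic rendering of the design-evaluation cycle of Fig. 1.

    The callables stand in for human design activities: evaluate is a
    formative usability evaluation, redesign revises the design in the
    light of the findings, and acceptable judges whether usability is
    adequate. Nothing here is an implementable method.
    """
    design = initial_design
    findings = evaluate(design)
    while not acceptable(findings):   # Assumption 3 below: this converges
        design = redesign(design, findings)
        findings = evaluate(design)
    return design
```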
Usability evaluation methods can be divided into two broad classes: empirical, in which the design is tested with representative users, and analytical, where the usability of the design is assessed without users. An excellent introduction to empirical methods, also known as usability testing, is found in Dumas and Redish (1993). Analytical methods, frequently called usability inspection, are surveyed in Nielsen and Mack (1994). It is generally held that the two classes of methods are complementary; they have different strengths and weaknesses, and typical development methodologies recommend the use of both. The focus of our study, and the topic for the rest of the paper, is the use of usability inspection methods in industrial contract-based systems development.

The basic idea of usability inspection methods is to predict the usability of a system or prototype without having to test it with the prospective users. There are generally two sources of knowledge for such predictions: theories of HCI, and experience. Theory-based methods typically embody specific models of how humans in general perceive, think and act when carrying out a task using a computer. Such models can be used for evaluation by simulating execution of the intended tasks using the proposed system design. Experience-based inspection methods build upon accumulated observations of design features leading to usability problems. A well-known method is heuristic evaluation, in which one or more ``usability experts'' independently scrutinize a proposed design for potential usability problems. The experts structure their findings according to 10 or 12 general design rules for usability. Collections of design rules for usability, so-called guidelines or style guides, are frequently used to facilitate usability inspection. The rules are often packaged as checklists, intended for use by designers without specific usability expertise. The design rules occupy a middle ground between theory and experience, in that some rules are encapsulated theoretical or experimental findings whereas others are more experiential.
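To make the experience-based approach concrete, the record-keeping behind a heuristic evaluation can be sketched roughly as follows. This is our own minimal illustration: the heuristic names echo Nielsen's well-known list, but the severity scale and the merging rule are assumptions, not a format prescribed by the method.

```python
from dataclasses import dataclass

# Illustrative design rules in the spirit of Nielsen's heuristics;
# the exact list (10 or 12 rules) varies between method variants.
HEURISTICS = (
    "visibility of system status",
    "match between system and the real world",
    "consistency and standards",
    "error prevention",
    "recognition rather than recall",
)

@dataclass(frozen=True)
class Finding:
    """One potential usability problem noted by one evaluator."""
    description: str
    heuristic: str   # which design rule the problem is filed under
    severity: int    # assumed scale: 1 (cosmetic) .. 4 (catastrophic)
    evaluator: str

def merge_findings(findings):
    """Combine the evaluators' independent lists into a single problem
    list, keeping the highest severity assigned to each finding."""
    merged = {}
    for f in findings:
        key = (f.description, f.heuristic)
        if key not in merged or f.severity > merged[key].severity:
            merged[key] = f
    return sorted(merged.values(), key=lambda f: f.severity, reverse=True)
```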
3. Underlying assumptions

The existing body of knowledge about methods for usability inspection shares a set of underlying assumptions, in common with many other proposed methods in usability-oriented systems development. The assumptions are as follows.

Assumption 1 (Identifying possible problems is what counts). Comparisons of methods for formative usability evaluation are typically based on the assumption that the more usability problems that can be identified with the help of the method, the better the method. In addition, the judged severity of the problems, the possibilities of early identification, and the cost of identification are taken into account. Once the usability problems are found, they are communicated to the design team, which is assumed to make the necessary changes to the proposed design. Usability inspection only predicts usability problems. By measuring the goodness of evaluation methods in terms of the number of predicted problems, there is even a risk of rewarding ``false positives'', i.e. predicted usability problems which later, during system use, turn out not to pose any problems at all for the users. The underlying rationale seems to be that inspections should find anything that may possibly reduce usability.
Assumption 2 (Exploration and experimentation are important). Usability inspection can be used also in early design stages, since it does not require an implemented prototype. The iterative shift between design and evaluation, as illustrated in Fig. 1, is initially to be used for exploration of general alternatives, and then to further investigate the consequences of selected alternatives. Such a process gradually leads to a better understanding of the design space in general, as well as of the consequences of specific design alternatives.

Assumption 3 (The process is convergent, resulting in a usable system). The predicted usability problems reported to the designers are supposed to lead to design revisions. In the next round of the iteration, fewer deficiencies are supposed to be found. The design is supposed to gradually converge towards a system with adequate usability.

4. The study

Most research on formative usability evaluation methods has focused on comparing the strengths and weaknesses of different methods (see the surveys by Desurvire and Karat in Nielsen and Mack, 1994). There is a shortage of scientific knowledge concerning the actual use of the methods in industrial systems development. In our own previous pilot studies of usability inspection in iterative systems development (Näslund, 1992), we observed an alarming tendency that the iterative process did not converge. Even though the usability evaluations indicated what must be characterized as important findings, the redesigned prototypes showed only limited improvements from a usability point of view. The present study was designed to explore this phenomenon and gain a better understanding of why the potential usability problems detected were not effectively used by the designers during redesign.

The study was set up as a case study in collaboration with a large development company in the field of command and control systems.
It focused on a contract-based project in which the demands for usability were very high, both from the customer and from the contractor. In our opinion, these demands were appropriate, given the nature of the intended use situations, characterized by short learning times and demanding tasks with high productivity requirements. The fixed-price contract was based on a requirements specification which stated functional requirements on the technical artifact, but had only superficial coverage of non-functional requirements and the intended use of the system. In short, the project was a representative case of what Grudin (1991) labels contract development, as opposed to product or in-house development.

The size of the development team was planned to vary between 5 and 10 system developers in different phases; the planned duration of the project was one year. Upon request of the contractor, two user representatives were assigned by the customer to participate in the project. These two user representatives had earlier played an active role in the creation of the requirements specification. The project was planned with a ``design and prototyping'' phase, in which prototypes were to be iteratively designed, discussed with user representatives and improved. This phase was planned for four months, but effectively lasted a total of ten months.

Our participation in the project was planned as an extension to existing work practices, in that one of us (Näslund) would perform experience-based usability inspections of the prototypes, drawing on his expertise in usability as well as in the application domain addressed by the project. He would then provide the design team with his findings. Thus, the feedback in the iterative prototyping cycle was extended to comprise not only comments from user representatives but also specific usability issues.

5. Research method

Due to the explorative nature of the research questions, it was deemed appropriate to perform a qualitative and interpretative study over an extended period of time. We studied the design and prototyping phase throughout its ten-month duration. Näslund played dual roles in the study: he contributed to the development as a usability evaluator, and he studied the handling of usability issues within the development organization as a participant observer. Through this dual role, a high level of access to the development process and to the participants' reasoning about usability was achieved. This gave us an excellent opportunity to collect several kinds of research data:
· field notes of observations from meetings and from the daily work at the design level;
· interviews with people in and around the project, often performed as unstructured interviews immediately after design meetings;
· discussions of usability issues between the usability evaluator and the designers;
· tangible products of the design process: prototypes, sketches and documents.

When the fieldwork was completed, the data were compiled into a chronological summary and classified using an open coding scheme. Patterns were identified and a sense-making framework was constructed in an iterative and interpretative process.

6. Results

Three main themes emerge from our analysis of the research data. First, we found that only certain kinds of the potential usability problems reported by the evaluator were taken into account by the designers. Secondly, the designers seemed to reject an explorative mode of operation in spite of the overt commitment to an iterative process. Finally, we found a very clear separation between roles in the project.

6.1. Limited impact of usability evaluations

The findings from our pilot studies were borne out in that the potential usability problems identified by the usability evaluator had very limited impact on the design. To be more specific, we found that there is a very narrow window of opportunity for the evaluator to get his points across in the iterative prototyping process. The critical dimension of the window is the nature of the potential problems.

Evaluation findings concerning the relevance, appropriateness or usefulness of the design were found to be disregarded by the designers. Examples of such potential problems were: What functions should the system provide the user with? How does the system fit with the users' existing work practices? How are system design decisions coordinated with the design of education, user handbooks and other supplementary information?

Findings on the user interaction level were well received and addressed in subsequent redesigns. It was also possible to observe a certain amount of learning taking place, in that mistakes of a certain kind that had once been pointed out in an evaluation report were not repeated. ``Successful'' findings typically concerned interaction techniques and choice of widgets, screen layout in relation to task ordering, deviations from style-guide rules, and appropriate use of the options provided by the user-interface design tool.

Finally, a certain class of potential usability problems, such as spelling errors, inappropriate terminology and strange abbreviations, were seen as cosmetic issues of
limited importance, possible to postpone until later. Issues of visual balance and structure as well as the visual grouping of interface objects were also dismissed as cosmetic.

6.2. Rejecting exploration

At the inception of the project, there was uniform agreement concerning the importance of usability in the final product. There was also a strong commitment to an iterative approach, where prototyping would be used as a way to increase the chances that the right product was built. However, it turned out that the designers did not explore the ``space'' of possible designs to the extent that one might expect. Fig. 2 illustrates our analysis of the factors behind this observation.

Fig. 2. Factors contributing to limited design exploration in the studied systems development process. (Excerpt from the sense-making scheme produced in the interpretative process.)

We could identify three main factors underlying the observable lack of exploration. The first, and perhaps most predictable, factor was that productivity in terms of tangible output was valued more highly than exploration: the designers measured themselves (and were measured) in terms of finalized design decisions and written documents, rather than in terms of the number of alternative designs explored.

Secondly, the notion of prototyping as an opportunity for the users to contribute constructively was replaced with prototyping as demonstration. A prototype covering a small part of the system was demonstrated to the users. As usual, the designers were running the prototype; the pace of interaction was quite rapid and it must have been hard for the users to see and understand the details. When the users commented on design flaws, they were repeatedly told that their comments were out of place at this early stage of development. The evaluator found this work practice inappropriate and discussed it with the designers, who replied that they were afraid that the users might introduce new ideas if allowed to see and discuss different alternatives. The designers used prototypes as a way to get superficial verification from the user representatives rather than constructive feedback on issues concerning relevance and work practices. The designers were also very reluctant to involve users in discussions of design ideas externalized as prototypes. They were afraid both that the design would drift away from what was specified in the contract and that options mentioned in discussions would be seen as promises of system features to deliver.

Finally, the project contained several obstacles to the explorative way of working. The requirement specification stated that the ``message line'' should (1) be one single line, but expandable to four lines and further to ``the size of the full message field'', and (2) be scrollable if it contained several messages. It is difficult to find a solution satisfying both requirements, particularly under the further constraints of an existing style guide and a fairly restrictive implementation tool. The researcher's empirical observations of the use of an older system revealed the rationale behind
the requirement: In the older system, messages were frequently presented to the user on a single line. A new message often appeared before the user had been able to react to the previous one. The requirements cited above can be understood as a plea from the users to make it possible to handle important messages.

Since the designers did not have this knowledge, they tried to satisfy the requirements rather than the underlying need. In effect, the requirements caused discussions of how to create a field that is both expandable and scrollable, but blocked out many better solutions (such as reducing the heavy message load, or designing a more appropriate message browser).
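To make the dilemma concrete, the widget that the requirements literally describe can be sketched as below (Python/Tkinter, obviously not the project's actual toolkit; all names are our own illustration). Note that the sketch satisfies the letter of both requirements while doing nothing about the underlying need, the heavy message load, which is exactly the trap the designers were caught in.

```python
import tkinter as tk

class MessageLine(tk.Frame):
    """A message field that is one line by default, expandable to four
    lines (or to the full message area), and scrollable: the literal
    reading of requirements (1) and (2) discussed above."""

    def __init__(self, master, full_height=12):
        super().__init__(master)
        self.full_height = full_height
        self.text = tk.Text(self, height=1, wrap="none", state="disabled")
        self.scroll = tk.Scrollbar(self, command=self.text.yview)
        self.text.configure(yscrollcommand=self.scroll.set)
        self.text.pack(side="left", fill="both", expand=True)
        self.scroll.pack(side="right", fill="y")

    def post(self, message):
        # Append new messages; older ones remain reachable by scrolling,
        # so nothing is silently overwritten.
        self.text.configure(state="normal")
        self.text.insert("end", message + "\n")
        self.text.configure(state="disabled")
        self.text.see("end")

    def expand(self, lines=4):
        # Requirement (1): grow from one line to four, or further to
        # the size of the full message field.
        self.text.configure(height=min(lines, self.full_height))

if __name__ == "__main__":
    root = tk.Tk()
    line = MessageLine(root)
    line.pack(fill="both", expand=True)
    line.post("TRACK 042 LOST")
    line.expand()
    root.mainloop()
```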
Questionable limitations and constraints were introduced from several sources, and the designers did not have a clear enough vision of the system and its use. We also found it hard to perform adequate usability evaluations, since the goals as well as the prototypes to evaluate were sometimes inadequate.

6.3. Separation of roles

We found the project organization to be very strongly partitioned into two levels: the design level, where the discourse concerned the future system and its use, and the management level, where the project was seen as a business agreement. The designers acted on the design level, as did the user representatives and the usability evaluator. The actors on the management level were the customer and the contractor (Fig. 3).

Fig. 3. The different roles in the studied project, divided into two levels.

The management level emphasized a clear division between the procuring and delivering organizations, and the formal use of the requirement specification as a contract to regulate the business relations between the parties. This view was forced upon the design level, which led to situations where the designers and the user representatives had partly different information but felt that it was not appropriate to inform anybody from the other side.

When realizing that something would be difficult to implement, the designers would not tell the users, because that was seen as giving away information that could be used against them in negotiations. For example, a designer said during a design meeting: ``It will be really hard for us to make it work in this way. We will have to look for alternatives, but remember not to mention this problem to the users''.

The users would not inform the designers about possibly relevant factors, since they had learned that the designers ask when they want information. The users also felt that they did not understand the design process well enough to give information when it would be useful to the designers. One of the user representatives said in an interview: ``We try our best to be helpful and answer all the questions we get to the best of our knowledge ... We are sorry that the designers did not allow us to present our visions in plenary briefings earlier in the project. It would have been easier to give the full picture''.

The strong separation of roles was perceived as particularly annoying by the user representatives; it is also
easy to see how it affects the ideals of constructive user participation in an iterative development process. In cases where the designers and the user representatives did not agree on the interpretation of the written requirements, the issues had to be ``sent up'' to the management level instead of being dealt with on the design level. At the management level, they were reinterpreted as legal issues and resolved in a formal manner. For a more exhaustive discussion of the different interpretations held by different actors, refer to Näslund (1996).

We could also observe situations where the contractor made promises on the managerial level about intermediate deliverables. The intention was to deliver intermediate results from the development process, but the designers still had to spend a fair amount of seemingly wasted time to create the deliverables.

The designers were committed to what they expressed as a service-minded attitude: give the users what they (say they) want. When discussing a potential usability problem, one of the designers told the usability evaluator that he was probably right, but that the design team had already discussed the issue with the users and agreed to do it in the original way. The evaluator moved on to discuss the particular issue with the user representatives and sketched a design rationale to illustrate the criteria for and against various alternatives. One of the user representatives remarked: ``Oh, now I understand the issue ... This is a great picture. It's like a mind map, showing what is important to consider ... You know, we can't do this kind of analysis ourselves. We have to trust the designers' expertise in design; we tell them about the things we are experts in''. Later, the head of the design team said to the evaluator: ``We brought you into the project because we wanted you to help us with the design ... We have noticed that when you have had a discussion with the users, they change their minds ... Hence, we can't allow you to meet the users anymore''. The customer, at the design level represented by the user representatives, should get what was ordered. The usability evaluator was seen as an internal advisor, and should hence stay away from the user representatives.

7. The assumptions revisited

The results may not seem surprising. However, a comparison with the assumptions underlying usability inspection reveals interesting differences.
Assumption 1 (Identifying possible problems is what counts). The field study showed that only some of the identified usability problems led to design revisions. Several reasons for this could be identified:
· Usability findings which questioned the requirements specified in the contract could not be handled at the design level, but needed to be sent up to the management level. This illustrates Buie and Winkler's (1994) notion of a split between acquisition and operation in the procuring organization. In our case, such issues were typically rejected by the designers as being inappropriate for consideration, since they would cause disturbances outside the designers' control.
· The user representatives' opinions were considered by the designers as overriding the usability evaluator's findings. The designers wanted to be service-minded vis-à-vis the user representatives, who would later have a powerful role in the acceptance of the final delivery. However, a comparison between the issues brought up by the user representatives and by the usability evaluator showed substantial differences. In brief, the user representatives had excellent knowledge of the application domain, but their interpretations were affected by their experience of an existing computer application. They had severe difficulties in predicting the effects of other alternatives.
· The designers' practice enforced a distinction between design issues and ``cosmetic'' issues, where the latter could be disregarded in the design and prototyping phase.

Hence, it is not sufficient for usability inspection to predict a large number of future usability problems. The field study illustrates how and why the designers regard only a limited set of the findings as appropriate to act upon.

Assumption 2 (Exploration and experimentation are important). The identified obstacles to design exploration were clearly at odds with the assumption that the design space should be explored and that different design options should be tried out. The contract, which was based on a specification of functional requirements, divided explorative activities into two parts: before and after the formulation of the contract. It imposed severe restrictions on the designers' possibilities to explore the design space. However, the contract did not transfer the knowledge gained in early design exploration. Hence, the designers were constrained, without knowledge of the rationale for the constraints and without the necessary visions for the system and its use. In Grudin's (1991) terminology, the contract imposes a ``wall'' between users and designers. In our case, the effect was that when the usability inspection yielded knowledge about the inappropriateness of a certain design option, the designers lacked a suitable framework for integrating this additional knowledge.
Assumption 3 (The process is convergent, resulting in a usable system). The field study showed that aspects other than predicted usability take precedence in design decisions. In essence, this can be seen as a clash between two different rationalities: on the one hand, the rationality embodied in usability inspection methods, according to which the development process should converge on a design with high predicted usability; on the other hand, the rationality underlying the studied project, where adherence to the contract and other promises made to the customer were seen as far more important for the outcome of the project than the future usability of the system. The observed rationality attempted to make the design process converge towards what was likely to be judged appropriate at the time of delivery of the system to the customer, rather than towards what would be appropriate during use.

8. Discussion

It is clear that there are problematic discrepancies between the actual use of usability inspection and its underlying assumptions in the case we studied. To summarize, one significant difference is that finding usability problems is necessary but not sufficient for improving usability (Assumption 1). Another significant difference is that the method presupposes an explorative stance, whereas the contract-project context enforces strong restrictions in terms of phase-wise closure and ``moving forward'' (Assumption 2). Moreover, the method assumes design-for-use, which entails focusing on the future users of the system and their needs, whereas the project context becomes more one of design-for-delivery: the project is seen as a business relation in which the customer – rather than the user – is the most important external actor (Assumption 3).

The question now becomes what to do about the discrepancies. One approach would be to consider modifications to the method itself; another is to examine the use of the method in the context of the case study. Discussing the context of contract development in general would be a third alternative.

On the method level, Sawyer et al. (1996) discuss the impact of usability inspections in a product development setting. Their analysis of factors affecting impact points to the importance of a structured procedure, involving the designers replying in writing to the evaluators on what usability problems they intend to fix and how, and what problems they will not fix and why. Sawyer et al. argue that this procedure increases the commitment of the development teams to the findings of the usability inspections. Other factors increasing impact include providing severity ratings of the problems found, providing specific recommendations for fixes, encouraging designer participation in the inspections, and arranging for the evaluators to be involved in the development projects as early as possible.
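As an illustration, the record produced by such a written reply procedure might look roughly like the sketch below. The field names, severity scale and impact measure are our own assumptions; Sawyer et al. do not prescribe a specific format.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    WILL_FIX = "will fix"
    WONT_FIX = "won't fix"
    DEFERRED = "deferred"

@dataclass
class InspectionReply:
    """The design team's written reply to one inspection finding."""
    finding_id: int
    severity: int             # evaluator's rating, e.g. 1..4
    recommendation: str       # evaluator's suggested fix
    disposition: Disposition  # what the team intends to do
    rationale: str            # the required 'why', esp. for WONT_FIX

def impact_ratio(replies):
    """A crude impact measure: the share of findings the team commits
    to fixing."""
    fixed = sum(r.disposition is Disposition.WILL_FIX for r in replies)
    return fixed / len(replies) if replies else 0.0
```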
As we entered the case study, our hypothesis was actually – in line with Sawyer et al.'s findings – that a structured procedure for communicating evaluation findings would be important for impact (Näslund, 1994). However, the problems we found during the case study were not attributable to the communication procedure, but rather to the use of a usability-oriented method in the contract development context.

With a method-in-context perspective, the main priority becomes to address the discrepancies between the method's assumptions and the values of the use context. The following summarizes our conclusions on this level.
· Make external design, i.e., the design of the appearance and behavior of the system from the users' point of view, a visible and respectable activity by introducing it formally into the systems development model. Today, external design is typically embedded in the general ``design'' phase, which mainly addresses the construction of the software. It is important to promote it to first-order status and to define it as design of the whole system from the users' perspective, rather than equating it with user interface design.
· Make usability part of the project success measures. This can be addressed in part ``from within'', but the most important determinant of success is customer acceptance. In other words, the main issue is to create a customer demand for usability. Mauro (1994) summarizes a set of cost-justification strategies for usability in contract development, including liability for product safety, reducing service and maintenance costs, pointing to the increased complexity of new products, and the possibility of gaining competitive market advantages through more usable information systems.
· Support exploration of the external-design space. This issue is largely about changing the current view of progress in a systems development project. The designers must come to see the goal of external design as reduction of uncertainty rather than as the production of one solution. To facilitate this change, it would be valuable to structure external design as a set of areas to learn about, rather than as a sequence of parts to be developed.

Finally, we will briefly examine the structural framework surrounding contract development: the project as a business relation between customer and contractor, regulated by a contract. It is clear in our case study that the notion of an exhaustive and early-frozen requirement specification runs counter to the assumptions behind the usability inspection method used. To accommodate exploration and design-for-use, it is necessary that the business relation (1) emphasizes the responsibility of the customer for the future work situation of the users he or she represents, and (2) mutually allows for new things to be learned and worked into the contract throughout the project.
When purchasing information systems development, the customer organization is buying facilitation of its own internal enterprise development rather than a technical artifact. It is essential to facilitate the transition from traditional product contracts to process contracts, which primarily specify the process by which a product is obtained. One example in this direction is the Mentor project model for development of interactive systems (Thomsen, 1993). The idea is that after an initial specification and negotiation of time schedule and price, the cycle of design, negotiation of time and price, development and evaluation is iterated to mutual satisfaction. Users and customers participate actively in all phases through structured working groups. Large projects are divided into subsystems that each have their own life cycle. The time and price negotiations are based on a rather simple estimation model whose main parameters are the type and the complexity of the system components to be developed.
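Thomsen's estimation model is not reproduced in this paper; the sketch below only suggests its general shape, a per-component base effort scaled by a complexity factor, with entirely invented component types and numbers.

```python
# Invented parameters for illustration; the Mentor model defines its
# own component types and coefficients.
BASE_HOURS = {"dialogue": 40, "report": 24, "batch function": 16}
COMPLEXITY = {"simple": 0.5, "normal": 1.0, "complex": 2.0}

def estimate_hours(components):
    """components: (type, complexity) pairs for the subsystem under
    negotiation; returns the estimated development effort in hours."""
    return sum(BASE_HOURS[t] * COMPLEXITY[c] for t, c in components)

# Basis for one subsystem's time-and-price negotiation:
print(estimate_hours([("dialogue", "complex"), ("report", "simple")]))  # 92.0
```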
A similar approach is the TAK project management model developed by the Swedish consultancy Linné Data. Delivery time, cost and level of ambition (in terms of what system to build) are variable parameters that the customer and the contractor negotiate at predetermined milestones throughout the project. Moreover, the customer in the TAK model is responsible for most project decisions. The contractor supports the customer by providing the necessary information on which to base the decisions; the business relation is generally based on mutual learning.

To conclude, this paper has illustrated the implications of regarding methods not as standalone vehicles for systems development knowledge, but rather as situated in their contexts of actual use. It is our opinion that method development benefits from taking the contextual perspective.

Acknowledgements

We are grateful to the professional developers who gave their time and expertise to make this field study possible. Three anonymous reviewers provided excellent comments on an earlier version. The research was funded by the Swedish Board for Industrial and Technical Development (Nutek) under the Information Systems programme.

References

Buie, E., Winkler, I., 1994. HCI challenges in government contracting: a CHI '94 SIG report. SIGCHI Bulletin 26 (4), 49–50.
Dumas, J., Redish, J., 1993. A Practical Guide to Usability Testing. Ablex, Norwood, NJ.
Grudin, J., 1991. Interactive systems: bridging the gaps between developers and users. IEEE Computer, April, 59–69.
Löwgren, J., 1995. Perspectives on usability. Lecture Notes LiTH-IDA-R-95-23, Department of Computer Science, Linköping University, Sweden. Available by anonymous FTP at ftp.ida.liu.se/pub/publications/techrep/1995/r-95-23.ps.gz.
Mauro, C., 1994. Cost-justifying usability in a contractor company. In: Bias, R., Mayhew, D. (Eds.), Cost-Justifying Usability. Academic Press, Boston, pp. 123–142.
Näslund, T., 1992. On the role of evaluations in iterative development of managerial support systems. Linköping Studies in Science and Technology 335, Linköping University, Sweden.
Näslund, T., 1994. Supporting design communication with explicit representation of evaluation feedback. Scand. J. Information Systems 6 (2), 21–42.
Näslund, T., 1996. Computers in context – but in which context? Scand. J. Information Systems 8 (1), 3–28.
Nielsen, J., Mack, R. (Eds.), 1994. Usability Inspection Methods. Wiley, New York.
Sawyer, P., Flanders, A., Wixon, D., 1996. Making a difference – the impact of inspections. In: Human Factors in Computing Systems, CHI '96 Proceedings. ACM Press, New York, pp. 376–382.
Thomsen, K.S., 1993. The Mentor project model: a model for experimental development of contract software. Scand. J. Information Systems 5, 113–131.

Torbjörn Näslund works with IT quality at the medical company Astra Arcus. His research is focused on process improvement and usability in practice. Address: Astra Arcus, AIE 208, S-151 85 Södertälje, Sweden.
[email protected].

Jonas Löwgren is a teacher and researcher in interaction design at Malmö University College. His research is focused on innovative design and contemporary design theory. Address: School of Art and Communication, Malmö University College, S-205 06 Malmö, Sweden.
[email protected].