Interacting with Computers 12 (2000) 315–322 www.elsevier.com/locate/intcom
Editorial
Special issue on intelligent interface technology: editor's introduction

1. Introduction: adaptivity and modelling

The term 'Intelligent Interface Technology' (IIT) is intended to capture the wide range of issues and methods in which some form of 'intelligence' is applied to user interface design and implementation. Such systems originally became known as adaptive user interfaces [1–4], but with the emergence of agent-based interaction [5,6] and specific applications of intelligence to areas as diverse as intelligent hypermedia, recommender systems, intelligent filtering, explanation systems, intelligent help and computer tutoring, both issues and methods have expanded, allowing greater flexibility in defining the applications of intelligent systems, and of 'intelligence' itself. Whilst the application of artificial intelligence or knowledge-based techniques to, say, scheduling or optimising production flows, or the potential application of agent technology to, for example, automatic routing of telephone calls [6], would fall outside the area of IIT, any such application to interface issues, or to systems design and development, would fall within its boundaries. Moreover, we emphasise a focus on human interaction and on a measure of adaptivity to differing user requirements and needs: as McTear says in his paper [7], "One frequently cited indication of intelligence is the ability to adapt". A crucial feature of IIT is that it always includes a representation of the user or users of such systems. These representations exist to a greater or lesser degree of formality and are commonly known as 'user models', since they describe facets of user behaviour, knowledge and aptitudes. A whole area of research devoted to user models has come into existence, along with its own journals, conferences, meetings and burning issues. McTear [7] provides a useful overview of user models and the variety of uses to which they can be put.
In this Special Issue, our concern with IIT is primarily with the user interface. We believe that IIT should contribute to the fundamental usability of a human–computer system, though whether IIT changes the nature of usability is an interesting issue raised in one of our contributions (Höök, this volume). The basis for this publication was a workshop on 'The Reality of Intelligent Interface Technology' held at Napier University in the Spring of 1997. Since then, selected papers have been reviewed, extensively revised and extended by the authors, re-reviewed and finally compiled into the present collection. In a rapidly changing field, how can ideas and systems that were developed in 1997 still be relevant in the year 2000? The answer is
that they are still of critical relevance, since the reality of IIT is that there are hard problems to solve and that they are not solved quickly.

2. The reality of intelligent interface technology

If we go back further, to 1993, can we surely see major changes since then? 1993 was the year of the first ACM International Workshop on Intelligent User Interfaces [8]. This event was repeated in 1995 and evolved over the next few years into an annual International Conference on Intelligent User Interfaces (IUI). One of the papers at the 1993 workshop was by Pattie Maes and Robyn Kozierok on interface agents [9]. They had built a prototype interface agent that really seemed to bring the visions of Brenda Laurel and Alan Kay to life. Published in 1990, these visions heralded the days of 'indirect management' of information as against 'direct manipulation' [10]. This debate still continues at conferences such as IUI [11]. The time lag between vision and reality, then, can be quite long. We do nowadays have some excellent examples of interface agents (e.g. Ref. [12]) and, in our view, any of the systems in this volume can be considered an example of agent-based interaction. However, have agent-based systems really had the impact that was expected in the early 1990s? Recommender systems on the Internet and WWW are legion, and computer assistants proliferate in common office applications, but are knowledge-based filtering systems getting any better? Are computer tutoring systems really so much more sophisticated than the early prototypes of twenty years ago? Are agents taking on realistic information seeking and management tasks, and are explanation systems more in tune with user needs than the Expert Systems of early AI research? We believe that the realistic utilisation of reliable agents and believable user models is a very slow process and that there are still fundamental unsolved difficulties with intelligent interface technologies.
1993 also saw two special issues of journals devoted to IIT and the related field of user modelling [13,14]. In these, we published a framework (an 'architecture', or reference model) for adaptive user interfaces [4,15] and expanded upon related work for the Intelligent User Interfaces workshop [16]. The aim of this architecture was to bring a common method of description to all IIT, so that systems could be compared on their various components, and so that we could open up these 'intelligent' systems and see what was inside. To what extent that aim has been fulfilled is a moot point; whether or not it was even a sensible goal is similarly open to question. What the framework does do is provide a useful way of thinking about IIT, and for focusing on just why it is so difficult.

3. A reference model for intelligent interface technology

The framework is shown in Fig. 1. The domain model is the representation of the domain or application, described at one or more of the three levels indicated. So, for example, an e-mail filtering agent might have a domain model which describes e-mails in terms of the header, subject, sender and so on. Domain models are abstract representations of the domain, so will not include all details. Our e-mail filtering agent would probably not have a representation of the content of e-mails. The user model describes what the system 'knows' about the user. Some systems
[Fig. 1 is a box diagram. The user model comprises a psychological model, a profile model and a student model; the domain model is described at the intentional, conceptual and physical levels; the interaction model comprises the dialogue record and an interaction knowledge base of inference, adaptation and evaluation mechanisms.]
Fig. 1. Overall architecture for IIT.
concentrate on developing models of user habits, inferred by monitoring user–task interactions over time (i.e. by keeping a dialogue record). Other user profile data can often be most easily obtained simply by asking the user. Other systems try to infer user goals, although it is very difficult to infer what a user is trying to do from the data typically available to a computer system (mouse clicks and a sequence of commands). The user's knowledge of the domain is represented in the student model. The third component of the framework is the interaction model. This is an abstraction of the interaction (the dialogue record) along with mechanisms (such as a rule base, a statistical model, a genetic algorithm, etc.) for making inferences from the other models, for specifying adaptations and, possibly, for evaluating the effectiveness of the system's performance. Two points are important to note about this framework. Firstly, there may be several user roles and hence several user models, which represent the individual agents in a multi-agent system. Similarly, there may be more than one domain model. Secondly, the interaction model, as expressed through the adaptation, inference and evaluation mechanisms, may be extremely complex, embodying theories of language, pedagogy or explanation. A tutoring model, for example, represents a particular approach to teaching and to the repair dialogues concerned with the interaction between the student (the user) and the course content (the domain model); the tutoring model component of an intelligent tutoring system would be described in the interaction model. In natural language systems, the difference between the interaction-independent theory of language and the interaction-dependent nature of the actual dialogue reflects the distinction we have drawn between the dialogue record and the knowledge-base mechanisms which use that data. As a generalisation, the interaction model will represent the strategies and theory of the particular type of system in which it is embodied.
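The three components can be made concrete with a minimal sketch, using the e-mail filtering agent mentioned earlier. This is our own illustrative rendering of the framework, not code from any published system; all names and the trivial inference rule are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DomainModel:
    # Abstract representation of the application: header fields only,
    # deliberately excluding message content (as the text notes).
    attributes: list[str] = field(default_factory=lambda: ["sender", "subject", "date"])

@dataclass
class UserModel:
    # What the system 'knows' about the user, inferred or asked directly.
    preferred_senders: set[str] = field(default_factory=set)

@dataclass
class InteractionModel:
    # Dialogue record plus inference and adaptation mechanisms.
    dialogue_record: list[tuple[str, str]] = field(default_factory=list)  # (sender, action)

    def infer(self, user: UserModel) -> None:
        # Inference mechanism: a sender whose mail the user opens is preferred.
        for sender, action in self.dialogue_record:
            if action == "opened":
                user.preferred_senders.add(sender)

    def adapt(self, user: UserModel, sender: str) -> str:
        # Adaptation mechanism: route mail based on the user model.
        return "inbox" if sender in user.preferred_senders else "later"

user = UserModel()
interaction = InteractionModel(dialogue_record=[("alice", "opened"), ("spam-bot", "deleted")])
interaction.infer(user)
print(interaction.adapt(user, "alice"))     # inbox
print(interaction.adapt(user, "spam-bot"))  # later
```

Even this toy separation makes the framework's point: the domain model, user model and dialogue record can each be inspected independently of the mechanisms that use them.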
Thus the interaction model in Miah and Alty's vanishing windows 'agent' (this volume, [17]) contains a strategy for how best to remove unwanted windows (i.e. the adaptation) and a theory of how to infer that the windows are indeed unwanted. The complexity of the various models that a system possesses defines a number of levels and types of IIT. In their early consideration of adaptivity, Browne, Totterdell and Norman
[2] identify four main classes of adaptive system. Some systems are characterised by an ability to produce a change in output in response to a change in input. These rudimentary, rule-based adaptive mechanisms have limited behaviour because the adaptive mechanism is 'hard-wired'. A simple adaptive system can be enhanced if it has a dialogue record which allows it to keep a history of the interaction. The inference mechanisms can then make use of this history, rather than simply reacting to a change in input with a change in output. Other systems monitor and evaluate the effects of adaptation on the subsequent interaction; this evaluation mechanism selects from a range of possible outputs for any given input. More sophisticated systems monitor the effect on a model of the interaction: possible adaptations can be tried out in theory (i.e. run against a model of the interaction) and evaluated. Such systems must now be able to abstract from the dialogue record to capture an intentional interpretation of the interaction, and must include a representation of their own 'purpose' in the domain model. At another level of complexity, systems may be capable of changing these representations and thus 'reasoning' about the interaction. These levels reflect a change of intention: from the designer specifying mechanisms in a (simple) adaptive system, to the system itself dealing with the design and evaluation of its mechanisms in a more sophisticated fashion. Moving up the levels incurs an increasing cost which may not be justified: there is little to be gained by providing a system with the capability to adapt its own domain model, for example, if the context of the interaction is never going to change. Macredie and Keeble (this volume, [18]) use the framework to describe the components of their system, and we find it useful when examining different examples of IIT.
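The difference between the first two classes above can be sketched as follows: a hard-wired stimulus-response rule against one that consults a dialogue record. This is purely illustrative; the error signal and thresholds are invented for the example.

```python
def hardwired_adapt(errors_this_task: int) -> str:
    # Class 1: output changes only with the current input; the rule is fixed.
    return "show help" if errors_this_task > 2 else "no change"

class HistoryBasedAdapter:
    # Class 2: keep a dialogue record and infer from the accumulated history.
    def __init__(self) -> None:
        self.dialogue_record: list[int] = []  # errors per completed task

    def adapt(self, errors_this_task: int) -> str:
        self.dialogue_record.append(errors_this_task)
        # Infer a sustained trend, not just a one-off spike.
        recent = self.dialogue_record[-3:]
        if len(recent) == 3 and sum(recent) / 3 > 2:
            return "show help"
        return "no change"

adapter = HistoryBasedAdapter()
for errs in [1, 4, 1]:
    adapter.adapt(errs)   # a single bad task does not trigger help
print(adapter.adapt(5))   # sustained difficulty does: show help
```

The higher classes would wrap `adapt` in an evaluation loop, or run candidate adaptations against a model of the interaction before committing to one.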
For example, MMI2 [19] is an adaptive system which aims to select the most appropriate graphical method of displaying statistical data, dynamically generating charts of different types. The system includes a user model in which individual users are assigned to stereotypes based on the use they would make of the data. The intentional level of the domain model represents knowledge such as 'for forecasting, a graph is preferred to a table, whereas for complex static comparisons a bar chart should be used'. The conceptual level includes rules for selecting the type of chart and the best way to design it. At the physical level the system exploits the preferences of users and employs standard conventions for deciding how to display the data. The pioneering approach to user models was the GRUNDY system [20,21], which introduced the idea of 'stereotypes': sets of characteristics shared by many users. GRUNDY was also one of the first recommender systems [22], recommending books to different people (though the principles of recommending and user modelling can be seen in all 'computer dating' applications). A simple set of characteristics is used to describe stereotypical people, and each characteristic is given a value representing the degree to which it is associated with an individual. Triggers are objects associated with a situation (or another person) which select a stereotype based on the values of the characteristics. The system then makes inferences concerning the values of various characteristics derived from the stereotypes, refines the values and maintains a confidence rating in its inferences. Although such an approach can be rather crude, it can be effective, as the number of successful recommender systems demonstrates [22].
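The stereotype mechanism just described can be sketched in a few lines. This is a loose, GRUNDY-flavoured illustration of the idea (characteristics with values and confidence ratings, activated by triggers and then refined); the stereotype names, values and refinement rule are our own assumptions, not taken from the actual system.

```python
# Each stereotype is a bundle of characteristic -> (value, confidence).
stereotypes = {
    "thriller-reader": {"tolerance-for-violence": (8, 0.7), "interest-in-romance": (3, 0.5)},
    "romance-reader":  {"tolerance-for-violence": (2, 0.6), "interest-in-romance": (9, 0.8)},
}

# Triggers map an observed situation to a stereotype.
triggers = {"likes spy novels": "thriller-reader", "likes love stories": "romance-reader"}

def build_user_model(observations: list[str]) -> dict[str, tuple[float, float]]:
    model: dict[str, tuple[float, float]] = {}
    for obs in observations:
        stereotype = triggers.get(obs)
        if stereotype is None:
            continue
        for characteristic, (value, confidence) in stereotypes[stereotype].items():
            if characteristic in model:
                # Refinement: average conflicting values, keep the lower confidence.
                old_value, old_conf = model[characteristic]
                model[characteristic] = ((old_value + value) / 2, min(old_conf, confidence))
            else:
                model[characteristic] = (value, confidence)
    return model

model = build_user_model(["likes spy novels"])
print(model["tolerance-for-violence"])  # (8, 0.7)
```

Crude as it is, the same value-plus-confidence bookkeeping underlies many later recommender systems.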
Other systems try to represent fundamental psychological data about users. One of the reasons for focusing on psychological models is that these are the characteristics most resistant to change in people [23] and which can vary considerably between individuals (by ratios as high as 20:1, i.e. one person may take twenty times as long as another to complete the same task). People can learn domain knowledge, but are less likely to be able to change fundamental psychological characteristics such as spatial ability. One of the difficulties with capturing psychological data is that the only signals which a computer can receive are the sequences of tokens passed across the interface and attributes of that sequence, such as timing information (this is the dialogue record of Fig. 1). Although this bandwidth will increase, it remains very narrow compared to the wealth of information that we as humans can perceive. For example, the dialogue record in the example in Ref. [24] consisted of just two values: the number of tasks completed and the number of errors made. In the Flexcel system [25], the dialogue record is a list of Excel commands, and the domain model is a list of Excel functions along with Flexcel functions and Flexcel functions with default values. The user model consists of usage statistics and a usage profile which describes user-defined menu entries and an adaptation tip threshold. Whether we are talking about an agent that seeks out Web pages that it 'thinks' the user will like [12], an adaptive hypermedia system that hides or displays data according to what it 'believes' the user's goal to be [26], or a user interface that adapts to the inferred spatial ability of a user [15], we can describe it with reference to the domain, user and interaction models that it has. Doing so can help to demystify the capability of the 'intelligence' in the system, and make users more willing to accept the limits of this 'intelligence'.
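A Flexcel-style usage profile of the kind just described can be sketched as follows: the dialogue record is a list of commands, and an adaptation tip is proposed once a command's usage crosses a threshold. The function name and the threshold value are assumptions for illustration, not details of the Flexcel system itself.

```python
from collections import Counter

TIP_THRESHOLD = 3  # hypothetical adaptation-tip threshold

def suggest_adaptations(dialogue_record: list[str]) -> list[str]:
    # Usage statistics are just command frequencies over the dialogue record.
    usage_statistics = Counter(dialogue_record)
    # Commands used often enough become candidates for an adaptation tip
    # (e.g. offering a user-defined menu entry).
    return [cmd for cmd, count in usage_statistics.items() if count >= TIP_THRESHOLD]

record = ["SUM", "FORMAT", "SUM", "SUM", "AVERAGE"]
print(suggest_adaptations(record))  # ['SUM']
```

This also illustrates how narrow the bandwidth is: everything the system can infer about the user must come from counts over such token sequences.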
4. The IIT workshop papers

In this Special Issue of Interacting with Computers, there are five papers which between them provide a comprehensive coverage of the main areas of IIT. The first and last papers deal with the main challenges: both those that face IIT and those that are caused by it. In between are two papers providing detailed descriptions of examples of IIT, or agent-based interaction, and one that focuses on how to design for IIT. McTear [7] discusses the relationship between the theory and reality of IIT, focusing on two important facets: user models and natural language processing. He identifies two key aspects: that of an interface being able to adapt its behaviour to the perceived needs of individual users, and its ability to convey information and to communicate through natural language. He discusses the ideas of planning and plan inference through an explication of a number of case studies in spoken natural language systems and the MS Office assistant project, providing many examples to support his arguments. He concludes that there is indeed a gap between theoretical research and commercial practice, but stresses that other aspects also intrude, notably system usability and acceptability. This provides a nice linkage to a later paper in this collection (Höök, this volume) and clearly shows the gulf between theory and practice which IIT practitioners are attempting to bridge through their implementations of intelligent and adaptive systems. Miah and Alty [17] concentrate on a much more localised and specific area: that of automating and assisting a user to manage and efficiently access on-screen windows. By
explaining the notion of 'clutter' and the best and worst features of windowing systems, the authors make a case for an adaptive window management system based on a user model. They pose the basic questions of an adaptive system (the goals, how adaptation is effected, and under what conditions adaptation should take place). As such, this paper fits firmly in the problem domain of IIT, with the overall aim of freeing the user for a more effective interaction, allowing users "to become more involved in achieving the task at hand and… less in manipulating the interface". The authors specifically discuss the user interface issue of window management and look at how windows can be made to shrink when they have not been used for a length of time. Their system of 'vanishing windows' can adopt different strategies for shrinking windows, and they provide a detailed analysis of the differences between the strategies. Once again this paper discusses the idea of constraints on interfaces: windows are constrained by the type of information that they contain, and this affects the appropriateness of the different strategies used. Macredie and Keeble [18] posit as a basic premise the need for realistic expectations about the information available concerning an interaction when specifying an adaptive interface. This illustrates precisely the dilemma of adaptivity: that subsequent (adapted) interactions should benefit the user rather than making the interface less usable. The problem lies in just how to achieve this aim. They ably demonstrate and explore these issues through software agents for Web browsing: a task which is well suited to varying levels of customisation and flexibility, since Web agents can adapt to individual preferences and to the characteristics of the domain. The authors provide a coherent overview of Web browsing agents and list a set of typical and well-focused Web and system activities which set the scope for an adaptive interface.
They present a detailed case study of the development of these agents, with full descriptions of their user, domain and interaction models and adaptation mechanisms, showing how such an architecture can provide a set of 'adaptive agents' and how users can interact with them. Their framework is a useful addition to the adaptive interface literature, firmly based in a well-known application domain and showing that IIT can be a distinct reality. Akoumianakis, Savidis and Stephanidis [27] describe an approach to developing unified user interfaces and show how IIT can be used to provide accessibility to systems for users with different capabilities. They provide an overview of the prevailing strands in IIT, focusing on adaptation at the user interface based upon both user and discourse modelling, and address the overriding objectives of customisation, individualisation and tailorability. They concentrate on the issues of model-based development for interface adaptivity, encapsulating reusable models and knowledge repositories, and declare their support for declarative specification. They also clearly explain how the paradigm of agent-based interaction provides a new insight into the problems of creating intelligent user interfaces. The authors illustrate their design approach and software environment with examples from developing graphical user interfaces for people with disabilities, and discuss how various metaphors can be implemented side by side, with access controlled by a modified PAC architecture. Such a multiple-metaphor environment implies an integrated multiple-toolkit platform, and the authors provide a clear demonstration of the embodiment of plausible alternative metaphors in such an environment, mapping target domain functions to the presentation symbols required in a separable underlying architecture. The aim of unified user interface design is to allow the range of design articulation
(from early accommodation of users through to propagation of design knowledge) to be achieved in a formalised, stepwise fashion. The suggested development platform then integrates a collection of unified object classes and feeds into a specification language for interface generation and implementation. Based on this model, the authors raise a number of interesting pointers to the implementation aspects of intelligent interface design and the use of reference models to enhance such constructions, particularly focused on the limitations and required features of unified toolkits. In the final paper, Höök [28] cogently presents a number of 'high-level' issues that need to be overcome before IIT becomes really useful. The usability principles and issues involved in the provision of adaptive interfaces and intelligent systems have been alluded to in all the papers in this Special Issue and in much of the user modelling literature, but this paper brings them together in a coherent argument and explains in some detail just what the implications may be. Issues such as control, predictability, usability, privacy and trust need to be considered in any IIT development, and the author identifies which of these, together with an 'interface culture', can make an adaptive design useful. She offers a comparison of a number of methods that can and should be applied and highlights areas where research is still required, setting the scene for ongoing work in design practice in adaptivity and intelligent interfaces.
5. In conclusion

This Special Issue of Interacting with Computers is timely, then, as we move into the new millennium. There are many inventive systems out there, and many difficulties. To what extent such difficulties will be easily and elegantly overcome remains to be seen. Changes in technology will undoubtedly have a significant impact, and problems that were once major will soon become minor: the speed of processing, for example, or the small amounts of data from which to make inferences. The emergence of an informed and increasingly sophisticated mass of users on the Web has helped to develop the effectiveness of recommender systems. The experience of computer interaction by major sectors of our society, and a recognition of both its potential and its limitations, means that expectations change and that experience and practice are both constantly in flux. We believe that we still have a long way to go, but that the journey will be a fascinating one.
References

[1] P.R. Innocent, Towards self-adaptive systems, Int. J. Man–Machine Studies 16 (1982).
[2] D. Browne, P. Totterdell, M. Norman (Eds.), Adaptive User Interfaces, Academic Press, London, 1990.
[3] D.M. Murray, Modeling for adaptivity, in: M.J. Tauber, D. Ackermann (Eds.), Mental Models and Human–Computer Interaction 2, Elsevier, Amsterdam, 1990.
[4] D.R. Benyon, D.M. Murray, Developing adaptive systems to fit individual aptitudes, in: Proceedings of the International Workshop on Intelligent User Interfaces, Orlando, Florida, January 4–7, 1993, pp. 115–121.
[5] B. Laurel, Interface agents, in: B. Laurel (Ed.), The Art of Human–Computer Interface Design, Addison-Wesley, Wokingham, UK, 1990.
[6] M.J. Wooldridge, N.R. Jennings (Eds.), Intelligent Agents, Lecture Notes in Artificial Intelligence, Springer, Berlin, 1995.
[7] M.F. McTear, Intelligent interface technology: from theory to reality?, in: D. Benyon, D.M. Murray (Eds.), The Reality of Intelligent Interface Technology, Special Issue of Interacting with Computers, 1999 (this issue).
[8] W. Gray, W. Hefley, D.M. Murray (Eds.), Proceedings of the International Workshop on Intelligent User Interfaces, Orlando, Florida, January 4–7, ACM Press, New York, 1993.
[9] R. Kozierok, P. Maes, A learning interface agent for scheduling meetings, in: W. Gray, W. Hefley, D.M. Murray (Eds.), Proceedings of the International Workshop on Intelligent User Interfaces, Orlando, Florida, January 4–7, 1993, pp. 81–88.
[10] A. Kay, User interface: a personal view, in: B. Laurel (Ed.), The Art of Human–Computer Interface Design, Addison-Wesley, Wokingham, UK, 1990.
[11] P. Maes, B. Shneiderman, Direct manipulation vs. interface agents: a debate, Interactions 4 (6) (1997).
[12] H. Lieberman, Integrating user interface agents with conventional applications, Knowledge-Based Systems 11 (1998) 15–23.
[13] Special issue of Knowledge-Based Systems 6 (4) (1993).
[14] Special issue of Artificial Intelligence Review 6 (1993).
[15] D.R. Benyon, D.M. Murray, Applying user modelling to HCI design, Artificial Intelligence Review 6 (1993) 43–69.
[16] W. Hefley, D.M. Murray, Intelligent user interfaces, in: W. Gray, W. Hefley, D.M. Murray (Eds.), Proceedings of the International Workshop on Intelligent User Interfaces, Orlando, Florida, January 4–7, 1993, pp. 3–10.
[17] T. Miah, J.L. Alty, Vanishing windows: a technique for adaptive window management, in: D. Benyon, D.M. Murray (Eds.), The Reality of Intelligent Interface Technology, Special Issue of Interacting with Computers, 1999.
[18] R.J. Keeble, R.D. Macredie, Assistant agents for the World Wide Web: intelligent interface design challenges, in: D. Benyon, D.M. Murray (Eds.), The Reality of Intelligent Interface Technology, Special Issue of Interacting with Computers, 1999.
[19] H. Chappel, M. Wilson, Knowledge-based design of graphical responses, in: Proceedings of the International Workshop on Intelligent User Interfaces, Orlando, Florida, January 4–7, 1993, pp. 29–36.
[20] E. Rich, Stereotypes and user modelling, in: A. Kobsa, W. Wahlster (Eds.), User Models in Dialog Systems, Springer, Berlin, 1989.
[21] E. Rich, Users are individuals: individualizing user models, Int. J. Human-Comput. Studies, 30th Anniversary Special Issue 51 (2) (1999) 323–338.
[22] Special issue of Communications of the ACM, 1994.
[23] G.C. van der Veer, Human–Computer Interaction: Learning, Individual Differences and Design Recommendations, Offsetdrukkerij Haveka B.V., Alblasserdam, 1990.
[24] D.R. Benyon, Adaptive systems: a solution to usability problems, User Modeling and User-Adapted Interaction 3 (1) (1993) 65–87.
[25] C.G. Thomas, M. Krogsaeter, An adaptive environment for the user interface of Excel, in: Proceedings of the International Workshop on Intelligent User Interfaces, Orlando, Florida, January 4–7, 1993, pp. 123–130.
[26] K. Höök, Evaluating the utility and usability of an adaptive hypermedia system, Knowledge-Based Systems 10 (5) (1998).
[27] D. Akoumianakis, A. Savidis, C. Stephanidis, Encapsulating intelligent interactive behaviour in unified user interface artefacts, in: D. Benyon, D.M. Murray (Eds.), The Reality of Intelligent Interface Technology, Special Issue of Interacting with Computers, 1999.
[28] K. Höök, Steps to take before intelligent user interfaces become real, in: D. Benyon, D.M. Murray (Eds.), The Reality of Intelligent Interface Technology, Special Issue of Interacting with Computers, 1999.
D.R. Benyon
School of Computing, Napier University, 219 Colinton Road, Edinburgh EH14 1DJ, UK
E-mail address: [email protected]

D.M. Murray
59 Cambridge Road, Teddington, Middlesex TW11 8DT, UK