CHAIN: Developing model-driven contextual help for adaptive user interfaces


Accepted Manuscript

CHAIN: Developing Model-Driven Contextual Help for Adaptive User Interfaces
Pierre A. Akiki

PII: S0164-1212(17)30250-9
DOI: 10.1016/j.jss.2017.10.017
Reference: JSS 10056

To appear in: The Journal of Systems & Software

Received date: 7 February 2017
Revised date: 26 September 2017
Accepted date: 13 October 2017

Please cite this article as: Pierre A. Akiki , CHAIN: Developing Model-Driven Contextual Help for Adaptive User Interfaces, The Journal of Systems & Software (2017), doi: 10.1016/j.jss.2017.10.017

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Highlights


- CHAIN is an approach for developing model-driven contextual help for adaptive UIs.
- CHAIN help models are defined using a language (CHAINXML) and a visual notation.
- The definition of help models is supported by Cedar Studio.
- CHAIN help models work with both new and legacy application UIs.
- Two evaluation studies provided positive indications and insights for future work.


CHAIN: Developing Model-Driven Contextual Help for Adaptive User Interfaces

Pierre A. Akiki
Department of Computer Science, Notre Dame University – Louaize, Lebanon


Abstract— Adaptive user interfaces (UIs) change their presentation at runtime to remain usable in different contexts-of-use. Such changes can cause discrepancies between the UI and static help materials, e.g., videos and screenshots, thereby negatively impacting the latter's usefulness (usability and utility). This paper presents Contextual Help for Adaptive INterfaces (CHAIN), which is an approach for developing model-driven contextual help that maintains its usefulness across UI adaptations. This trait is achieved by interpreting the help models at runtime and overlaying instructions on the running adapted UI. A language called Contextual Help for Adaptive INterfaces eXtensible Markup Language (CHAINXML) and a visual notation were developed for expressing and depicting help models. A technique was also devised for presenting CHAIN help models over legacy applications, whether or not their source-code is available. A supporting tool was developed as an extension to Cedar Studio. This work was empirically evaluated in two studies. The first study performed a preliminary evaluation of CHAIN's visual notation. The second study evaluated CHAIN's strengths and shortcomings after using it to develop help for real-life adaptive UIs. The results gave a positive indication about CHAIN's technical qualities and provided insights that could inform future work.

Keywords: Contextual Help, Adaptive User Interfaces, Model-Driven Help, Software Support, Design Tools and Techniques

1 INTRODUCTION

Adaptive user interfaces (UIs) change their presentation at runtime to preserve their usability in different contexts-of-use (user, platform, and environment) [1]. Such changes negatively affect the usefulness (usability and utility [2], p. 25) of static end-user help such as videos and screenshots. This undesirable effect is due to discrepancies between the initial UI, on which the help is based, and the adapted UI. For example, a widget would become difficult to locate using static UI help if its location changes at runtime. Therefore, adaptive UIs require a new type of help system, which also needs to be adaptive [3].

This paper presents an approach for developing model-driven contextual UI help as an alternative to static help. Contextual help is a form of assistance that is presented to end-users on the running UI, rather than in a separate viewer such as a browser or a video player [4]. As end-users interact with a running UI, contextual help can provide assistance by displaying popup messages, entering example data, highlighting elements, and so on. Representing contextual help as interpreted runtime models allows instructions and markings to be overlaid dynamically on the running UI, thereby maintaining the help's usefulness across different UI adaptations.

1.1 Advantages of Model-Driven Contextual Help

Non-contextual help is presented apart from the UI, e.g., using a video player or a secondary display [5]. Moving back and forth between the UI and the help viewer could distract end-users and increase their memory load. Since contextual help is presented on top of the running UI, it can more effectively support the end-users' learning process [4]. A direct advantage of representing contextual help as models is that its content can be updated if the UI is modified due to rapidly changing requirements, e.g., in web applications. On the other hand, static help artifacts like videos are more costly to update, because they would have to be redeveloped from scratch.

Cheng et al. [6] consider that "software systems must become more versatile, flexible, …, customizable, configurable, and self-optimizing by adapting to changing operational contexts, environments or system characteristics". Adaptive UIs have been researched as a way of solving usability problems when contexts-of-use vary. Several approaches have been devised for developing adaptive UIs [7]. For example, Comet(s) are a set of widgets that support UI plasticity by either self-adapting or being adapted [8]. Supple uses preference elicitation and cost functions to generate UIs that are adapted to an end-user's abilities and preferences [9]. RBUIS adopts an interpreted runtime model-driven approach for developing adaptive UIs, which offer a minimal feature-set and an optimal layout based on the context-of-use [10]. Regardless of the adopted development approach, adaptive UIs can ultimately change their visual characteristics, thereby becoming inconsistent with the content of static help materials.

The work presented in this paper complements UI adaptation techniques by allowing software applications to benefit from adaptive UI capabilities, while providing end-users with contextual help that maintains its usefulness with different adaptations of the same UI. For example, enterprise applications like enterprise resource planning (ERP) systems can benefit from adaptive behavior to tailor their thousands of complex UIs for different contexts-of-use. These applications, e.g., Lawson Smart Office [11], can also benefit from offering help to their end-users.

————————————————
Pierre A. Akiki is with the Department of Computer Science, Notre Dame University – Louaize, Zouk Mosbeh, P.O. Box: 72, Zouk Mikael, Lebanon. E-mail: [email protected]


Fig. 1. An Example of a Single Contextual Help Scenario Running on Two Different Adaptations of the Same "Customer Information" UI - (a) UI V1 Groups its Widgets Using Group Boxes, whereas (b) UI V2 Groups its Widgets Using Tab Pages


1.2 Proposed Contextual Help Development Approach

This paper presents a help development approach called Contextual Help for Adaptive INterfaces (CHAIN). Help models can be defined using an XML-compliant language called CHAINXML and a visual notation. A supporting visual tool was added as an extension to Cedar Studio, an integrated development environment (IDE) for adaptive model-driven UIs [12]. Sophisticated contextual help can be created using this tool without any programming. Software applications can request help models by calling a web-service that returns a CHAINXML document. This document is interpreted at runtime and presented over the running UI as contextual help.

The example illustrated in Fig. 1 demonstrates part of a contextual help scenario running over two different adaptations of the same UI. This simple example shows the importance of the proposed approach for developing adaptive contextual help. If a static help development technique were used, then a different artifact (e.g., video) would be needed for each adaptation (V1 and V2) of the UI. Having many runtime variations of the context-of-use makes developing different versions of the help very costly or even unfeasible.

CHAIN help scenarios can be played on a timeline. The end-users control a help scenario by going forward and backward, as is done in videos, using a slider and a set of buttons like the ones shown in Fig. 1-c. End-users can also request the part of a help scenario that explains a specific section of the UI.

CHAIN is also useful for developing help for non-adaptive UIs. Yet, it primarily targets the development of contextual help for adaptive UIs. Hence, the latter will be the focus of this paper.
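As a rough illustration of this request-and-interpret flow, the sketch below parses a help document and returns its steps in playback order. The element names, attributes, and overall schema in the sample are invented placeholders for illustration only; they are not CHAIN's actual CHAINXML schema or web-service API.

```python
import xml.etree.ElementTree as ET

# A stand-in for the document a help web-service might return.
# Element and attribute names here are hypothetical, not CHAINXML.
SAMPLE_HELP_XML = """
<helpScenario name="create-sales-invoice" speed="1.0">
  <scene order="1" flow="sequential">select customer</scene>
  <scene order="2" flow="sequential">enter invoice lines</scene>
  <scene order="3" flow="parallel">highlight totals</scene>
</helpScenario>
"""

def interpret_scenario(xml_text):
    """Parse a help document and return scene descriptions in playback order."""
    root = ET.fromstring(xml_text)
    # Scenes carry an explicit order, so playback does not depend on document order.
    scenes = sorted(root.findall("scene"), key=lambda s: int(s.get("order", "0")))
    return [s.text.strip() for s in scenes]

if __name__ == "__main__":
    for step in interpret_scenario(SAMPLE_HELP_XML):
        print(step)
```

A real interpreter would map each scene to overlay actions on the running UI; this sketch only shows the parse-then-play-in-order structure that an interpreted runtime model implies.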

1.3 Structure of the Article

Section 2 categorizes the related work according to the adopted development approach, and discusses its strengths and shortcomings. Section 3 presents CHAIN's meta-model, while Section 4 presents its XML-compliant language (CHAINXML) and visual notation. Section 5 explains how help models can be developed and validated based on a proposed architecture. Section 6 explains how CHAIN help maintains its usefulness as the UI adapts. Section 7 explains how CHAIN help models can be used with legacy systems, regardless of whether the source-code of these systems is available. A tool for supporting the development of CHAIN help is presented in Section 8. The work presented in this paper is evaluated in two studies, which are presented in Sections 9 and 10. The first study is a preliminary evaluation to obtain feedback on CHAIN's visual notation. The second study evaluates the strengths and limitations of CHAIN through actual use. Finally, the conclusions are given and the future work is discussed in Section 11.

2 RELATED WORK

Some software help systems, e.g., the work done as part of project Genie [13], date back several decades. To the best of my knowledge, contextual help that maintains its usefulness across different UI adaptations has not been targeted by the state-of-the-art help development systems. Researchers have used different approaches for developing various forms of end-user help. This section classifies the existing works based on the adopted development approach and points out their strengths and shortcomings. A summary of the state-of-the-art help systems discussed in this section is presented in Table 1. This table briefly states each system's expected input and produced output. The UI level of abstraction that each system operates on is also shown in Table 1, with respect to the Cameleon Reference Framework (CRF) [1].

TABLE 1
Summary of the State-of-the-Art Help Systems (System | Input | Output | CRF Level)

Static Non-Contextual Help:
- Helpndoc [15] | Text and figures | Help document (PDF, Word, etc.) | -
- Help and Manual [16] | Text and figures | Help document (PDF, Word, etc.) | -
- ScreenSteps [19] | Text and screenshots | Help document (PDF, Word, etc.) | -
- Steps Recorder [18] | Key log, screenshots, and annotations | Help steps as text and screenshots | -
- Snagit [17] | Screenshots and audio recording | Help video | -

Frameworks and Toolkit Extensions:
- CogentHelp [21] | Help parts | Help pages | AUI
- Balloon Tooltips [22] | Programmed tooltips | Tooltips showing brief messages | CUI
- SideViews [23] | Programmed side view (MVC) | Interactive tooltip | CUI
- ToolClips [24] | Text and video | Tooltips with text and video | CUI
- Crystal framework [25] | Built-in answer generator | Dialogs that answer questions | CUI, FUI
- Writer's Aid [26] | User preferences, bibliography information, and AI planning | Help for identifying and inserting citations in a document | CUI, FUI

Pixel Analysis:
- Dixon et al. [27] | UI content and hierarchy | Stencil-based tutorials | FUI
- Sikuli [28] | Script and screenshots of UI elements | Slide show presented over the UI | FUI

Macro Recorders:
- Microsoft [29], Apple [30], JitBit [31], MiniMouse [32], Selenium [33], CoScripter [34] | End-user activities that are recorded or manually specified (e.g., using a script) | Activity playback over the UIs of desktop or web applications | FUI

Usage Logging:
- Task-Interface-Log [35] | End-user UI interaction log for a task | Playback of previous interactions | FUI
- OWL [36] | Logged user data (e.g., commands) | Intelligent tips and skillometer | FUI
- Greco et al. [37] | System logs | Helps administrators in identifying frequent workflow executions | FUI
- AIMHelp [38] | End-user interaction information and the monitored state of the GUI | Automatically generated help information | FUI

Crowdsourcing:
- LemonAid [39] | UI part on which end-user requires help | Previously asked questions and their answers for the selected UI part | FUI
- CHiC [40] | Automatic context identification | Help wiki page created by the community | FUI

Model-Driven Engineering:
- Xplain [41], [42] | Task model association with the UI | Self-explanatory UI | Task
- Kern et al. [3] | Annotations on task models | Help for adaptive UIs | Task, CUI
- Pangoli et al. [43] | Tasks associated with interaction objects | Generated task-oriented help | Task
- Moriyon et al. [55] | Model (presentation, behavior, and help extensions) and help rules | Generated help text | CUI
- Collagen [44] | Task model | Agent that helps end-users in performing certain tasks | Task
- DiamondHelp [56], [57] | Task model (based on Collagen), and user actions and utterances | Collaborative dialogues between application and end-users | Task
- Dekoven [45] | Task model (based on Collagen) | Help for physical appliances | Task
- HATS [46] | ATOMS task model | Task-oriented context-sensitive help | Task
- Gribova [47] | Ontology models linking tasks to UI | Context-sensitive help | Task, CUI

2.1 Static Non-Contextual Help

Static non-contextual help can have different forms, such as external text-based documentation, pictures, and videos, and it accompanies software applications to guide end-users on how to perform certain tasks. Plaisant et al. provide guidelines for developing prerecorded demonstrations on how to use the UIs of software systems [14]. A variety of authoring tools are available for supporting the development of static non-contextual help. For example, some tools support the preparation of HTML-based help files, while others support screen casting for recording demonstration videos. Helpndoc [15] and Help and Manual [16] are two examples of such tools. Snagit can be used to prepare screenshot tutorials [17], while Steps Recorder [18] can create screenshots with descriptions based on a set of actions that an end-user performs on Windows. Although some tools such as ScreenSteps [19] refer to the type of help they support as "contextual", the help materials are mainly screenshots and not overlays on the running UI. For example, ScreenSteps was used with Sales Force [20] to develop help lessons that only improve on fully text-based help documents by including screenshots with annotations.


As mentioned in Section 1.1, the main shortcoming of non-contextual help materials such as prerecorded videos and screenshots lies in their static nature, which prevents them from preserving their usefulness as the UI adapts.

2.2 Frameworks and Toolkit Extensions

Frameworks and toolkit extensions can be built on top of presentation technologies such as Java Swing and Windows Forms, in order to support the development of end-user help systems. For example, CogentHelp was developed with Java's Abstract Windowing Toolkit (AWT) as a prototype tool for authoring, maintaining, and customizing help [21]. With CogentHelp, technical stakeholders, e.g., developers, author the help system parts that describe how individual windows or widgets function. Then, CogentHelp generates help pages based on the individual parts that these stakeholders produced. Balloon tooltips [22] is a help system that presents basic explanations inside popup balloons. Other examples such as Side Views [23] and ToolClips [24] can present richer help content.

The ability to ask questions about the UI allows end-users to stay informed on what is happening [48]. One example targeting this ability is the Crystal framework, which goes beyond simple help catalogues into answering questions on "why do things occur or do not occur" and "how to use certain application features" [25]. AI planning is used by some systems such as SmartAidè [49] and Writer's Aid [26], in order to offer advanced help.

Despite their importance, these techniques do not offer contextual help that works with adaptive UIs. Although some systems, e.g., CogentHelp, can generate contextual help content that targets a particular context, this help does not adapt to a varying context-of-use. Also, frameworks and toolkit extensions can make the help content technology-dependent. Other techniques such as pixel analysis and model-driven engineering can provide technology independence.

2.3 Pixel Analysis

Pixel analysis is used by some techniques for providing contextual help on UIs without accessing the software application's source code. These techniques can be considered as attachments, which are small interactive programs that augment the functionality of other applications [50]. Dixon et al. used pixel analysis for extending UIs with stencil-based tutorials [27]. Sikuli allows end-users to query a help system using screenshots from the GUI [28]. Yeh et al. presented a technique that analyzes the pixels of GUI screenshots and provides contextual help using a scripted slideshow [4].

An advantage of pixel analysis is its ability to support the development of technology-independent help, which can be added to existing applications without code modifications. Yet, pixel analysis is inhibited by scale changes and theme variations in the UI [4]. Hence, if it is used with adaptive UIs, these limitations could be crippling. At runtime, adaptive UIs adapt characteristics such as the accessibility of functions, colors, icons, information density, image-to-text ratio, and widget types [51]. Adapting such characteristics can make it impossible to locate a UI element through its initial pixel representation.

2.4 Macro Recorders

Macro recorders are used to record user activity on a UI in order to replay it later. These tools are not primarily intended for developing end-user help, but are aimed at automating routine activities. Microsoft Office applications support the creation of macros for automating tasks [29]. JitBit [31] and Mini Mouse [32] are two examples of macro recorders that can support various Windows applications. Apple Automator automates activities on a variety of Mac OS applications [30]. Selenium is a browser plugin that supports recording and playback of activities on web applications [33]. One example from the academic literature is CoScripter [34], which supports collaboration on task recording and automation in web-based systems.

Macro recorders do not support annotations such as markings and callouts, which are necessary for end-user help to provide explanation. Some macro recorders are negatively affected by resolution changes. This limitation also hinders the ability to operate on different adaptations of a UI. Furthermore, macro recorders cannot validate whether a macro still conforms to a UI after it gets updated, whereas this trait is possible with help models through model validation. A macro-like tool for recording user activities can complement a visual help model design tool and facilitate the creation of help content. Developers or expert users could perform a task using a UI, and their actions would be recorded in order to serve as a basis for defining help models.

2.5 Usage Logging

Some approaches log the usage history data of end-users over a long period of time and play it back to explain how certain software activities are used. Babaian et al. provide a model called the Task-Interface-Log (TIL), which logs the tasks performed by enterprise system end-users for later playback [35]. Organization-Wide Learning (OWL) is a recommender system that logs end-user data and uses it to provide learning tips [36]. Greco et al. use system logs to help administrators in identifying the frequent past choices that lead to a desired configuration [37]. AIMHelp generates help content by eliciting interaction information at runtime [38].

Like macro recorders, this approach does not provide the ability to add annotations on the UI. Existing techniques such as the TIL do not work with a legacy system unless there is access to its source-code. Logging data in large-scale systems such as enterprise applications can consume a lot of storage space, and it could be hard to identify the part of the data that is needed for explaining an activity. On the other hand, CHAIN help models are composed based on knowledge from human experts. Striking a balance between automation and human input is important for realizing systems that efficiently produce a high-quality output. Another disadvantage that usage logging shares with macro recorders is the difficulty of supporting multilingual help. Querying usage logs for entries in the target language could be difficult. On the other hand, help models can target multiple human languages.


Fig. 2. A Class Diagram Representing the Meta-Model of CHAIN’s Abstract Concepts


2.6 Crowdsourcing

Some tools rely on crowdsourcing to involve end-users in the process of developing help for software applications. LemonAid is a tool that leverages the knowledge of the crowd to provide contextual help for web applications [39]. End-users can benefit from previously answered questions about specific parts of the UI. While using a web application, end-users can ask for help on a specific part of the UI, and LemonAid provides them with a searchable list of questions and answers that were generated by the crowd. CHiC is another example tool in this category. It provides community help for the Eclipse development environment [40].

Although this approach does not directly target adaptive UIs, it has a promising potential for involving end-users in the development of contextual help for such UIs. The data collected through crowdsourcing can potentially be used for producing CHAIN models. For example, if end-users were given an automation tool (like a macro recorder) for recording their on-screen activities, it would be possible to generate a CHAIN model from this recording. Crowdsourcing already has its applications in the discipline of software engineering [52], and it shows a potential for involving end-users in UI adaptation [53]. Hence, it could allow expert end-users to create help for end-users who are less knowledgeable about how to use a certain software system.

2.7 Model-Driven Engineering

Model-driven engineering (MDE) can be leveraged to develop end-user help. With MDE, it is possible to offer help by anticipating problems that end-users may face at runtime [54]. García Frey et al. used MDE to develop self-explanatory UIs [41] with the help of a tool called Xplain [42]. Kern et al. used annotations on task models to provide adaptive help for adaptive UIs [3]. Several MDE-based approaches target the automatic generation of end-user help. Pangoli et al. automatically generate task-oriented help from abstract models [43]. Moriyon et al. automatically generate help text from models [55]. DiamondHelp [56], [57] uses task models and the Collagen [58] middleware for supporting collaborative dialogues between an application and its end-users. Collagen takes a task model as an input and provides an agent that helps end-users in performing certain tasks [44]. Dekoven uses the same approach to produce textual help points that primarily target physical appliances [45]. HATS uses task models to generate context-sensitive help, which informs end-users about how the interaction could be done in order to achieve certain tasks [46]. Gribova presented an approach for generating context-sensitive help by using ontology models that link tasks to their respective UI elements [47].

The existing work demonstrates the potential of using MDE for developing help systems that are technology-independent and dynamic. However, it does not target help that remains useful when a UI adapts, thereby leaving many challenges to be addressed. One exception is the work by Kern et al. [3], but this work is at a preliminary stage and does not demonstrate a way to replace complex static help artifacts like videos. The automatic help generation offered by some MDE-based approaches could reduce development costs and anticipate help scenarios at runtime. However, in many cases, full automation might not be practical enough for developing complex help scenarios, which require the input of a domain expert. Hence, the approach presented in this paper offers human experts the tool support they need for developing model-driven contextual help.

Modelling languages like the Unified Modelling Language (UML) [59] do not provide dedicated models for representing end-user help. The Cameleon Reference Framework [1] suggests multiple levels of abstraction for developing model-driven UIs: task model (e.g., ConcurTaskTrees [60]), abstract UI (AUI), concrete UI (CUI), and final UI (FUI). Yet, these levels of abstraction only represent a UI's content.
This paper presents a novel approach for representing model-driven contextual help.


Fig. 3. A Class Diagram Representing the Meta-Model of CHAIN’s Concrete Concepts

2.8 Challenges for Help Systems

Some challenges can be deduced after exploring the related work and the different approaches that were adopted for developing end-user UI help. Help systems can become more effective by overcoming these challenges. As explained previously, some challenges were overcome by certain approaches but not others.

The first challenge lies in providing technology independence, which can be overcome by adopting a model-driven approach. The second challenge is the efficient development of high-quality help, by providing a balance between human input on the help content and automation through tool support. The third challenge is involving end-users in the definition of help. Olsen suggests that UI systems could empower new design participants such as end-users by introducing them into the design process [61]. Crowdsourcing can be one way to achieve this suggestion. Finally, a major challenge is developing end-user help that can adapt and maintain its usefulness when its corresponding UI adapts. The latter is not addressed by any of the existing systems; hence, it will be the main focus of this paper without disregarding the other key challenges.

Fig. 4. A Class Diagram Representing CHAIN Resources

3 CHAIN CONCEPTS


Contextual Help for Adaptive INterfaces (CHAIN) supports the definition of end-user contextual help, which maintains its usefulness across different adaptations of the same UI. Some systems such as Helpndoc [15] and Help and Manual [16] are used to develop tutorials and manuals for end-users. Other systems are not dedicated to developing help, but provide a form of guidance for end-users. For example, the UI development systems MacIDA and TRIDENT offer guidance related to task progression, so end-users can identify the tasks that were done and those that remain [62]. On the other hand, the work presented in this paper is a UI help system that is dedicated to the development of contextual help.

In model-driven UIs that are based on the Cameleon Reference Framework, abstract UI (AUI) models are independent of any modality, e.g., graphical, while concrete UI (CUI) models are modality-dependent. Since help models are related to UIs, CHAIN encompasses abstract (Fig. 2) and concrete (Fig. 3) concepts. CHAIN currently supports graphical UIs at the concrete level. Yet, if other modalities are added in the future, the abstract help model will be mapped to the concrete help models representing the different modalities.

3.1 Abstract Concepts

The class diagram shown in Fig. 2 represents CHAIN's abstract concepts. Abstract help models are composed of AbstractHelpModelElements, which can be one of four types: AbstractHelpContainer, AbstractHelpOperation, AbstractUserInput, and AbstractResponseToUserInput. These elements have a sort-order attribute that indicates the order in which an element is presented to the end-user. An AbstractHelpContainer groups related help model elements that will be presented together. For example, elements that explain a related part of the UI can be grouped under one container.

AbstractHelpOperations embody the help that will be presented to the end-users. Each operation can be one of three types: explanation, demonstration, and emphasis. Explanation operations instruct end-users on how to perform a task. For example, on the concrete level, these operations can be mapped to a voice narration or a callout that displays text. On the other hand, demonstration operations show the end-users how to perform a task. For example, these operations can involve moving the cursor, typing text, selecting values, and so on. Finally, emphasis operations are used to make parts of the UI more apparent to the end-users. For example, these operations can be realized by highlighting or magnifying a widget. AbstractHelpOperations are applied to AbstractUIElements, which are in turn mapped to ConcreteUIElements.

Help models can become interactive by eliciting end-user input and providing responses. On the abstract level, inputs and responses are specified using AbstractUserInputs and AbstractResponsesToUserInputs. Examples of user input on the concrete level include pressing a button, selecting an option, typing a question, and so on. As a response to an end-user's input, the system can, for example, present a relevant part of the help model. Each of the classes representing abstract concepts is assigned a stereotype. These stereotypes are also assigned to the classes that represent help models at the concrete level, in order to show how the concrete concepts match their abstract counterparts.

3.2 Concrete Concepts

The class diagram shown in Fig. 3 represents CHAIN's concrete concepts for graphical UIs. Other modalities, e.g., voice and tactile, can be explored further in the future. For example, the current support for speech could form a basis for exploring fully vocal help. Speech is currently supported in CHAIN as part of the help for graphical UIs; yet, in the future, speech could be explored with purely vocal UIs. The concrete concepts shown in Fig. 3 map to their abstract counterparts in Section 3.1. For example, Callout, TextInput, and Marking are concrete implementations of the explanation, demonstration, and emphasis AbstractOperations, respectively. Classes representing the related abstract concepts (Fig. 2) are also presented in Fig. 3 (colored in gray) to show the relationship between CHAIN's abstract and concrete concepts.

Each CHAIN Document comprises a help Scenario composed of several Scenes. A Scenario represents contextual help that is presented to end-users for assisting them in the execution of a certain task. For example, in a line-of-business application, a Scenario can demonstrate how a sales invoice is created. A speed can be specified to indicate how fast a Scenario is presented. A Scene is a single step in a Scenario. For example, "select customer" can be a Scene in the "create sales invoice" Scenario. A ScenarioFlow indicates how a Scenario moves from one Scene to another. The flow can be automatic, like a video, or it can be manual, whereby end-user input would be required to proceed. A Scenario can also be used as a static annotation, where all Scenes are presented at once like a single snapshot. A SceneFlow indicates whether a Scene should run in parallel with other Scenes or sequentially after the previous one.

Each Scene is composed of a number of SceneElements, each having one of the following four types: Action, Creation, Speech, and Animation. Action scene-elements invoke a change such as an input. TextInput simulates human-like typing at a specified speed, one letter at a time, in an input ConcreteUIElement (e.g., text box). This simulation gives data-entry a natural feel. A ValueChange can be used for changing the value of a selection ConcreteUIElement (e.g., combo box). The MouseMovement simulates cursor movement to guide end-users from one ConcreteUIElement to another. Mouse and keyboard Commands can also be sent to the UI. Creation scene-elements create help visualizations. A Callout is created next to a UIElement to describe its purpose. A Marking can be used to attract the end-user's attention to part of the UI, by highlighting it with a different color or drawing a shape around it. A UIWindow can be used to show supporting UIs, e.g., those used for looking up information. Spoken narration is recommended by existing guidelines on producing help demonstrations [14]. Both synthesized and human speech are supported by CHAIN. SynthesizedSpeech configures and invokes a text-to-speech engine to read a text, while HumanSpeech plays a prerecorded human voice.

Fig. 5. A Class Diagram Representing Some Possible CHAIN Concrete User Inputs and Responses
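The Document/Scenario/Scene structure described in Section 3.2 can be sketched roughly as follows. The class names follow the concepts in Fig. 3, but the specific fields, defaults, and the `playback_order` helper are illustrative assumptions, not CHAIN's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class ScenarioFlow(Enum):
    AUTOMATIC = "automatic"   # plays like a video
    MANUAL = "manual"         # waits for end-user input between scenes

class SceneFlow(Enum):
    SEQUENTIAL = "sequential" # runs after the previous scene
    PARALLEL = "parallel"     # runs alongside other scenes

@dataclass
class SceneElement:
    kind: str        # e.g., "TextInput", "Callout", "Marking", "MouseMovement"
    target: str      # identifier of the ConcreteUIElement the element applies to
    sort_order: int = 0

@dataclass
class Scene:
    name: str
    flow: SceneFlow = SceneFlow.SEQUENTIAL
    elements: list = field(default_factory=list)

@dataclass
class Scenario:
    name: str
    speed: float = 1.0                  # how fast the scenario is presented
    flow: ScenarioFlow = ScenarioFlow.MANUAL
    scenes: list = field(default_factory=list)

    def playback_order(self):
        """Return scene names in the order they would be played on the timeline."""
        return [scene.name for scene in self.scenes]

# Example: the "create sales invoice" scenario mentioned in the text,
# with hypothetical widget identifiers as targets.
invoice_help = Scenario(
    name="create sales invoice",
    scenes=[
        Scene("select customer",
              elements=[SceneElement("Callout", "customerComboBox")]),
        Scene("enter invoice lines",
              elements=[SceneElement("TextInput", "quantityTextBox")]),
    ],
)
```

Because scene elements reference UI elements by identity rather than by pixel position, an interpreter can resolve each target against the currently adapted UI at runtime, which is what lets the same help model survive layout changes.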


Listing 1. An Example Excerpt of a Help Scenario Represented as a CHAINXML Document. The excerpt includes help text in English ("Enter the maximum purchases that this customer can make on credit.") and in French ("Entrez les achats maximales que ce client peut faire à crédit.").
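The TextInput behavior described in Section 3.2 (human-like typing at a specified speed, one letter at a time) can be approximated with a simple loop. The callback-based widget update and the timing model below are assumptions made for illustration; they are not CHAIN's actual interpreter code.

```python
import time

def simulate_typing(text, set_widget_text, chars_per_second=5.0):
    """Feed `text` into a widget one character at a time.

    `set_widget_text` is a callback standing in for whatever API updates
    the target input widget (e.g., a text box) on the running UI.
    """
    delay = 1.0 / chars_per_second
    typed = ""
    for char in text:
        typed += char
        set_widget_text(typed)   # the widget shows the text growing letter by letter
        time.sleep(delay)
    return typed

# Example: record the intermediate states instead of driving a real widget.
states = []
simulate_typing("Hi", states.append, chars_per_second=1000)
# states == ["H", "Hi"]
```

Driving the widget through a callback keeps the pacing logic independent of the UI toolkit, so the same simulation can target whichever concrete widget the adapted UI currently uses.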