Advances in Engineering Software 40 (2009) 1287–1296
Framework and authoring tool for an extension of the UIML language

Luis Iñesta *, Nathalie Aquino, Juan Sánchez

Centro de Investigación en Métodos de Producción de Software, Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain

* Corresponding author. Tel.: +34 963 877 350; fax: +34 963 877 359. E-mail address: [email protected] (L. Iñesta).
Article history: Received 22 September 2008; Received in revised form 19 November 2008; Accepted 19 January 2009; Available online 11 April 2009.

Keywords: Multi-platform user interface; UIML; Model-driven development; Authoring tool

Abstract

This paper presents a framework for the design of User Interfaces (UIs). By applying model transformations, the framework allows different UIs to be generated for different computing platforms. The tool presented in this work helps designers build an abstract user interface which is later transformed into a concrete user interface by means of transformation techniques based on graph grammars. These techniques can be used to generate implementation code for several UI platforms, including desktop applications, dynamic websites, and mobile applications. The generated user interfaces are integrated with a multi-tier application by referencing external services and communicating with the application core over Web Service protocols. Our tool also allows the concrete interfaces to be enhanced before generating the final UI. The approach uses an adaptation of UIML (User Interface Markup Language). The adaptation focuses on defining a data model and a services model, and it also introduces a navigation model that allows data communication from one UI to another. The obtained UIs, together with Web Services, can form complete applications instead of just being prototypes.
1. Introduction

With the popularization of the Internet and related wireless technologies, users can access the Web from almost anywhere in the world using a wide variety of devices such as personal digital assistants, mobile phones, laptops, etc. In order to face this challenge, current software applications must provide valuable services to their users independently of how and from where these services are accessed. With the emergence of pervasive computing, operations and transactions can be started from almost any type of device, and organizations must provide access to data and services at any time and from any place.

In recent years, the development of highly interactive software with improved Graphical User Interfaces (GUIs) has become increasingly common. User Interfaces (UIs) have become more and more complex, giving interface designers complete creative freedom. The programming of user interfaces is a relevant part of software development. Many studies have analyzed software development practices, but qualitative studies of UI-related work practices in software development are relatively rare [1]. Nowadays, when applications have to be implemented on several platforms and devices, the effort needed to obtain high-quality interfaces increases. This effort can be avoided by means of model-driven development (MDD) techniques [2], where application code can be automatically generated from conceptual models. Although
techniques for Model-Based User Interface Development (MB-UID) were developed more than 20 years ago (most of them were presented in the 90's [3]), existing products do not have the required industrial maturity. MB-UID frameworks allow designers to specify UIs by means of declarative models using highly abstract constructs, but designers lose control over low-level details of the generated product. There is currently a real need for tools that support the design of multi-platform UIs.

In this work, we present a tool that is based on an adaptation of the User Interface Markup Language (UIML) [4] and that allows the definition of UIs at different levels of abstraction. Taking advantage of the characteristics of UIML, the designer can work with abstract and concrete interfaces that use device families and different vocabularies or UI toolkits. Vocabulary transformations are defined using graph grammars. The designer can enhance the UI at every level of abstraction. In short, the main contributions that we present in this paper are the following:

– A model-driven approach for creating consistent user interfaces for a wide range of computing platforms.
– An improved version of the UIML language.
– A framework that is independent of the computing platform and that smoothly integrates the user interface with the functional core of the application.

The article is structured as follows: Section 2 is dedicated to the foundations of the proposal and to related work; Section 3
presents the modifications made to UIML in order to adapt it to our approach; Section 4 explains vocabulary transformations; Section 5 discusses traceability between user interface models; Section 6 presents the authoring tool and offers a general view of its implementation using the Eclipse framework; finally, Section 7 is dedicated to conclusions and future work.
2. Foundations and related works

This section is divided into two sub-sections. Works that are related to our approach are presented in the first sub-section. UIML is introduced in the second sub-section. As we have mentioned above, our approach is based on an adaptation of UIML.

2.1. Related works

There is a set of works that uses different dialects of the eXtensible Markup Language (XML) [5] to describe interfaces. These languages are known as XML-compliant User Interface Description Languages (UIDLs) [6,7]. Most of these languages are based only on specific UI models and, thus, are not intended to describe UIs at various levels of abstraction. According to [6,7], XISL focuses only on dialog; WSXL focuses on presentation, dialog and data; and AUIML, Seescoa XML, XUL, AAIML, XForms, RIML and ISML all focus mainly on presentation and dialog. Some of these UIDLs are explained in more detail below.

AUIML (Abstract User Interface Markup Language) [8] defines the intent of an interaction with a user and does not focus on the appearance of the UI. It is intended to be independent of any client platform, any implementation language, and any UI implementation technology. A UI is described in terms of manipulated elements (data model), interaction elements (presentation model), and actions (which describe a micro-dialog in order to manage events between the interface and the data).

XUL (eXtensible User Interface Language) [9] is a Mozilla XML-based language for describing window layouts. The goal of XUL is to build cross-platform applications, making them easily portable to all of the operating systems on which Mozilla runs. The UIs can be bound to Web Services. In this way, developers can quickly create their own applications and execute them on a web server or directly on their own computers.

XForms [10] is the result of the effort of the W3C consortium to express forms-based UIs at a level that is more abstract than physical HTML descriptions. XForms is basically aimed at expressing forms-based UIs with presentation and some dialog aspects. An XForms definition differentiates three modules or layers related to the UI: data, logic and presentation. The data layer contains the definition of the data model of the form according to an XML Schema [11]. The logic layer establishes relations and restrictions among the different elements of the model, as well as calculations that cannot be expressed in the data layer. The presentation layer contains the form representation as it will be perceived by the final user.

Considering the type of models that can be represented, the UIDLs mentioned above are not very suitable for describing UIs at various levels of abstraction. XIML, Teresa XML and UsiXML provide support for a more extended set of UI models and allow UIs to be described at various levels of abstraction. They also provide mechanisms for traceability or mappings among models.

XIML (eXtensible Interface Markup Language) [12] is a hierarchically organized set of interface elements that are distributed into one or more interface components. XIML
predefines five basic interface components: task, domain, user, dialog and presentation. An XIML description is also composed of attributes and relations, and a predefined set of attributes already exists. XIML describes UI abstract aspects (e.g., tasks, domain and user) and concrete aspects (i.e., presentation and dialog) throughout the development life cycle. Mappings from abstract to concrete aspects are also supported.

Teresa XML [13] is composed of two parts: (i) an XML description of the CTT notation [14], and (ii) a language for describing UIs. The UI description part of Teresa XML specifies how the various Abstract Interaction Objects (AIOs) [15] that make up the UI are organized, along with the specification of the UI dialog.

UsiXML (USer Interface eXtensible Markup Language) [16] is a UIDL that allows designers to apply a multi-path development of UIs. In this development paradigm, a UI can be specified from different, and possibly multiple, levels of abstraction (Tasks and Concepts, Abstract UI, Concrete UI, Final UI) while still maintaining the mappings between these levels if required. Thus, the development process can be initiated from any level of abstraction and proceed towards obtaining one or many final UIs for various contexts of use at other levels of abstraction. UsiXML incorporates graph transformations in order to derive models from other models and to establish the element correspondences at the different levels. UsiXML also incorporates a context model that indicates the platform and target device, among other aspects.
2.2. The UIML language

UIML [4] is a UIDL that allows a UI to be described following the meta-interface model (MIM) presented in [17]. The MIM proposal consists of three components: presentation, interface, and logic, which are shown in Fig. 1 and explained as follows. The presentation component defines how the interface will be shown to the user. It consists of a set of vocabularies that can represent the UI on several platforms or devices. The interface component describes the user interface itself from four separate perspectives: structure, style, content, and behaviour. The structure describes the organization of components or UI parts; the style describes presentation-specific properties for each part of the UI; the content establishes the information that is presented to the user; and the behaviour defines the interaction at run time, including UI events and invocations of application operations. This component is connected to the real world through the peers, which map UI parts to presentation and logic parts. The logic component is the bridge between the UI specification and the application logic, which communicate with each other by means of functions and services.
Fig. 1. The UIML meta-interface model.
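To make the separation of perspectives more concrete, the following fragment sketches a minimal UIML document with one window and one button. It is an abbreviated sketch rather than a complete, validated UIML file, and the part names, class names (GenericTopContainer, GenericButton), vocabulary reference and logic call are illustrative placeholders, not names taken from a published vocabulary.

  <uiml>
    <interface>
      <structure>
        <!-- organization of UI parts -->
        <part id="MainWindow" class="GenericTopContainer">
          <part id="GreetBtn" class="GenericButton"/>
        </part>
      </structure>
      <style>
        <!-- presentation-specific properties for each part -->
        <property part-name="MainWindow" name="title">Example</property>
        <property part-name="GreetBtn" name="label">Say hello</property>
      </style>
      <behavior>
        <!-- run-time interaction: an event triggers a call to the logic peer -->
        <rule>
          <condition>
            <event class="actionPerformed" part-name="GreetBtn"/>
          </condition>
          <action>
            <call name="Greeter.sayHello"/>
          </action>
        </rule>
      </behavior>
    </interface>
    <peers>
      <!-- mapping of parts and calls to a concrete toolkit and to the application logic -->
      <presentation base="GenericVocabulary"/>
      <logic/>
    </peers>
  </uiml>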
The clear separation of these aspects facilitates both the reuse of UI definitions and consistency across different platforms. The UI is described just once, employing a model that abstracts away the type of software and hardware that will be used as its final support. Similarly, the interconnection of the UI with the business logic, services, and other components is defined just once. At this early stage, neither the final device nor the UI metaphor (a graphical, textual or audio environment) is important. A UIML UI can be rendered for any of the mentioned environments using the appropriate interpreter or compiler.

One distinguishing aspect of UIML is the use of different UI vocabularies, which allow the different parts that can be used in a UI specification to be defined in a generic way. A vocabulary is defined as a set of UI components (widgets) together with their properties, operations and events. Following this approach, UIML uses different abstraction levels, ranging from very abstract vocabularies to vocabularies that are very close to the platform on which the UI is going to be rendered. In our proposal, we use an abstract (generic) vocabulary and a set of vocabularies for each device family (desktop applications, mobile applications, etc.). It is possible to use different concrete vocabularies for each device family, and it is also possible to generate the final UI from each one of them. An MDD approach allows us to define transformations among UI definitions expressed in UIML, from the abstract level to the different concrete levels. A transformation is defined by means of rules that allow elements of one vocabulary to be transformed into elements of another vocabulary; hence, a UIML file is transformed into another UIML file. This approach allows us to define translations from one vocabulary to another by means of MDD model transformations.
3. UIML adaptation

In our opinion, UIML does not entirely meet the expressive requirements of a complete UI model. Firstly, the vocabulary definitions are inconsistent and do not adequately separate the rendering mappings. Secondly, UIML lacks a navigational model to establish communicational data flows between interfaces. We present the proposed improvements below.

3.1. Revision of the vocabulary definition

As explained in Section 2, the presentation component in UIML is used to specify user interface vocabularies. Each vocabulary defines a set of class names, events, and listeners, together with their property names, which are available to be referenced when describing parts. Thus, the presentation element consists of a collection of d-class elements, in which the platform-specific widget type can be set in order to be rendered (e.g., XML tags or Java classes). In addition, d-property elements within d-class define not only class properties but also the way they implement methods for setting and getting values.

Using presentation definitions to establish the mappings between UIML parts and their desired implementation is the mechanism adopted by the UIML standard. However, from our perspective, mixing UI descriptions with specific implementations breaks the MDD approach. Therefore, in this work, we propose a strict separation between definition and rendering. An alternative presentation structure is introduced below, with external renderers/code generators being responsible for the mapping to final UI components. In order to provide render-independent vocabulary definitions, we extend the standard UIML language with the vocabulary element. This element is an external alternative to presentation
sections, since both represent the same basic concepts. Within vocabulary, uiClass elements can be defined; within each uiClass, property, operation and event elements can be nested. The following sub-sections explain extra features that are not found in standard UIML.

3.1.1. Top containers
We define top-container widgets as those which do not need to be contained in another widget in order to provide full functionality and to be executed independently. Examples of top-container classes are Window or Applet, but not Listbox (in the context of a Java vocabulary). Identifying top-container classes helps to determine whether a UI is fully well-formed or is intended to work as a template.

3.1.2. Inheritance
The default UIML mechanism for defining UI classes is hard to handle when there is a need to share common features (properties, operations, and events) among several classes, since UIML forces each class to be defined independently. This leads to possible consistency errors when the vocabulary definition changes (e.g., updating a property name within one class but not updating the same property within the other classes). The proposed extension borrows the inheritance concept from the object-oriented paradigm [18] and imports it into vocabulary classes. Thus, a UI class can extend any other UI class within the same vocabulary, adopting its properties, operations and events.

3.1.3. Containment constraints
Another inconsistent aspect of the standard language is that UIML allows interface parts to be nested freely, regardless of their nature. In practice, although some components are generic containers, most of them are specialized and can only contain certain specific elements. An illustrative example is the window menu bar, where menus, sub-menus and separators are expected, but not text boxes or scrollbars. Since UIML does not provide the expressiveness to specify such containment constraints, we extend it with containment-constraint elements. These constraints are located within a class and define which other classes may be contained in that class. In addition, a cardinality can be set, establishing the maximum number of components of a certain class that the container accepts.

3.1.4. XML Schema data types
UIML lacks a specification of the data types used in a UI definition. In UIML files, every literal value is represented by means of a text string with an arbitrary value. The interpretation and validation of this text string is relegated to the software that renders the UI. In our approach, we have used XML Schema [11] as a base for referencing the data that is displayed in a UI. In this way, UIML files can be strictly validated, checking the syntax of each literal value against the XML Schema definition of the corresponding data type.

3.1.5. Vocabulary extension meta-model
The extensions explained above are collected in the meta-model shown in Fig. 2. A very close similarity to the OMG UML [19] class diagram meta-model can be observed. Due to this similarity, a UML profile that conforms to the vocabulary definition has been designed in order to obtain an agile method for defining vocabularies. Table 1 shows the stereotypes defined and their correspondences. In this way, any profile-compliant UML editor can be used to specify user interface vocabularies, and the standard XMI code can be translated to the equivalent UIML code. In addition, by applying reverse engineering methods, it is possible to obtain a UML class diagram from UI toolkit code (e.g., Java libraries).
By using the appropriate stereotypes in this diagram, vocabulary definitions can be obtained directly from toolkit code in a semi-automated process.
Fig. 2. Vocabulary extension meta-model.
Table 1. UML stereotypes and their UIML correspondences.

  OMG UML stereotype            Extended UIML vocabulary
  «…» package                   Vocabulary
  «…» class                     UIClass
  «…» property                  Property
  «…» operation                 Operation
  «…» operation                 Event
  Generalization                Extends
  Aggregation                   Containment constraint
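As an illustration of the proposed extensions, the fragment below sketches how a small desktop vocabulary might be written with the vocabulary, uiClass, property, operation, event and containment-constraint elements described above. The element names follow the extension as described in this section, but the concrete attribute syntax (extends, topContainer, maxCardinality, and the xsd type references) is an illustrative assumption rather than a normative schema.

  <vocabulary name="DesktopVocabulary">
    <!-- abstract base class shared by all widgets (inheritance, Section 3.1.2) -->
    <uiClass name="Widget">
      <property name="enabled" type="xsd:boolean"/>
      <property name="tooltip" type="xsd:string"/>
    </uiClass>

    <!-- a top container (Section 3.1.1) that may only contain MenuBar and Panel parts -->
    <uiClass name="Window" extends="Widget" topContainer="true">
      <property name="title" type="xsd:string"/>
      <operation name="close"/>
      <event name="onClose"/>
      <containment-constraint class="MenuBar" maxCardinality="1"/>
      <containment-constraint class="Panel"/>
    </uiClass>

    <!-- menu bars accept menus and separators, but not arbitrary widgets (Section 3.1.3) -->
    <uiClass name="MenuBar" extends="Widget">
      <containment-constraint class="Menu"/>
      <containment-constraint class="Separator"/>
    </uiClass>

    <!-- literal values can be validated against XML Schema data types (Section 3.1.4) -->
    <uiClass name="Button" extends="Widget">
      <property name="label" type="xsd:string"/>
      <event name="onClick"/>
    </uiClass>

    <!-- remaining classes (Panel, Menu, Separator) omitted for brevity -->
  </vocabulary>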
3.2. Communicational navigation model

In order to specify the data source of a UI and the possible transfer of information from one interface to another, we have introduced a communicational navigation model. The information presented in a UI A can come from the invocation of an external service, which fills in the fields of UI A, or it can come from information that the user enters when interacting with the system. This information can be directed to a UI B as it is, or it can be transformed by means of the invocation of one or more services and/or elementary operations. From this point of view, we consider that UI A has output ports (the information that it provides) and UI B has input ports (the information that it receives).

Fig. 3 presents a graphical representation of the data flow within a communicational navigation using the example of an e-mail validation service (which checks whether or not an e-mail address actually exists). The interface 'input email' provides two data output ports: mail and domain. The data flow uses concatenation operators to construct the whole e-mail address. The result of these operations is passed as input to the external service (in this case, a Web Service), and the service response parameters are connected to the input data ports of the 'show response' interface, which is responsible for showing feedback to the user.

This navigation model with data flows corresponds to a fifth perspective in the user interface definition (in addition to structure, style, content and behaviour, as seen in Section 2). Since UIs can be considered as black boxes with input/output ports, there is no need to know the other aspects of the interface explicitly in order to define the data flows and navigations.
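The navigation model is not given a concrete textual syntax in this section, so the fragment below is only a hypothetical sketch of how the e-mail validation example of Fig. 3 could be serialized as this fifth perspective; all element and attribute names (navigation, output-port, data-flow, service-call, and so on) are invented for illustration.

  <navigation id="validateEmail" source="input_email" target="show_response">
    <!-- output ports exposed by the source interface -->
    <output-port name="mail"/>
    <output-port name="domain"/>

    <!-- data flow: concatenate mail + '@' + domain and pass the result to an external service -->
    <data-flow>
      <operator type="concat" result="address">
        <input port="mail"/>
        <literal>@</literal>
        <input port="domain"/>
      </operator>
      <service-call service="MailBoxValidator" operation="validate">
        <param name="email" value="address"/>
        <result name="valid"/>
        <result name="message"/>
      </service-call>
    </data-flow>

    <!-- input ports of the target interface, fed from the service response -->
    <input-port name="valid" value="valid"/>
    <input-port name="message" value="message"/>
  </navigation>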
4. Vocabulary-based UI transformations

The purpose of allowing different vocabularies is to be able to use as many abstraction levels as the designer considers appropriate.
Fig. 3. Graphical representation of a navigation rule with data communication.
Fig. 4. MDD model transformation architecture applied to UI vocabularies.
Although it is possible to design UIs using concrete vocabularies directly and to generate the corresponding code, our goal is different: we want to define complete UI models in an abstract vocabulary and then transform this model into an equivalent model defined in a more concrete vocabulary (one closer to the implementation). This process can be repeated vertically (for every defined level of abstraction) as well as horizontally (deriving various models for different device families or platforms at the same level of abstraction). In this way, the UI can be enhanced with an increasingly descriptive vocabulary. To make this possible, an automated process is needed. The approach that we have followed is to consider each vocabulary as an instance of a UI meta-model and to apply MDD model transformations, as Fig. 4 illustrates.

4.1. Graph grammar transformations

There are multiple alternatives that can be considered when choosing a transformation mechanism. Some of these alternatives are detailed in [20]. Of these, we have studied graph transformations [21] with special interest, because this approach provides a formal framework based on mathematical foundations. This mechanism uses graph grammars to represent the elements of
the model that are going to change. Every transformation rule consists of a pair of graphs. The graph presented on the left-hand side (LHS) of the rule represents the original status, and the graph presented on the right-hand side (RHS) of the rule represents the final status. Additionally, positive or negative conditions can be included, which establish variable values that must be reached in order to apply the rule. The parts of the model that are not represented are not changed.

Fig. 5 presents a specific example of the graph transformation rules that can be applied in our work. This rule represents the case where a single abstract window with a large number of components must be split into two dialogs for a mobile device due to screen size restrictions. There is a positive application condition (PAC) for checking the number of components and for avoiding the application of this rule when there are few components. The source model must have two sub-containers in order to separate them. As the figure shows, in the target model the original top window has disappeared and there are two windows, each of which corresponds to one of the original sub-containers. Thus, buttons and navigation rules between these two windows must be created. Other navigations, in which the original window is the source or the target, must be updated.

Fig. 5. Example of one graph transformation rule.
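Neither AGG's nor VIATRA's concrete rule syntax is reproduced in this paper, so the fragment below is only a schematic, tool-neutral sketch of the window-splitting rule of Fig. 5, with an LHS, an RHS and a positive application condition; all element and attribute names, including maxComponentsForDevice, are invented for illustration.

  <transformation-rule name="SplitLargeWindow">
    <!-- positive application condition: only fire when the window is crowded -->
    <pac expression="count(lhs://Window/Container/*) &gt; maxComponentsForDevice"/>

    <lhs>
      <node id="w" type="Window">
        <node id="c1" type="Container"/>
        <node id="c2" type="Container"/>
      </node>
    </lhs>

    <rhs>
      <!-- the original window disappears; each sub-container becomes its own window -->
      <node id="w1" type="Window">
        <node id="c1" type="Container"/>
        <node id="nav1" type="Button" label="Next"/>
      </node>
      <node id="w2" type="Window">
        <node id="c2" type="Container"/>
        <node id="nav2" type="Button" label="Back"/>
      </node>
      <!-- navigation rules between the two new windows -->
      <edge type="navigation" source="nav1" target="w2"/>
      <edge type="navigation" source="nav2" target="w1"/>
    </rhs>
  </transformation-rule>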
4.2. Rule representations and transformation engines

Graph grammars are mathematical structures that do not have a standard representation within software engineering, unlike other transformation proposals such as the Query/View/Transformation language (QVT) [22] or the ATLAS Transformation Language (ATL) [23]. However, there are transformation frameworks based on graph grammars that support the modeling of rewriting rules, and there are also working groups developing XML formats for graph grammar data interchange (such as the Graph eXchange Language, GXL [24]). Examples of mature frameworks are AGG and VIATRA, which are explained below:

– The Attributed Graph Grammar System (AGG) [25] is a development environment for attributed graph transformations. Rules can be specified with a graphical editor, and models can be stored in XML files using the standard GXL format. AGG also provides a transformation engine that can be used within other applications.
– The Visual Automated model TRansformations (VIATRA) [26] framework is aimed at supporting the entire life-cycle of engineering model transformations. VIATRA works on top of the Eclipse framework and provides a model space for the representation of graph transformations and abstract state machines. It also includes a high-performance transformation engine.

Either of these alternatives is suitable for our framework, since both support graph grammar rule modeling and provide a transformation engine. However, VIATRA offers better interconnectivity with our Eclipse-based authoring tool and models, which is the main reason why we are currently working with it.

4.3. Code generation

The last step in the UI development process is the generation of a FUI (final UI) from a CUI (concrete UI). Our approach defines a very adaptable mechanism for this task: a generator must receive the UI model as input and emit artifacts (e.g., code, documentation, reports, etc.) as output. We have developed code generators for HTML, JavaSE and JavaME in order to validate the whole process of UI generation.
Fig. 6. Design, transformation and generation of UIs for three platforms: HTML, JavaSE/Swing and J2ME.
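As a purely illustrative pairing, and not actual output of the generators described above, a concrete HTML-vocabulary part such as the one in the first half of this sketch might be emitted as the XHTML fragment shown after it; both the vocabulary class name (HtmlButton) and the generated markup are assumptions.

  <!-- concrete UI model in an HTML vocabulary (sketch) -->
  <structure>
    <part id="SendBtn" class="HtmlButton"/>
  </structure>
  <style>
    <property part-name="SendBtn" name="label">Send</property>
  </style>

  <!-- artifact a code generator might emit for this part (sketch) -->
  <input type="submit" id="SendBtn" value="Send"/>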
The mechanism employed for generation is JET templates (Java Emitter Templates [27]). However, the application admits any kind of artifact generator, regardless of its implementation.

4.4. Transformation example

Fig. 6 shows screenshots from a typical usage scenario of the tool. The user designs an AUI (abstract UI) using the generic vocabulary. The resulting model corresponds to the PIM (Platform-Independent Model) of the Model-Driven Architecture [28]. The AUI can be transformed into UI models associated with device families, at an intermediate PIM/PSM level. At the following level, the PSM (Platform-Specific Model) is derived and, finally, the corresponding code is generated. The tool allows the designer to enrich the UI appearance, content, and functionality at any of these levels. The closer the UI is to the PSM, the more expressive the UI models are.

5. Traceability between user interface models

In the scope of model-driven development, automated model transformation techniques provide new opportunities for traceability. When a transformation is applied, it implicitly establishes a relationship between the source elements and the target elements. This information can be stored explicitly in a traceability model, integrating this step into the transformation process. Thus, the management of traceability information can be automated, avoiding difficult and error-prone manual handling.

In the context of our work, where user interface models are transformed from abstract levels towards concrete levels, traceability is essential. Since the same interface model is handled through several abstraction levels, maintaining the consistency between models is a priority requirement. Adding, deleting and updating interface elements at an abstract level must cause change propagation towards the more concrete models. This feature can be easily achieved by forcing a complete regeneration of the models, but that solution implies repeating the same transformations for elements that were not affected. By using traceability information, however, the transformation engine can detect the set of elements that are affected by changes and apply the appropriate transformation rules specifically to them, thereby turning change propagation into a more agile process. In addition, traceability relationships allow propagation to be made both in a top-down direction (from abstract models towards concrete models) and in a bottom-up direction (concrete model changes can redefine abstract models). Moreover, they allow horizontal propagation, detecting implicit traceability dependencies among models at the same level of abstraction.
5.1. UI traceability meta-model

Even though several traceability meta-models have been proposed in the literature [29–32], in this work we suggest using a basic core meta-model that has enough expressiveness to represent links among elements. Fig. 7 shows the UML diagram corresponding to the proposed meta-model. The main element in the model is TraceabilityLink, which captures the relationship between a set of source elements and another set of target elements. This link has many-to-many cardinality to allow high adaptability; however, in most cases, it will have either one-to-many or many-to-one cardinality. Source elements and target elements are expected to belong to different models, even though this is not a constraint (traceability between elements within the same model can be defined). In addition, a TraceabilityType property can be set for TraceabilityLink objects, which defines the specific nature of the relationship. Initially, three traceability types have been identified in the context of user interface model transformation, as shown in Fig. 8:

– Identity: the target element has the same semantics as the source element. This relationship is bidirectional and could have one-to-many cardinality if the source element is cloned in several places of the target model.
– Composition: the target element is made up of the source elements. This relationship is common when a large interface is split into smaller ones (or vice versa). Depending on the perspective of the observer, the same relationship can be considered as a composition or a decomposition.
– Derivation: the target element is a new part that has been created to form an auxiliary pattern in the transformation (e.g., navigational parts when an interface is split). The source elements of the link are the cause of this creation.

This traceability model is extensible and can be improved as new traceability requirements are detected in our models.

5.2. Integrating traceability mechanisms

The integration of the traceability models presented above into the UI development method is done in two ways: by extending the transformation process to generate traceability information, and by improving the design process with change detection and propagation. In the first case, the transformation models that are based on graph grammars must be enriched to produce the related traceability model in addition to the UI target model.
Fig. 7. Basic traceability meta-model.

Fig. 8. Example of traceability links between UI elements.
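The traceability meta-model is described in this paper only as a UML diagram, so the fragment below is a hypothetical XML serialization of two TraceabilityLink instances, showing how a link of type composition could relate an abstract window that was split to the two concrete windows derived from it, and how the added navigation buttons would be traced as a derivation; the element names and the idea of referencing model elements by id are assumptions rather than part of the published meta-model.

  <traceability-model>
    <!-- a large abstract window was split into two concrete dialogs -->
    <traceability-link type="composition">
      <source element="abstract.uiml#MainWindow"/>
      <target element="mobile.uiml#MainWindow_Page1"/>
      <target element="mobile.uiml#MainWindow_Page2"/>
    </traceability-link>

    <!-- the navigation buttons added by the split rule are derived, not copied -->
    <traceability-link type="derivation">
      <source element="abstract.uiml#MainWindow"/>
      <target element="mobile.uiml#NextButton"/>
      <target element="mobile.uiml#BackButton"/>
    </traceability-link>
  </traceability-model>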
Fig. 9. Example of traceability link generation within a transformation rule (only showing a subset of the links).
To achieve this, the rewriting rules create new TraceabilityLink elements for each transformation applied, as shown in Fig. 9.

In the second case, a notification mechanism within the design environment must be implemented. This mechanism analyzes all the changes undergone by the model and detects the affected elements. Next, it obtains the traceability information related to the interface model in order to discover other models that are linked to the current one by means of transformations and to determine which elements need updating. The next step is to run the transformation engine, giving the identifiers of the elements to be updated as arguments instead of transforming the entire model. The change propagation process can be either automated or user-requested (allowing a model to be modified without affecting the other models).

6. Authoring tool

Although XML-based UI description languages can define user interfaces in a simple textual representation, a visual WYSIWYG (What You See Is What You Get) editor is usually preferred when designing the interface. Therefore, in a complete UI development process, we need an authoring tool that allows for fast design and feedback. There are few UIML-based UI editors. The most representative is LiquidApps [33], which is developed by the same group responsible for the standard version of UIML. Other contributions are described in [34,35]. None of these tools uses a full model-driven development approach. Even though they allow UI design and deployment to several target platforms, they lack a development process that fully covers the path from abstract definitions towards final UIs, as well as well-defined formal model transformations. Therefore, we have developed an authoring tool that supports the design and development process introduced in the previous sections. The application translates the concept of the UIML perspectives directly into the editor. The editor provides four views: model structure, UI designer, table of contents, and behaviour rules. Each view is explained as follows.
The model structure view presents a global view of the entire UI model and offers advanced options such as filtering and searching for specific elements. It is not intended for direct editing of elements.

The UI designer view consists of a graphical editor with a drawing area and a palette of components (see Fig. 10). The interfaces of the model can be edited by placing them in the drawing area. New components can be added to the UI by selecting the component type and drawing directly on the interface by means of a drag-and-drop mechanism. An outline panel shows the content tree of the current UI, and a property sheet allows the designer to set values for the class properties defined by the current vocabulary. In addition, this visual designer can switch between the different appearances and contents defined in the model, refreshing the figures in the drawing area.

In the contents view, UI content schemas can be created by establishing constants of the model and assigning concrete values in each schema. In this way, the user can establish sets of values for specific constants and select one of them when the application generates the final interfaces.

The behaviour view allows the creation of rules that define the interactions of the UIs with the user, as well as navigation rules. An interaction rule is composed of a triggering event, conditions to be met, and actions. Some possible actions are: changing the current value of component properties, invoking class methods, calling external services, and triggering navigation rules. Navigation rules define the movement from one UI to another (see Section 3).

The environment is fully integrated with the transformation engines. This means that the user can directly invoke a transformation using the current interface as input, and then edit the output model, customizing the new details that become available. This process can be repeated through several transformations, enriching the interface model until a concrete model is reached, from which the final code can be generated.
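As an example of the kind of rule edited in the behaviour view, the following sketch uses UIML-style behaviour markup (rule, condition, event, action, call); the part names, the validateAddress service, the Navigation.goTo call and the exact attribute names are illustrative assumptions rather than output of the tool.

  <behavior>
    <rule>
      <!-- triggering event: the user presses the Validate button -->
      <condition>
        <event class="actionPerformed" part-name="ValidateButton"/>
      </condition>
      <action>
        <!-- call an external service with the current field values -->
        <call name="MailBoxValidator.validateAddress">
          <param name="mail">
            <property part-name="MailField" name="text"/>
          </param>
          <param name="domain">
            <property part-name="DomainField" name="text"/>
          </param>
        </call>
        <!-- trigger a navigation rule towards the feedback interface -->
        <call name="Navigation.goTo">
          <param name="target">show_response</param>
        </call>
      </action>
    </rule>
  </behavior>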
Fig. 10. UI designer view of the tool.
6.1. Tool architecture
The tool presented in this work has been developed using the Eclipse platform [36,37]. We have taken advantage of the Eclipse infrastructure for the generation of applications. Furthermore, our tool has been designed to be integrated with Eclipse and to be used together with other plugins. The application follows a strict three-layer architecture (model–view–controller), which favours modularity.

In order to optimize the development of our tool, we have tried to take maximum advantage of the multitude of tools and plugins developed for Eclipse. We have chosen EMF (Eclipse Modeling Framework) [38] for the development of the UIML meta-model, and GEF (Graphical Editing Framework) [39] for the development of the graphical editor. Despite the existence of the Graphical Modeling Framework (GMF) [40], an advanced plugin that allows the full generation of graphical editors using Ecore meta-models as input, we consider that GMF is not suitable for this project. Since this authoring tool is not based on a static meta-model but can use several different vocabularies, manually developing an adaptive editor is a better solution than automatically generating a new editor for each possible vocabulary.

The tool is built as a plugin that essentially implements an editing engine and a transformation engine. This plugin offers extension points through which vocabularies, transformation rules and final code generators can be added, either by being provided from other plugins or by directly importing the model files. In this way, users can configure the environment with the elements that they want, choosing them from a central repository. Fig. 11 represents this architecture.

The main challenge for this tool comes from the requirement that the editor must be able to work with different vocabularies. Therefore, for each defined vocabulary, a drawing palette with the corresponding UI components must be available. The solution is a mechanism for the dynamic loading of palettes. The mechanism compiles some meta-data (specific to this tool) from the vocabulary model. These meta-data define, among other things, the icon to be displayed in the palette and the figure to appear in the drawing area.
Fig. 11. Eclipse-based architecture of the authoring tool.
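To illustrate how vocabularies, transformation models and generators could be contributed through the extension points mentioned above, the fragment below sketches an Eclipse plugin.xml contribution; the extension point identifiers (org.uimltool.vocabularies, org.uimltool.transformations, org.uimltool.generators), the attribute names and the referenced file paths are hypothetical, invented for this example rather than taken from the actual tool.

  <?xml version="1.0" encoding="UTF-8"?>
  <plugin>
    <!-- contribute a desktop vocabulary to the editing engine -->
    <extension point="org.uimltool.vocabularies">
      <vocabulary id="desktop" name="Desktop vocabulary" model="models/desktop.vocabulary.xml"/>
    </extension>

    <!-- contribute a transformation model from the abstract to the desktop vocabulary -->
    <extension point="org.uimltool.transformations">
      <transformation id="abstractToDesktop" source="generic" target="desktop" rules="rules/AbstractToDesktop.xml"/>
    </extension>

    <!-- contribute a final code generator for the desktop family -->
    <extension point="org.uimltool.generators">
      <generator id="javaSwing" vocabulary="desktop" templates="templates/swing/"/>
    </extension>
  </plugin>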
A dynamic mechanism has been designed to bind the properties of a component to graphic aspects of the represented figure, such as the background color or the displayed text. All this meta-information is annotated in the vocabulary model. The editor is also able to deal with vocabularies without annotations, treating them in a generic way.

7. Conclusions and future work

The availability of new authoring frameworks and tools is gradually changing the situation in the field of multi-platform user interface design. Just a few years ago, designing such systems was a large, time-consuming endeavor that could only be attempted by skilled teams with programming expertise. Our framework is aimed at allowing a development team with little training to create a meaningful set of user interfaces for the same application. Our work is currently focused on the following development lines: the extension and improvement of the conceptual
meta-model, the addition of more functionality to the application, the design and execution of usability experiments, and the definition of interoperability with other methods.

Conceptual meta-models can be extended and refined to face specific challenges. In particular, defining transformation models with graph grammar rules can be difficult and requires training. An easier transformation meta-model focused on UI vocabularies could be defined and then translated into graph grammars. We also want to improve the current vocabulary transformation process by integrating assistants so that users will be able to parameterize the transformation process. Since third-party transformation engines are currently used in this work, customization is not a simple task, and it would be convenient to develop our own engines.

In order to validate the proposal, a set of experiments is planned as future work. These experiments are intended for several user profiles (traditional UI programmers, Web designers, requirements analysts, and stakeholders) and involve several factors such as usability and development time. The quality of the generated interfaces will also be tested. All this feedback is needed to adjust future versions of the framework and the authoring tool to designers' needs.

It is also very important to outline how this UI development process will interoperate with other software generation methods. There is special interest in connecting the UI modeling with human-computer interaction models (i.e., ConcurTaskTrees, CTT [14]) so that an AUI model can be automatically obtained starting from tasks, and our semi-automatic UI generation process can then be started from this AUI model.
References

[1] Campos P, Nunes N. Practitioner tools and workstyles for user-interface design. IEEE Softw 2007;24(1):73–80.
[2] Selic B. The pragmatics of model-driven development. IEEE Softw 2003;20(5):19–25.
[3] da Silva P. User interface declarative models and development environments: a survey. In: Proceedings of DSV-IS 2000. Springer-Verlag; 2000.
[4] Ali MF, Perez-Quinones MA, Abrams M. Building multi-platform user interfaces with UIML. In: Multiple user interfaces: cross-platform applications and context-aware interfaces; 2004.
[5] Bray T et al. Extensible Markup Language (XML) 1.0. W3C recommendation; 2000.
[6] Souchon N, Vanderdonckt J. A review of XML-compliant user interface description languages. Lect Notes Comput Sci 2003;2844:377–91.
[7] Luyten K. Dynamic user interface generation for mobile and embedded systems with model-based user interface development. Transnationale Universiteit Limburg; 2004.
[8] Merrick RA, Wood B, Krebs W. Abstract user interface markup language. In: Developing user interfaces with XML: advances on user interface description languages; 2004. p. 39–46.
[9] Bullard V et al. Essential XUL programming. Wiley; 2001.
[10] Dubinko M et al. XForms 1.0. W3C recommendation; 2003.
[11] W3C XML Schema.
[12] Puerta A, Eisenstein J. XIML: a common representation for interaction data. In: IUI '02: proceedings of the seventh international conference on intelligent user interfaces. ACM; 2002.
[13] Mori G, Paternò F, Santoro C. Tool support for designing nomadic applications. In: Proceedings of the 2003 international conference on intelligent user interfaces IUI'2003. Miami: ACM; 2003.
[14] Paternò F, Mancini C, Meniconi S. ConcurTaskTrees: a diagrammatic notation for specifying task models. London, UK: Chapman & Hall, Ltd.; 1997.
[15] Vanderdonckt JM, Bodart F. Encapsulating knowledge for intelligent automatic interaction objects selection. In: Proceedings of the ACM conference on human factors in computing systems InterCHI'93. Amsterdam: ACM; 1993.
[16] Limbourg Q et al. UsiXML: a user interface description language supporting multiple levels of independence. Eng Adv Web Appl 2004:325–38.
[17] Abrams M et al. UIML: an appliance-independent XML user interface language. In: WWW '99: proceedings of the eighth international conference on World Wide Web. Elsevier North-Holland, Inc.; 1999.
[18] Booch G. Object-oriented analysis and design with applications. 2nd ed. Benjamin-Cummings Publishing Co. Inc.; 1994.
[19] OMG UML.
[20] Czarnecki K, Helsen S. Classification of model transformation approaches. In: OOPSLA'03 workshop on generative techniques in the context of model-driven architecture; 2003.
[21] Rozenberg G. Handbook of graph grammars and computing by graph transformation. World Scientific Publishing Company; 1997.
[22] MOF QVT final adopted specification; 2008.
[23] ATL project; 2008.
[24] Winter A, Kullbach B, Riediger V. An overview of the GXL graph exchange language.
[25] The AGG homepage; 2008.
[26] Csertan G et al. VIATRA: visual automated transformations for formal verification and validation of UML models; 2002.
[27] Eclipse modeling – M2T; 2008.
[28] OMG. The official MDA guide version 1.0.1.
[29] Aizenbud-Reshef N et al. Model traceability. IBM Syst J 2006;45(3):515–26.
[30] Dick J. Rich traceability. In: Proceedings of automated software engineering; 2002.
[31] Gotel OCZ, Finkelstein CW. An analysis of the requirements traceability problem. In: Proceedings of the first international conference on requirements engineering. Utrecht, The Netherlands; 1994.
[32] Walderhaug S et al. Towards a generic solution for traceability in MDD. In: Proceedings of the third ECMDA traceability workshop (ECMDA-TW); 2008.
[33] LiquidApps.
[34] Meskens J et al. Gummy for multi-platform user interface designs: shape me, multiply me, fix me, use me. In: Proceedings of the working conference on advanced visual interfaces. Napoli, Italy: ACM; 2008.
[35] Casado F et al. Una herramienta para la creación de interfaces multiplataforma con UIML [A tool for creating multi-platform interfaces with UIML]. In: VI Congreso Español de Interacción Persona-Ordenador (INTERACCION'2005); 2005.
[36] Eclipse.org home.
[37] Clayberg E, Rubel D. Eclipse: building commercial-quality plug-ins. Addison-Wesley; 2004.
[38] Eclipse Modeling Framework project.
[39] Eclipse Graphical Editing Framework.
[40] Graphical Modeling Framework; 2008.