Pervasive and Mobile Computing 3 (2007) 276–299
www.elsevier.com/locate/pmc

Task-driven automated component deployment for ambient intelligence environments

Peter Rigole a,*, Tim Clerckx b, Yolande Berbers a, Karin Coninx b

a Department of Computer Science, K.U. Leuven, Celestijnenlaan 200A, BE-3001 Leuven, Belgium
b Hasselt University, Expertise Centre for Digital Media, and Transnational Universiteit Limburg, Wetenschapspark 2, BE-3590 Diepenbeek, Belgium

Received 23 April 2006; received in revised form 8 January 2007; accepted 13 January 2007
Available online 25 January 2007
Abstract

This article presents a strategy for deploying component-based applications gradually in order to match the functionality of pervasive computing applications onto the current needs of the user. We establish this deployment strategy by linking component composition models with task models at design-time, from which a run-time deployment plan is deduced. Enhanced with a Markov model, this deployment plan is able to drive a component life cycle manager to anticipate future deployments. The result is a seamless integration of pervasive computing applications with the user's tasks, guaranteeing the availability of the required functionality without wasting computing resources on components that are not currently needed.
Keywords: Task models; Component-based software engineering; Component deployment
1. Introduction

The way computing is provided to end-users has changed drastically ever since personal computing became a term. Applications with a steep learning curve gradually became more
intuitive and started to display a behavior that is more usable to humans. Novel paradigms anticipate a continuous evolution of computing interaction towards the idea of Ambient Intelligence (AmI): a surrounding computing space that is spontaneously and proactively supporting the user's needs. Although the path towards realizing AmI is greatly unpaved and ill-lit, many research tracks are already exploring the way ahead by designing solutions for requirements set forth by this challenging vision.

This article describes a life cycle mechanism for deploying the components that compose an application or a service dynamically. Task descriptions define a deployment strategy for loading and unloading components based on the user's current needs. The strategy is resource-aware and tries to optimize the consumption of these resources so that the user's current task is best supported. Three concrete goals are set forth in this article: (1) avoiding needless consumption of resources; (2) optimizing the availability of useful application functionality; and (3) providing seamless computing support by preloading application functionality.

The first goal is to reduce the amount of allocated resources by deploying only those components that are needed for the user's current task. The second goal is to optimize the available functionality to support the user. This means that optional functionality (such as a spell checker during text editing) should be made available whenever the resources are available to do so. The third goal is to provide application functionality to the user in a seamless manner, meaning that latencies caused by transitions in the availability of the application components must be avoided or reduced as much as possible. This is realized by anticipating the most probable user choices and preloading the required components. The three goals are related in the sense that they can be obtained through a single component life cycle management strategy.

The approach presented in this article enhances the capability of pervasive computing applications that are hosted on devices that harbor a restricted set of computing resources. Resource-scarce environments benefit from the ability to change the application's deployment configuration by selectively loading and unloading its components as needed. The approach we take is based on the fact that the components in the application's design-time composition model are not instantiated for the entire duration of the application's lifetime. Instead, the composition model is mapped onto use case scenarios, which are modeled as hierarchical task trees. Each component is associated with the tasks that require its functionality. Among the subtasks in the hierarchical task tree, temporal relations can be defined that allow the task tree to be transformed into a Finite State Automaton (FSA). Each state in the FSA represents a set of components whose associated tasks are needed simultaneously by the application. In addition, the state transitions can be labeled with a probability to obtain a user-specific Markov model of the transition occurrences. This Markov model is used to reduce transition latencies by preloading the most probable components, if enough resources are available.

Our contribution presented in this article consists of a technique that allows for seamless interaction between the user and his applications on devices that are resource-constrained.
We created a task-driven life cycle manager for application components that bases its deployment decisions on a user-specific usage profile. This way, the availability of the
needed application functionality is improved, even though computing resources are limited. This task-driven deployment is particularly interesting in the field of AmI computing, where seamless integration of the user's tasks with his supporting computing space is paramount [1,2]. Not only is it inefficient to waste resources on application components that are not currently needed, it is often not even feasible to instantiate all components of a large task on the lightweight devices carried by the user. Our approach has currently been validated on a single device, although the available resource set can be extended to the resources of a distributed environment.

This article develops as follows. Section 2 introduces component model concepts that are relevant for the work in this article and presents the task and composition model. Both models are established at design-time and combined to produce the finite state automaton that is used to drive the deployment, as described in Section 3. Section 4 elaborates on the anticipation of future deployments based on Markov models. Section 5 discusses some experimental results that were obtained using the task-driven deployment mechanism. Section 6 relates our work to other work in the area of self-managing software, followed by Section 7 in which we briefly describe our future work. Finally, a conclusion is formulated in Section 8.

2. Models supporting a task-oriented design

Software design principles teach us how to develop software by means of general guidelines and abstract models. This article benefits from the existence of three types of design models: component models, task models and composition models. The use of component models is well established in software engineering [3,4]. We discuss some general concepts related to component models in Section 2.1. Task models, on the other hand, serve a narrower software engineering domain. They are very popular in human–computer interaction engineering, where they are used to describe sequences of user interaction activities. We elaborate on task models, and more specifically on the ConcurTaskTree notation, in Section 2.2. Composition models specify how a set of components has to be linked together to create a functional application. The definition of the composition model we propose is based on the existence of task models, as explained in Section 2.3. Our contribution in this article is represented by the synergy that follows from the combination of task models and composition models.

2.1. Component model concepts

Component models are available in numerous shapes and flavors and a detailed comparison is out of the scope of this article. However, it is necessary to provide a brief overview of common concepts that recur in several component models and which we assume to be available in a component model to make it suitable for the purposes we elaborate on in this article. In this section, we briefly discuss these concepts and relate them to the CORBA Component Model (CCM) [5], Fractal [6] and Draco [7].

A component blueprint specifies the syntactic and semantic definition of a component. It defines the functional behavior provided by the component once it is instantiated. Some component systems, such as Draco, also associate quality of service and synchronization
aspects with the component blueprint in order to add extra-functional characteristics [8]. In CCM, the component blueprint is a combination of a component definition and its declaration. Fractal uses the term component type and the notion of component subtypes, based on component substitutability. A component instance is the run-time instantiation of a certain component that uniquely identifies it. In this article the term component is used to denote an instantiation, unless it is clear from the context that we are talking about its blueprint.

A port is a component interaction point, or a component's interface. The CCM model has many types of ports, such as facets, receptacles, event sources, event sinks and attributes. Fractal simply defines it as a component access point that may be either synchronous or asynchronous. The Draco component model only allows reactive asynchronous ports and supports three types: single ports, multiports and multicast ports. Contingency and cardinality are two concepts that are related to ports. Contingency is a Fractal-specific concept that indicates whether the functionality corresponding to a specific interface is guaranteed to be available while the component is running. The cardinality of a port is a concept used in many component systems, but it is sometimes interpreted slightly differently. In general, it denotes the number of times a port of a component instance can be used by other components. In Fractal and Draco it is defined as the number of times a certain interface may be instantiated, while CCM specifies it as the number of port references you can obtain.

Component life cycle management deals with deploying and maintaining a component on a component middleware system. The life cycle management instance ensures that the component is instantiated in an environment where it is able to function and removes the component when it is no longer needed. CCM uses component homes to manage components, while Fractal and Draco have no life cycle support in their basic form. Only extended versions of the latter systems have support to deal with life cycle management. The task management framework described in this article provides this functionality and is generic in the sense that it is easily grafted as an extension onto existing component models.

A component composition comprises a selected set of components that are connected and configured to provide a combined functionality as specified by each component's blueprint. Both in Fractal and CCM, attributes tune a component's behavior to the specific needs of the composition. Often, the composition and configuration are stored in a configuration script that represents the component-based application. Fractal uses a system with factories and templates to instantiate components from. In this article, we define the model that specifies how a component composition should be constructed as the Composition Model. The work presented in this article tightly couples this composition model with the task model that defines the tasks the application is composed of.

In order to describe our approach in a generic way, independent of a specific component model, we introduce a simplified representation of components, as illustrated in Fig. 1(a). A component is represented by a rectangle and its ports by smaller protruding rectangles at the edges of the component. The component's name can be tagged inside the component and port names may be shown next to the port at the outer side of the component.
Connectors between ports are represented by a line. We ignore the notion of port types because they impose no limitation on the applicability of our approach. The life cycle management of an entire application is handled by a dedicated component. Having a component handle this life cycle management is one possible decision that was made for clarity; it may be dealt with differently in other component architectures. The life cycle management component is distinguished by the symbolic character inside the component, as shown in Fig. 1(b).
Fig. 1. A simple component representation: (a) two connected components; (b) a life cycle management component.
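To make this simplified representation concrete, the following is a minimal sketch of how components, ports and connectors could be encoded. All class, port and instance names are illustrative assumptions made for this sketch; they are not taken from CCM, Fractal or Draco.

```python
# Minimal sketch of the simplified component representation of Fig. 1.
# Names are illustrative assumptions, not part of any existing component model.

class Port:
    """A component interaction point, drawn as a small protruding rectangle."""
    def __init__(self, component, name):
        self.component = component
        self.name = name

class Component:
    """A component instance, drawn as a rectangle with its name tagged inside."""
    def __init__(self, name, port_names=(), lifecycle_manager=False):
        self.name = name
        self.ports = {p: Port(self, p) for p in port_names}
        # The dedicated life cycle management component of Fig. 1(b).
        self.lifecycle_manager = lifecycle_manager

class Connector:
    """A line between two ports of connected components."""
    def __init__(self, port_a, port_b):
        self.ends = (port_a, port_b)

# Fig. 1(a): two connected components (hypothetical port names).
producer = Component("A", port_names=("out",))
consumer = Component("B", port_names=("in",))
wire = Connector(producer.ports["out"], consumer.ports["in"])
manager = Component("LifeCycleManager", lifecycle_manager=True)   # Fig. 1(b)
```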
2.2. Task model

The task model in model-based user interface development

The purpose of using models in software engineering is to obtain a good insight into the application without implementing it. An Interface Model covers all the aspects concerning a user interface and structures the human–computer interaction (HCI) of an application. It can be seen as a collection of specific models that all together describe the user interface. Each of these models covers some aspects of the user interface [9], such as the domain, the application, the user's and system's tasks, the dialog between the user and the system, the presentation, the platform, and the user.

According to A. Dix, "A task is an operation to manipulate the concepts of a domain" [10, p. 125]. This means tasks are the access points towards making changes in the application's domain layer. A task model is a description of the tasks a user may encounter when trying to accomplish a certain goal by using the application. As such, the model describes a valid sequence of actions a user can perform when interacting with the application. Applied to user interface design, these tasks are those a user has to perform to interact adequately with the user interface,1 or those to be performed by the system.2 [11, p. 3] gives an overview of the five benefits of a task model. They consist of (1) understanding the application domain, (2) representing interdisciplinary results for people related to HCI, (3) integrating user requirements in the development process of the system, (4) providing a measurement to analyze and evaluate the user interface and compare it to the postulated requirements, and (5) supporting the end user as a manual.

1 Such as editing in a text field, pressing a button, etc.
2 For example, to show the results of a query.
Fig. 2. Levels of abstraction in a task model. The top level describes the task in its most abstract form, while the leaf tasks represent the concrete user interaction widgets. The domain layer is situated in between.
In this article we introduce another area in which a task model is useful: the deployment of component-oriented applications.

The top level of a hierarchical task model is the most abstract representation of the user's task. For example, a description of an abstract representation on this level could be called 'buy something in a store'. Tasks at a lower level in the hierarchy are more concrete and provide a more detailed description of the user's main task. At the bottom of the model, concrete tasks close to the keystroke level (low-level tasks such as keystrokes and mouse moves/clicks) are situated. In the view presented in this article, the levels in between are a representation of domain layer task concepts, as illustrated conceptually in Fig. 2. Tasks at an intermediate level between the abstract task description and the keystroke level in the hierarchical task model are interesting because they can be used for the analysis of the problem domain. The task model examples used in the remainder of this article are presented without keystroke level tasks: they describe low-level details about interacting with the system, are therefore not relevant to the focus of this article, and would reduce the readability of the presented task models.

The ConcurTaskTree notation

The ConcurTaskTree (CTT) notation is a formal method commonly used for modelling the human–computer interaction of interactive applications at the task level. It was first presented in [12] and is extensively discussed in [11]. Its advantage compared to other notations for expressing task models is the graphical representation. As will become clear, the syntax and semantics are easy to learn and do not require knowledge of, or experience with, formal languages such as LOTOS [13]. However, a task specification using the ConcurTaskTree notation can be transformed into a LOTOS statement. In this way the advantage of a formal specification for correctness assessments still applies.

Fig. 3 shows an example of how the notation allows tasks to be specified hierarchically. The nodes of the tree represent the tasks. If a task can be divided into subtasks, they are represented by the children of the task. The ConcurTaskTree notation distinguishes between four types of tasks: user tasks are performed entirely by the user without interaction with the system, application tasks are performed entirely by the application without interference by the user, interaction tasks are tasks where interaction between the user and the system takes place, and abstract tasks are tasks that need to be refined by dividing them into subtasks. Tasks can also be specified to be executed in several iterations.
Fig. 3. Example of a task model using the ConcurTaskTree notation.
Table 1
Temporal operators connecting sibling tasks in a task specification

[]      Choice
|=|     Order independence
|||     Independent concurrency
|[]|    Concurrency with information exchange
[>      Disabling
|>      Suspend/resume
[]|>    Suspend/resume with information exchange
>>      Enabling
[]>>    Enabling with information exchange
Operators are used to express how tasks are related to each other over time. Table 1 gives an overview of these temporal operators and their meaning. These operators are introduced in [11]. We added the operator []|>, which expresses that information can be interchanged between the suspended and the newly enabled task. These temporal relations are particularly interesting in this article as they specify the order in which tasks are to be executed. The notation also includes operators to specify optional tasks and iterative tasks. Optional tasks are placed between square brackets and iterative tasks are marked with an asterisk. The operators in Table 1 are ordered according to the priority of the operator in a specification. The choice operator [] has the highest priority, the enabling operators >> and []>> the lowest.

To illustrate the notation we introduce a case study which we will also use in the examples and experiments throughout this article. The problem domain of the case study
Fig. 4. Example of a task model using the ConcurTaskTree notation that elaborates on the perform traffic intervention task of Fig. 3.
is police intervention in a traffic situation where a police officer works with a PDA. Imagine the following scenario. A police officer drives around a city in order to spot traffic violations. When the officer is driving the law enforcement vehicle, the PDA is used to visualize the speed of cars in front of the vehicle, with the possibility to record the view of the officer. At the moment the officer spots a traffic violation, he can stop the traffic violator to call him to order. The law enforcement officer has the possibility to show the traffic violator the violation, and he can process the violation by giving him a warning or writing him a violation ticket. At any time, the patrol can be interrupted by an emergency call.

To make the task specification more readable, we have divided the task model into three parts. The first part (Fig. 3) describes the main task of the task specification, the second part (Fig. 4) refines the "Perform Traffic Intervention" subtask of the main task, and the third part (Fig. 6) refines the "Process Violations" subtask of the "Perform Traffic Intervention" task.

Fig. 5 shows a component composition that implements the functionality that is required in the police intervention scenario. The SpeedMeter component sends its output to the MediaRecorder component that is recording the violation and to an Editor component that is used by the police officer to process the violation ticket. The Query component provides access to the database through the Database component in order to let the police officer check the driver's documents. The document that describes the violation can be spell checked by the SpellChecker component, it can be printed by the Printer component, and it can be transmitted to the police's base server through a secure connection by the DocumentTransmitter component. The recording of the violation can also be shown by using the MediaPlayer component. The recording is stored and retrieved through the MediaContainer component.

The composition contains a number of components that are not really necessary to let the user do his tasks, but that can be helpful. We call these components optional components. Examples are the SpellChecker component and the MediaPlayer component.

The notation used in the main task model in Fig. 3 serves as a good example to get familiar with the task notation. The main task "Perform Patrol" consists of three subtasks:
Fig. 5. A component composition that implements the functionality for the police intervention scenario.
Fig. 6. The task model that expresses the details of the process violations subtask of the task model depicted in Fig. 4.
"Patrol", "End Patrol", and "Respond To Emergency". Because the choice operator [] has a higher priority than the disabling operator [> (according to the rank in Table 1), the iterative task "Patrol" will be disabled by either the "End Patrol" or the "Respond To Emergency" task. The "Patrol" task consists of two tasks: "Detect Violations" and "Perform Traffic Intervention". Because the "Patrol" task is marked to be an iterative one, the task will start over after the execution of the "Perform Traffic Intervention" task. In the same way, tasks are refined in Figs. 4 and 6.

2.3. The composition model

The composition model is the artifact that specifies how a component composition should be constructed. In most component systems the composition model is a static entity that specifies how a specific application is built from a set of components. It defines which
components should be instantiated from their blueprints, how they should be configured and how they should be interconnected in order to establish the application's functionality. The composition model is often seen as a script that is used to start the application.

2.3.1. Component–task associations

In our approach, we associate the components of a composition model with tasks in a task model. This way, a relation is created between components and the tasks they are needed for. The solution we present uses the ConcurTaskTree notation, which can be seen as a high-level model for the underlying domain layer. It is possible to deduce a finite state automaton, called the dialog model,3 from a CTT task tree that contains several application states. Each of these states represents a set of tasks that are simultaneously needed at a specific moment during the lifetime of the application. When an application is in a specific state, the tasks that are not associated with that state are irrelevant. Since our composition model associates components with each task, the dialog model also represents lifetime information for each component the application is composed of. In other words, the order in which domain functionality may be needed can be deduced from the task model, allowing it to be a guideline for efficient component life cycle management.

3 The dialog model is further discussed in Section 3.1.

2.3.2. Integrating task and composition model

The integration of the composition model with the task model is performed during the application's design process. It is important to note that the task model and the components could have been developed separately, often by different designers. For example, off-the-shelf components could be assembled and mapped onto the task model.

There are two important goals to keep in mind during the design process of the task-based composition model. The first goal is to put components or sub-compositions as low as possible in the task tree so that their use is constrained to be as task-specific as possible. Putting a component higher up implies that the component is needed for a larger set of tasks (for every subtask of the task it is associated with). The second goal consists of avoiding connectors between sub-compositions as much as possible, as these connectors introduce dependencies that increase the number of components needed to realize a specific task. The constraints imposed by the temporal operators on the connectors between sub-compositions are further explained in Section 4.

Components are associated with tasks in the CTT notation as follows. Since the life cycle management component needs to be available at all times, it is associated with the root task in the ConcurTaskTree. Other components that need to be available during the whole task are attached to the root task as well. From that task on, subtask-specific functionality is associated with the child tasks by means of sub-compositions. These sub-compositions are only a part of the entire composition the application is built from. Sub-compositions may be connected with the sub-compositions associated with their ancestor and child tasks. The existence of connectors between compositions associated with sibling tasks (sibling compositions in short), however, is constrained by the temporal relations defined between the sibling tasks. As a result we can divide the temporal operators (used
Fig. 7. A composition model implementing the functional core for the combined task model that is shown in Figs. 3, 4 and 6. Sub-compositions are associated with the task they implement.
in the task specification) into two categories: operators that do not allow connectors between sibling compositions and operators that do require connectors. The first category, prohibiting connectors between sibling compositions, consists of: choice ([]), independent concurrency (|||), enabling (>>), order independence (|=|), disabling ([>) and suspend/resume (|>). Operators that require connectors between sibling compositions are: concurrency with information exchange (|[]|), enabling with information exchange ([]>>) and suspend/resume with information exchange ([]|>).

3. Task-driven deployment

Up to now, we discussed the use of models that support a task-oriented application design. These models are engineered by application designers and have no direct run-time use. They are, however, used to deduce a dialog model that supports life cycle managers at run-time. The result is a task-driven deployment plan, whose purpose is to describe which components are needed as the user's tasks progress. This way, a life cycle management scheme can be followed that enables efficient use of computing resources by incrementally molding the available functionality onto the user's requirements.

3.1. The dialog model

The dialog between the user and the system is described in the dialog model. This means a dialog model describes how the user interface reacts when the user or the system performs an action. This can vary from a transition to another state (dialog) in the user interface to the invocation of an application function. The dialog model in our approach is represented by a state transition network (STN) [14]. In the STN, the states are collections of tasks that are enabled during the same period of time, also called an Enabled Task Set [11]. Transitions to other states are labelled
Fig. 8. The dialog model corresponding to the task model depicted in Fig. 6. Each transition is annotated with the probability of a specific user choosing the path towards the next state.
with the tasks that trigger the transition to the other state. Because a task model is provided by the designer and, in previous work, we have developed an algorithm to extract an STN from a task model [15], it is possible to obtain the dialog model offline. This algorithm inspects the temporal relations between the tasks in order to deduce how the tasks can be performed over time. In this way an STN is constructed following the constraints of the task model. Fig. 8 shows a part of the dialog model deduced from the task model of the case study we discussed. Each state si contains a group of tasks and directed connections to other states, labeled with the task triggering the transition.

3.2. Towards a task-driven deployment plan

The purpose of a task-driven deployment plan is to describe which components are needed as the user's tasks progress. The resulting incremental deployment allows for a more efficient use of resources. In Fig. 7, a composition model is depicted that implements the combined task model of Figs. 3, 4 and 6. The root task, Perform Patrol, contains the life cycle manager component PatrolManager and the EmergencyManager component, which are both available during the entire task.

Connectors are always allowed between sub-compositions in an ancestor–descendant relation. Sibling sub-compositions, on the other hand, may be limited by their temporal operators. Sub-compositions that have no direct ancestor–descendant or sibling relation may have connectors between them if their connecting shortest path through the task tree only passes relations that allow connectors. For example, the task Measure Speed and the task Perform Traffic Intervention may have connectors between their associated sub-compositions because Measure Speed is related to Perform Traffic Intervention through an
ancestor–descendant relation followed by an enabling with information exchange relation. However, the sub-composition associated with Give Warning cannot be connected with the sub-composition associated with Write Violation Ticket because of the choice relation between both tasks.

Component MediaContainer occurs twice in Fig. 7 because it implements the functionality that is needed both for task Record Video and task Show Violation Video. As mentioned before, the component name (MediaContainer) refers to a component instance and as such, both occurrences may refer to the same instance. In our composition model, both occurrences do not necessarily have to refer to the same instance because of the temporal relations expressed in the model. Both occurrences are realized by the same instance only when the life cycle manager anticipates that the component will be needed in the near future and decides to keep the component alive. Obviously, proper rewiring of the connectors attached to the component needs to be performed as well, as specified in the composition model.

A component deployment algorithm

Deployment timing for each sub-composition can easily be deduced from the dialog model as it collects the tasks that are enabled during the same period of time in an enabled task set. The following algorithm4 determines the components that are related with a specific state s in the dialog model (δτ) associated with task model τ. Let T be the set containing all tasks in the task model τ, let S and C initially be empty sets, let A(t) be the set of all ancestors of task t, let D(t) be the set of all descendants of task t, and let ∗t be the set of components directly associated with task t:

(1) collect all tasks attached to state s in S;
(2) for each element t in S, unite all elements in A(t) with S;
(3) for each element t in T, unite t and all elements in A(t) and D(t) with S if t is related with an element in S by a []>> or []|> temporal relation;
(4) repeat step (2) until no more tasks are added to S;
(5) for each t in S, unite ∗t with C.

C now contains the requested components. S is defined as the task superset of state s, containing all tasks that are relevant in that state. Let ⇑ be the operator that retrieves the task superset from s, or S = ⇑s. Based on the definitions above, we can summarize the result of the algorithm, ∼, as:

$$C = \sim(s) = \bigcup_{\forall t_i \in S \mid S = \Uparrow s} \ast t_i \quad \text{with } s \in \delta_\tau. \tag{1}$$

Note that step (3) involves all temporal relations with information exchange, except the temporal relation |[]|, because parallelism already involves both tasks being contained within the same dialog model state since they are enabled during the same period of time.5

4 Expressed in pseudo-language, not to confuse with set theory.
5 See the definition of an Enabled Task Set in Section 3.1.
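As an illustration of the five steps above, the following is a minimal sketch of the algorithm in Python over a simple task-tree representation. The Task class, the tuple encoding of temporal relations and all identifiers are assumptions made for this sketch; they are not part of any component model or framework discussed in this article.

```python
# Sketch of the task-superset algorithm (steps (1)-(5) above); illustrative only.

INFO_EXCHANGE_OPS = {"[]>>", "[]|>"}   # enabling / suspend-resume with information exchange

class Task:
    def __init__(self, name, components=(), parent=None):
        self.name = name
        self.components = set(components)        # *t: components directly associated with t
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):                         # A(t)
        result, t = set(), self.parent
        while t is not None:
            result.add(t)
            t = t.parent
        return result

    def descendants(self):                       # D(t)
        result = set()
        for child in self.children:
            result |= {child} | child.descendants()
        return result

def components_for_state(state_tasks, all_tasks, temporal_relations):
    """state_tasks: the enabled task set of a dialog model state.
    temporal_relations: (task_a, operator, task_b) tuples from the task model."""
    S = set(state_tasks)                                         # step (1)
    while True:
        size_before = len(S)
        for t in list(S):                                        # step (2)
            S |= t.ancestors()
        for t in all_tasks:                                      # step (3)
            for a, op, b in temporal_relations:
                if op not in INFO_EXCHANGE_OPS:
                    continue
                if (t is a and b in S) or (t is b and a in S):
                    S |= {t} | t.ancestors() | t.descendants()
        if len(S) == size_before:                                # step (4): fixpoint reached
            break
    C = set()
    for t in S:                                                  # step (5)
        C |= t.components
    return C                                                     # the component set of Eq. (1)
```

The relaxed variant discussed further on simply skips steps (2) and (3).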
Fig. 8 depicts the dialog model for the task model represented in Fig. 6. From this dialog model and the composition model of Fig. 7, we can deduce (using the relaxed algorithm)
the following association between dialog model states and application components:

state s0: PatrolManager, EmergencyManager, Editor and optionally MediaPlayer and MediaContainer;
state s1: PatrolManager, EmergencyManager, Editor, DocumentTransmitter and optionally MediaPlayer, MediaContainer and SpellChecker;
state s2: PatrolManager, EmergencyManager, Editor and optionally MediaPlayer and MediaContainer;
state s3: PatrolManager, EmergencyManager, Editor, Printer and optionally MediaPlayer and MediaContainer.

Relaxation of the component deployment algorithm

As stated before, the overall goal of our task-driven component deployment approach is to reduce the number of components needed at a given point in time while still providing the user with all the functionality he needs. Unfortunately, step (3) of the algorithm harbors the danger of causing the task superset to explode in size. Temporal relations with information exchange introduce dependencies that tightly couple the components in the composition, leaving less space for deployment optimization. To deal with this problem, we can limit the communication semantics used between components that are associated with tasks that have temporal relations between them that involve information exchange (such as []>> and []|>). Two communication styles may be employed. A push-style method involves the first in a sequence of tasks pushing its output onto the next task before finishing. In the pull-style approach the next task in a sequence of tasks pulls the needed output out of the previous task as needed. If information exchange from the previous to the next task is agreed upon to finish when the previous task finishes, then the previous task becomes needless once the next task has started. When the push-style communication semantics is used between components associated with successive tasks, then steps (2) and (3) in the algorithm can be omitted, which drastically reduces the temporal dependencies between sub-compositions in the deployment plan. As a consequence, the relaxation of the algorithm allows a more optimized deployment plan.

4. Anticipating need-based component deployment

The previous section described how the set of components needed in a specific application state can be deduced from the dialog model. Dialog state transitions, however, often trigger the deployment plan to change towards offering new functionality, causing previously unavailable components to be deployed. Unfortunately, seams in the availability of the application are introduced when this process causes salient latencies. This section introduces a dynamic prediction mechanism that reduces these seams whenever possible.

Our approach realizes prediction of near-future component deployment requirements on a user-specific basis. For this purpose, a profile based on historical information collected during previous uses of the application is created. Each state in the dialog model maintains a probability function that specifies the chances of the user choosing a particular transition
to a next state. This discrete function is updated each time the user makes a choice from within that state. Fig. 6 depicts an example task model whose dialog model is illustrated in Fig. 8. The dialog model is annotated with the probabilities mentioned above in underlined font. As expected, the probabilities of all outgoing transitions of each state sum up to one. We define the transition probability function as follows:

$$P_{tr}(s_i \mid s_j) = \text{the chance of state } s_i \text{ being reached after the user triggers the next transition, given our current state } s_j. \tag{2}$$
In the following sections we discuss how we can adapt Ptr and design a method to anticipate future deployment needs.

4.1. Updating the transition probability function

The initial value of Ptr(si|sj) is 1/n, where n is the number of transitions leaving state sj. A simple way to update Ptr(si|sj) would be to count how often state si is reached from state sj and divide this number by the total number of choices that were taken from state sj. This updating method, however, considers each new occurrence as important as all previous choices that were made, meaning that history is as important as the present. This consideration conflicts with the evolution of the user's mental model (the user's view upon the system) over time. As the user experience evolves over time, the user's mental perception, and thus the way the user interacts with the system, changes as well. As a consequence, the transition probability function Ptr should be updated in a way that weights recent user actions considerably more than older history. In order to consider history less important than recent samples, we use a single exponential smoothing method so that past observations are weighted with exponentially decreasing weights. This implies the following strategy for updating Ptr:

$$\forall s_i \text{ accessible from } s_j:\quad P_{tr}(s_i \mid s_j) = \alpha x_i + (1 - \alpha) P'_{tr}(s_i \mid s_j), \tag{3}$$

where P'tr represents the old probability and xi the value for the choice taken in state sj with respect to state si. xi is either zero or one: xi = 1 means that si was chosen as the next state, xi = 0 means it was not. α is a real number between 0 and 1 that denotes the speed of favoring recent observations over those in history.

The following example clarifies the updating strategy. Imagine that user actions trigger a transition from state s0 in Fig. 8 (having three reachable states s2, s3 and s4) to state s3. Then the following update is processed with α = 0.1:

i    P'tr(si|s0)    xi    Ptr(si|s0)
2    0.4            0     0.36
3    0.5            1     0.55
4    0.1            0     0.09
Notice that this method maintains the property that the sum of all outgoing probabilities equals one. α is chosen rather high here to make the effect obvious. It represents a strategy that quickly adapts to the user’s current behavior, which may be useful for inexperienced users that are still learning to use the application. As the user becomes more experienced, the value for α may be decreased to reach a more stable probability function.
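As a small illustration of Eq. (3), the following sketch reproduces the update of the example above; the function name and the dictionary encoding are assumptions made for this sketch, not part of the framework described in this article.

```python
# Sketch of the exponential smoothing update of Eq. (3); illustrative only.

def update_transition_probabilities(p_tr, chosen_state, alpha=0.1):
    """p_tr maps each state reachable from the current state to its probability;
    chosen_state is the state the user actually moved to."""
    return {
        state: alpha * (1.0 if state == chosen_state else 0.0) + (1.0 - alpha) * p
        for state, p in p_tr.items()
    }

# The example above: from s0 (reachable states s2, s3, s4) the user moves to s3.
p_from_s0 = {"s2": 0.4, "s3": 0.5, "s4": 0.1}
print(update_transition_probabilities(p_from_s0, "s3", alpha=0.1))
# {'s2': 0.36, 's3': 0.55, 's4': 0.09} -- the outgoing probabilities still sum to one.
```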
4.2. Anticipating future deployment needs

The probability function can be summarized in a matrix notation, which is also known as a Markov model. The Markov transition matrix Tp matching the dialog model in Fig. 8 is illustrated below.6 Element (i, j) in the matrix7 represents Ptr(sj|si). Although it is clear that the sum of the elements in a row should add up to one [16, Eq. (3.6)], a state without a transition to a next state8 (e.g. state s4) implies a row with zeros in the transition matrix. This imposes a problem for our state prediction as no decision can be made for the next state. To resolve this, we add to every terminal state a dummy returning transition with transition probability equal to one, stating that we stay in this state forever. Such a transition is visualized with a dashed transition line, as depicted in the dialog model in Fig. 8.

$$
T_p = \begin{pmatrix}
0   & 0   & 0.4 & 0.5 & 0.1 \\
0.7 & 0   & 0   & 0   & 0.3 \\
0   & 0.9 & 0   & 0   & 0.1 \\
0.6 & 0   & 0   & 0   & 0.4 \\
0   & 0   & 0   & 0   & 1
\end{pmatrix}.
$$

6 For reasons of clarity, we do not consider the dialog model of the whole task tree.
7 In (i, j), i and j identify the rows and columns respectively.
8 Which is a terminal state.
Imagine the current state is s1. Following Tp, state s0 is the most likely state to follow next. As a result, the components that are most likely to be needed are those associated with the tasks in this state. The probability of a specific component ci needing to be deployed after state sj is:

$$P_{tr}(c_i \mid s_j) = \sum_{\forall s_l \mid c_i \in \sim(s_l)} P_{tr}(s_l \mid s_j). \tag{4}$$
This result is, however, only valid for the immediate future, i.e. exactly one transition away from the current state. Although the immediate future is very important, it may often be more realistic to consider a larger horizon, especially when nearby states are likely to reoccur because of loops in the transition graph. A larger horizon will often avoid the removal of components that will be needed again shortly in such cases. An interesting feature of our probability matrix is that we can easily calculate the probabilities of reaching state si from state sj in exactly two transitions by multiplying the matrix by itself. For this fact, we rely on the theory of Markov chains [16]. Obviously, the property that all rows add up to one is maintained. For example, Tp squared makes:

$$
T_p^2 = \begin{pmatrix}
0.30 & 0.36 & 0    & 0    & 0.34 \\
0    & 0    & 0.28 & 0.35 & 0.37 \\
0.63 & 0    & 0    & 0    & 0.37 \\
0    & 0    & 0.24 & 0.30 & 0.46 \\
0    & 0    & 0    & 0    & 1
\end{pmatrix}.
$$
The same property holds for Tp^3, Tp^4, ..., Tp^n. As a consequence, a probability horizon of four steps is calculated by weighting the sum of the probability matrices that define steps one to four. Equally weighted, this makes (Tp + Tp^2 + Tp^3 + Tp^4)/4, which we denote Tp^{≤4} in short. Other weights can be used in this linear combination when one wants to consider the near future more important than transitions further away, or vice versa. Tp^{≤n} basically defines the probability that a specific state follows a given state within the next n transitions. Tp^{≤4} matching Fig. 8 results in the following probability matrix:

$$
T_p^{\leq 4} = \frac{T_p + T_p^2 + T_p^3 + T_p^4}{4} =
\begin{pmatrix}
0.1605 & 0.1170 & 0.1552 & 0.1940 & 0.3733 \\
0.2716 & 0.0630 & 0.0910 & 0.1137 & 0.4607 \\
0.2047 & 0.2817 & 0.0630 & 0.0788 & 0.3718 \\
0.2328 & 0.0540 & 0.0780 & 0.0975 & 0.5377 \\
0      & 0      & 0      & 0      & 1
\end{pmatrix}.
$$
State s3, for example, has a large probability of ending up in state s4 within the next four transitions, while states s1, s2 and s3 are much more unlikely to be on the future path. Given the transition probability function, we can express the following deployment probability function:

$$P_{tr}^{\leq n}(c_i \mid s_j) = \sum_{\forall s_l \mid c_i \in \sim(s_l)} P_{tr}^{\leq n}(s_l \mid s_j). \tag{5}$$

This equation expresses the probability of having to deploy component ci within a transition horizon of n steps given that the current state is sj. A life cycle manager using this information can easily select the components with the best deployment probability to deploy next.
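As an illustration, the following sketch shows how a life cycle manager could compute the horizon matrix and the deployment probability of Eq. (5). It uses plain Python lists (indices 0–4 correspond to states s0–s4); the function names are assumptions made for this sketch and not part of the DynaMo-AID framework discussed below.

```python
# Sketch of the n-step horizon matrix Tp^{<=n} and the deployment probability
# of Eq. (5); dependency-free and illustrative only.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def horizon_matrix(t_p, n, weights=None):
    """Weighted sum of Tp, Tp^2, ..., Tp^n (equal weights by default)."""
    size = len(t_p)
    weights = weights or [1.0 / n] * n
    power, total = t_p, [[0.0] * size for _ in range(size)]
    for w in weights:
        total = [[total[i][j] + w * power[i][j] for j in range(size)] for i in range(size)]
        power = mat_mul(power, t_p)
    return total

def deployment_probability(horizon, component_states, current_state):
    """Eq. (5): sum the horizon probabilities of all states whose task superset
    needs the component; component_states lists those state indices."""
    return sum(horizon[current_state][s] for s in component_states)

# Transition matrix of Fig. 8 (states s0..s4, s4 with its dummy self-loop).
T_p = [[0.0, 0.0, 0.4, 0.5, 0.1],
       [0.7, 0.0, 0.0, 0.0, 0.3],
       [0.0, 0.9, 0.0, 0.0, 0.1],
       [0.6, 0.0, 0.0, 0.0, 0.4],
       [0.0, 0.0, 0.0, 0.0, 1.0]]

T_le_4 = horizon_matrix(T_p, 4)
# Example: probability that the Printer component (needed only in state s3)
# must be deployed within four transitions when the current state is s0.
print(round(deployment_probability(T_le_4, component_states=[3], current_state=0), 4))  # 0.194
```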
5. Experimental results

In this section we discuss some experimental results we obtained by integrating the methodology discussed in the previous sections into the DynaMo-AID framework [17]. The DynaMo-AID framework supports the development of context-aware user interfaces and consists of (1) a design tool that facilitates the specification of task and composition models and (2) a run-time architecture that uses dialog models to drive the interaction with the user. The run-time architecture is extended with the Markov chain enhanced dialog models and the task-driven composition models presented in this article. The Markov model probabilities are progressively updated with logging information of the user's actions.

5.1. Test scenario

The example traffic scenario that we presented earlier in this article was used in an experimental set-up in which we illustrated the advantages of our approach. Using the DynaMo-AID framework we simulated a realistic path through the finite state automaton represented by the dialog model. This path contains several cycles and represents the workflow history of a traffic agent over a certain timeline. The component memory footprints are depicted in Fig. 7. For the ease of calculations, and without loss of generality, we rounded the memory size of a larger component to a natural multiple of the smallest component. During the evolution of the agent's workflow, the different states of the dialog
model are visited according to the agent's usage pattern and the probabilities associated with each state transition are updated.

5.2. The experiment

In the experiment we tested two life cycle management strategies that were led along the same path through the dialog model. The first strategy loads the components at request only, without prediction, while the second strategy uses the prediction strategy that looks four steps ahead, thus using Ptr^{≤4}. The second life cycle manager's preloading algorithm has been tested twice, once with an inexperienced Markov model (equal probabilities on each transition) and once with a Markov model that had the chance to build up realistic probabilities from experience. Each of those experiments with the second life cycle management strategy has been performed once on a (single) device that had 1000K of memory available and once on a device that had 1250K of memory available for the application.

The total amount of memory needed by the application is 1450K. Thanks to our life cycle management strategy, only 1000K is needed for the application to run. This is because the state in the application's dialog model that needs the most memory needs 1000K, including optional components.

The goal of the experiment is to evaluate the effect of just-in-time loading of components versus component preloading. The idea is that just-in-time loading may introduce latencies because the user has to wait for the components the system needs to load to perform the user's current tasks. Preloading, on the other hand, reduces these latencies because the components are already ready to be used during state transitions. Obviously, preloading can only be performed when enough memory is available to load additional components.

5.3. Results

Table 2 presents the number of 50K-blocks that were loaded just-in-time for each transition during the experiment. The column at the left hand side of the table lists the state transitions that occurred during the experiment. The second column shows the number of occurrences of each transition. The state numbers match the numbers of the states shown in Fig. 8, although Fig. 8 does not depict all states that occur in the test case. The remaining columns represent the number of 50K-blocks that were loaded just-in-time during the experiment. For example, state transition 2 → 1 occurs 12 times and each occurrence involves loading six 50K-blocks in the No prediction column. If we look at state transition 2 → 1 (see Fig. 8), we see that three new tasks have to be instantiated in state s1: Write Violation Report, [Perform Spell Check] and Send Report. These tasks need two components that were not needed in state s2: SpellChecker and DocumentTransmitter (see Fig. 7). Both components add up to a 6 × 50K memory requirement, hence the factor 6 in the No prediction column.

The first experiment, labeled No prediction, loads all components just-in-time without preloading. The result is straightforward: a total of 84 50K-blocks need to be loaded during the application's state transitions. Experiment 1 uses preloading based on an inexperienced Markov model. The first test with this set-up was performed in an environment that
Table 2
The number of just-in-time loaded components expressed in 50K-blocks

Transitions   Occurrences   No prediction   Experiment 1         Experiment 2
                                            1000K     1250K      1000K     1250K
0 → 3         1             1               1         0          1         0
0 → 4         1             0               0         0          0         0
1 → 0         12            0               0         0          0         0
2 → 1         12            6               6         1          0         0
3 → 0         1             0               0         0          0         0
5 → 6         1             0               0         0          0         0
5 → 2         12            0               0         0          0         0
6 → 7         1             2               0         0          0         0
7 → 8         1             0               0         0          0         0
8 → 0         1             9               0         0          0         0

50K-blocks                  84              73        12         1         0
Total                       4200K           3650K     600K       50K       0K
provides 1000K of available memory, which is the lower boundary for executing the application. This memory limitation implies that only those states that do not entirely allocate the 1000K memory space can profit from component preloading. The result is that 73 50K-blocks are still loaded just-in-time, which is not much better than the No prediction experiment. The second test we performed with the inexperienced life cycle manager was done in an environment that provides 1250K of available memory, which is still less than the amount of memory the whole application would need if its components were instantiated together. Here, the advantages of preloading become clearer. The number of components that needed to be loaded just-in-time is drastically reduced, dropping the number of allocated memory blocks from 73 to 12.

The right hand side columns in the table illustrate preloading with the same life cycle manager, which had the chance to update the Markov model for a specific usage pattern (i.e. how a specific user uses the application). The usage pattern in our case is based on a traffic agent who writes a lot of violation tickets instead of giving warnings. The results are remarkable compared with the inexperienced Markov model. Only one component still had to be loaded just-in-time in the 1000K environment, while no component had to be loaded just-in-time in the 1250K environment.

5.4. Concluding remarks on the experiment

Just-in-time component loading allows life cycle managers to use system memory carefully because only those components that are currently needed have to be instantiated. On the other hand, component preloading based on well-considered anticipation of user needs enables life cycle managers to minimize the transition times between the states of the application's dialog model. In other words, the task-driven approach we present in this article enables life cycle managers to deal with memory as a system resource in a flexible manner that can cope with memory scarceness and still optimize the availability of application functionality to the user.
6. Related work

In the literature, little work is presented on task-driven computing and task-driven deployment of components or services in pervasive computing environments. Our work, however, can be categorized in the domain of self-management and self-adaptive software.

A survey on self-management in dynamic software architecture specifications is presented in [18]. The task-driven component deployment mechanism presented in this article is a life cycle management technique that performs self-management with respect to the deployment configuration of the application. It makes deployment decisions based upon a state transition network that is generated once the application has been composed from its components. Our approach relates to the graph-based approaches that are discussed in [18], where components are graph nodes and connectors graph edges. Typically, graph-based approaches use graph rewriting rules to perform architectural reconfiguration of the application (i.e. changes in the composition). We proposed a different approach to perform self-management: selective deployment. Our approach does not exclude graph rewriting, as long as rewriting only involves replacing components with equivalent versions. When the shape of the composition graph is changed, a new mapping to the given task model may be required and a new dialog model would have to be generated, which is currently done at design-time only in our solution. Other approaches that realize self-management typically use process algebras or first order logic to decide upon the adaptation actions that have to be taken. Compared to those, our approach benefits from its simplicity: once the component composition to task model mapping is performed, no complex rules have to be created to enable self-management. Furthermore, our approach adapts the deployment plan with regard to choices made by the human user acting in the ambient intelligence environment.

In [19], six general characteristics of self-adaptive software are discussed and a spectrum of self-adaptability based on the complexity of the decision making process is presented. The first characteristic (i) is the condition upon which adaptation occurs. The condition for adaptation (i.e. a change in the deployment configuration) in our task-driven deployment solution is determined by the user's interaction with the application, which drives the dialog model to transit between its states. The second characteristic (ii) specifies whether the application is open- or closed-adapted. Our solution takes its decisions in isolation, independently from other running applications, and, as such, categorizes as closed adaptation. Even though the changes are driven by the interaction of the user with the application, the decision taking process with respect to changes in the deployment configuration is performed completely without a human-in-the-loop, which determines the type of autonomy (iii) of self-adaptive software. The frequency of adaptation (iv) in our solution is dependent on the application's dialog model, the user's familiarity with the application and the amount of resources available on the device the application is deployed on. The cost-effectiveness (v) of task-driven deployment is based on a best-effort assumption. When resources are low, and only a selection of the application components can be deployed, the solution we present tries to optimize the number of simultaneously deployed components.
Finally, the information type and the information accuracy (vi) are purely determined by the application's dialog model and the path through the dialog model that follows from the user's interaction with the application. By nature, this information is 100% accurate. In the spectrum of self-adaptability (as presented in [19]), our task-driven solution in its basic
form (i.e. without prediction) is situated at the low end (adaptation based on conditional expressions). Taking into account the ability to predict future deployments based on the user's usage profile, our solution is located at the high end of the spectrum (characterized by learning-based decision making), which represents the richest types of self-adaptability.

The most important work related to task-driven computing is presented by David Garlan and his team, who are working on the Aura project [20]. In [21] Garlan et al. describe task-aware adaptation in a system that maintains an explicit representation of user intent to encode user goals. They define a task as a set of services in combination with a set of quality attribute preferences. These preferences are used to consider quality trade-offs when deploying applications. In the view presented by Garlan et al., a task is a high-level concept only that is used to drive the deployment of applications and to discover services needed by the user. The presented task concept, however, does not have hierarchical task compositions and there are no relations between tasks and their composing subtasks. As a consequence, their deployment decisions are not as fine-grained as the deployment decisions we presented in this article. Our approach to self-managed component deployment results from an ambient intelligence viewing angle on component deployment, in which resource allocation needs to be performed economically.

We can conclude that our approach is new in the sense that we propose a type of self-management that has not been tackled before, but that is complementary to most existing self-managing mechanisms.

7. Future work

Our life cycle management approach currently supports only a single device, although in a distributed environment the resources of multiple connected devices could be considered for deployment. Distributed deployment approaches such as presented in [22] are complementary to the work described in this article. Our task-driven life cycle management approach defines which components need to be loaded or preloaded, while distributed deployment approaches define where to deploy these components. The reconciliation of these two complementary mechanisms leads to a solution that truly realizes self-organizing component deployment in Ambient Intelligence environments.

Both mechanisms need an accurate view on the distributed environment with respect to resource availability in order to deploy the user's application components well. Context-awareness is therefore essential, and solutions such as presented in [23,24] are needed to detect, spread and process context information containing the needed resource information. The dynamic nature of Ambient Intelligence environments could definitely complicate the realization of stable distributed deployment configurations. The possibility of fluctuations in the availability of resources may prove to be a prime obstacle in the integration of the presented task-driven deployment mechanism in a distributed solution. Coping with these implications will constitute the most important part of our future work.

8. Conclusions

The approach presented in this article tackles two mainstream problems that surface in the research domain of AmI. First, seamless and proactive computing support is expected
8. Conclusions

The approach presented in this article tackles two mainstream problems that surface in the research domain of AmI. First, seamless and proactive computing support is expected from AmI applications in order to minimize user distraction while maximizing the offered services. With the solution presented in this article, seamless computing support is realized by design-time task models whose corresponding dialog models drive the application deployment to match the user's needs. Second, AmI is expected to be realized on mobile computing devices that accommodate a limited amount of computing resources. This restriction forces application life cycle managers to optimize the availability of functionality while minimizing the needless consumption of resources. Our run-time dialog models harbor task information that accurately supports a life cycle management mechanism that removes unnecessary components. Furthermore, this dialog model is enhanced with a Markov model that predicts near-future deployments, so that free memory is used optimally to make state transitions more seamless. Experiments with the DynaMo-AID framework have demonstrated the efficiency of the life cycle management and the advantages of optimizing the use of memory, which reduces the latencies produced by just-in-time component loading.

Acknowledgments

The authors would like to thank Joëlle Coutaz for her contributions to this work. Part of the research at EDM was funded by EFRO (European Fund for Regional Development), the Flemish Government and the Flemish Interdisciplinary Institute for Broadband Technology (IBBT). The CoDAMoS (Context-Driven Adaptation of Mobile Services) project IWT 030320 is directly funded by the IWT (Flemish subsidy organization).

References

[1] P. Rigole, Y. Berbers, T. Holvoet, Component-based adaptive tasks guided by resource contracts, in: Proceedings of the First Workshop on Component-Oriented Approaches to Context-Aware Computing (in conjunction with ECOOP'04), Oslo, Norway, 2004.
[2] P. Rigole, Y. Berbers, T. Holvoet, Mobile adaptive tasks guided by resource contracts, in: Proceedings of the 2nd Workshop on Middleware for Pervasive and Ad-Hoc Computing, Toronto, Ontario, Canada, 2004, pp. 117–120.
[3] F. Bachmann, L. Bass, C. Buhman, S. Comella-Dorda, F. Long, J. Robert, R. Seacord, K. Wallnau, Volume II: Technical Concepts of Component-Based Software Engineering, 2nd ed., Tech. Rep. CMU/SEI-2000-TR-008, Carnegie Mellon University, 2000.
[4] C. Szyperski, Component Software: Beyond Object-Oriented Programming, Addison-Wesley, New York, 1998.
[5] Object Management Group, Catalog of OMG CORBA/IIOP specifications. http://www.omg.org/technology/documents/corba_spec_catalog.htm.
[6] E. Bruneton, T. Coupaye, J. Stefani, The Fractal component model. http://fractal.objectweb.org/specification.
[7] Y. Vandewoude, P. Rigole, D. Urting, Y. Berbers, Draco: An adaptive runtime environment for components, Tech. Rep. CW372, Department of Computer Science, K.U. Leuven, April 2003.
[8] A. Beugnard, J.-M. Jézéquel, N. Plouzeau, D. Watkins, Making components contract aware, IEEE Computer 13 (7).
[9] A. Puerta, A model-based interface development environment, IEEE Software (1997) 40–47.
[10] A. Dix, J. Finlay, G. Abowd, R. Beale, Human–Computer Interaction, 3rd ed., Prentice Hall, 2004.
[11] F. Paternò, Model-Based Design and Evaluation of Interactive Applications, Springer-Verlag, ISBN 1-85233-155-0, 1999.
[12] F. Paternò, C. Mancini, S. Meniconi, ConcurTaskTrees: A diagrammatic notation for specifying task models, in: IFIP TC13 Human–Computer Interaction Conference, INTERACT'97, 1997, pp. 362–369.
[13] ISO, Information Processing Systems, Open Systems Interconnection, LOTOS — A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour, IS 8807, Geneva.
[14] A. Wasserman, Extending state transition diagrams for the specification of human–computer interaction, IEEE Transactions on Software Engineering 11 (1985) 699–713.
[15] K. Luyten, T. Clerckx, K. Coninx, J. Vanderdonckt, Derivation of a dialog model from a task model by activity chain extraction, in: J.A. Jorge, N.J. Nunes, J.F. e Cunha (Eds.), Interactive Systems: Design, Specification, and Verification, in: Lecture Notes in Computer Science, vol. 2844, Springer, 2003, pp. 191–205.
[16] S.P. Meyn, R.L. Tweedie, Markov Chains and Stochastic Stability, Springer, 1996. http://probability.ca/MT/.
[17] T. Clerckx, K. Luyten, K. Coninx, DynaMo-AID: A design process and a runtime architecture for dynamic model-based user interface development, in: R. Bastide, P.A. Palanque, J. Roth (Eds.), EHCI/DSV-IS 2004, in: Lecture Notes in Computer Science, vol. 3425, Springer, 2004, pp. 77–95.
[18] J.S. Bradbury, J.R. Cordy, J. Dingel, M. Wermelinger, A survey of self-management in dynamic software architecture specifications, in: Proceedings of the 1st ACM SIGSOFT Workshop on Self-Managed Systems, WOSS 2004, 2004, pp. 28–33.
[19] P. Oreizy, M.M. Gorlick, R.N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D.S. Rosenblum, A.L. Wolf, An architecture-based approach to self-adaptive software, IEEE Intelligent Systems 14 (3) (1999) 54–62.
[20] J. Sousa, D. Garlan, The Aura software architecture: An infrastructure for ubiquitous computing, Tech. Rep. CMU-CS-03-183, CMU, Pittsburgh, PA, 2003.
[21] D. Garlan, V. Poladian, B. Schmerl, J.P. Sousa, Task-based self-adaptation, in: Proceedings of the 1st ACM SIGSOFT Workshop on Self-Managed Systems, ACM Press, Newport Beach, California, 2004, pp. 54–57.
[22] P. Rigole, Y. Berbers, Resource-driven collaborative component deployment in mobile environments, in: P. Dini, D. Ayed, C. Dini, Y. Berbers (Eds.), Proceedings of the International Conference on Autonomic and Autonomous Systems, IEEE Computer Society Press, Santa Clara, USA, 2006.
[23] D. Preuveneers, Y. Berbers, Adaptive context management using a component-based approach, in: L. Kutvonen, N. Alonistioti (Eds.), Distributed Applications and Interoperable Systems, 5th IFIP WG 6.1 International Conference, DAIS 2005, in: Lecture Notes in Computer Science (LNCS), vol. 3543, Springer, Athens, Greece, 2005, pp. 14–26.
[24] D. Preuveneers, J.V. den Bergh, D. Wagelaar, A. Georges, P. Rigole, T. Clerckx, Y. Berbers, K. Coninx, V. Jonckers, K.D. Bosschere, Towards an extensible context ontology for ambient intelligence, in: P. Markopoulos, B. Eggen, E. Aarts, J.L. Crowley (Eds.), Second European Symposium on Ambient Intelligence, in: LNCS, vol. 3295, Springer, Eindhoven, The Netherlands, 2004, pp. 148–159.

Dr. Ir. Peter Rigole is a research assistant with the Department of Computer Science at the Katholieke Universiteit Leuven, Belgium. He is currently working on the Music project as a member of the DistriNet research group. His research interests include distributed computing, resource-awareness, component-based development and automated deployment of component-based applications. He received his Ph.D. degree from the Katholieke Universiteit Leuven in 2006. Contact him at
[email protected].
Tim Clerckx is a Ph.D. student at the Expertise Centre for Digital Media, a research institute of Hasselt University in Belgium. His research areas are model-based development of user interfaces, context-aware and adaptive user interfaces, and human–computer interaction for ambient intelligence environments. He has published several papers on these topics. He received his M.S. degree in Computer Science from Hasselt University in 2003.
Prof. Dr. Ir. Yolande Berbers received her Ph.D. degree from the Katholieke Universiteit Leuven in 1987. She is currently a full professor at the Department of Computer Science at the Katholieke Universiteit Leuven and a member of the DistriNet research group. Her research interests include software engineering for embedded and real-time systems, model-driven architecture, context-aware computing, distributed computing, and multi-agent systems with emergent behavior. She runs several projects in cooperation with other universities and/or industry. She teaches several courses on programming real-time and embedded systems, and on computer architecture. She can be reached at
[email protected].

Prof. Dr. Karin Coninx has a Ph.D. (1997) in computer science, in the domain of Human–Computer Interaction in virtual environments. She is a member of the management committee of the Expertise Centre for Digital Media (EDM), a research institute at Hasselt University, Belgium. Karin Coninx is division leader of the Human–Computer Interaction (HCI) group of EDM (25–30 researchers) and is responsible for 10 Ph.D. students. She has managed the EDM-HCI research in several basic research projects (EU projects as well as national/Flemish projects), and coordinated the consortium for some Flemish/Euregional projects. Karin Coninx is a full-time professor at Hasselt University, responsible for courses in the area of programming, software engineering, and basic and advanced HCI topics. She has more than 90 scientific publications, most of them in the area of HCI. Her research interests include diverse HCI topics, such as interaction in multimodal/virtual environments, interactive workspaces, user-centred development approaches, model-based user interface development, mobile systems, and software architectures for ubiquitous systems including distributed and migratable user interfaces. Karin Coninx is a member of ACM SIGCHI.