The Journal of Systems and Software 86 (2013) 449–467
Design of component-based real-time applications
Patricia López Martínez∗, Laura Barros, José M. Drake
Universidad de Cantabria, 39005 Santander, Spain
Article history: Received 26 July 2011; Received in revised form 13 September 2012; Accepted 16 September 2012; Available online 5 October 2012

Keywords: Real-time systems; Software component; Component-based applications; Software reusability; Reactive model; Schedulability
Abstract

This paper presents the key aspects of a model-based methodology proposed for the design of component-based applications with hard real-time requirements. The methodology relies on RT-CCM (Real-time Container Component Model), a component technology aimed at making the timing behaviour of applications predictable, inspired by the Lightweight CCM specification of the OMG. Some new mechanisms have been introduced in the underlying framework that make it possible to schedule the execution of code and the transmission of messages of an application while guaranteeing that the application will meet its timing requirements when executed. The added mechanisms also enable the application designer to configure this scheduling without interfering with the opacity typically required in component management. Moreover, the methodology includes a process for generating the real-time model of a component-based application as a composition of the reusable real-time models of the components that form it. From the analysis of this model the application designer obtains the configuration values that must be applied to the component instances and the elements of the framework in order to make the application fulfil its timing requirements.

© 2012 Elsevier Inc. All rights reserved.
1. Introduction
Modern real-time applications support really complex functionalities and execute on distributed heterogeneous platforms sharing infrastructure resources. These applications are built by integrating subsystems or modules with different functions, such as modules for analogue and/or digital data acquisition, loggers, user interfaces, databases, samplers and servo controllers. Reusing these modules in different applications or incorporating them as legacy code would bring about relevant advantages such as reducing development time and cost and increasing reliability. The software development paradigm that best addresses this idea of module reusability is component-based development (Szyperski, 1998).

When building applications with timing requirements, an additional source of complexity is added to the strictly functional one, since both the applications and the platform must be configured to guarantee the fulfilment of the timing requirements imposed on the applications. However, despite this complexity, it is our belief that applying a component-based approach to the development of real-time systems can improve the quality of their design and development processes, and bring to the real-time domain the typical advantages of component-based development: higher productivity, shorter time-to-market, more efficient maintainability, etc. But precisely due to the complexity introduced by the real-time design and configuration process, in contrast to other application fields where component-based development has been successfully introduced, its usage in the design and development of real-time systems has evolved much more slowly (Crnkovic, 2004; Panunzio and Vardanega, 2009). Applying a component-based approach to the development of real-time applications requires making their two main characteristics compatible, so that an application:

• Can be built as an assembly of reusable software components that have been developed by third parties, and which are managed in an opaque way when they are included in an application. The same component can take part in different applications and can be installed in different platforms, but its code cannot be modified or accessed by the application designer.
• Can guarantee a predictable timing behaviour. The application designer must have means of controlling and configuring the execution of the internal activities of the application so that the timing requirements that have been specified in the application specification are fulfilled.
∗ Corresponding author. Tel.: +34 942201477; fax: +34 942201402. E-mail addresses: [email protected] (P. López Martínez), [email protected] (L. Barros), [email protected] (J.M. Drake).
Combining opacity and predictability is a difficult challenge. In the case of traditional real-time systems design (Klein et al., 1993; Liu, 2000), predicting the timing behaviour of a system relies on the formulation of the real-time model of the system. This model, which is complementary to the functional or logical one, contains
0164-1212/$ – see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.jss.2012.09.036
all the qualitative and quantitative information about the internal behaviour of a system that is required to predict or evaluate its timing behaviour. It constitutes the centre of the real-time design and analysis process, where it is typically processed with two objectives:

• Given a certain configuration and workload scenario, certifying the schedulability of the system under worst-case execution times.
• Obtaining a suitable configuration for the parameters (priorities, ceilings, etc.) that influence the temporal behaviour of the system, guaranteeing that the timing requirements are met (i.e. that the application is schedulable). The obtained configuration parameters are manually assigned to the internal mechanisms used to implement the system, such as threads and mutexes.

When trying to apply this process to component-based applications, two main problems arise:

• It is not possible to directly configure the scheduling parameters of the internal elements of the system (threads, mutexes, etc.) according to the results obtained from the real-time analysis, due to the opaque management of the components' code. A real-time component technology is required, which provides means of configuring these aspects externally, without requiring access to the code. Of course, the technology must assure that the application will meet its timing requirements once launched; therefore, it must rely on predictable internal mechanisms, execution platforms and communication protocols.
• Formulating the real-time model of a system requires complete knowledge about its internal code, which is not possible in a component-based application, also due to the opacity requirement. In the case of a component-based application, the real-time model must be generated as a composition of the temporal behaviour models provided by the components that form the application, just as the application's code is generated as a composition of the code of its constituent components.
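The first objective above, certifying schedulability under a given configuration, is classically addressed with fixed-priority response-time analysis (Klein et al., 1993). The following sketch is not part of the paper's toolset; it shows the standard iterative test for independent periodic tasks, with illustrative task parameters and deadline-monotonic priorities.

```python
from math import ceil

def response_times(tasks):
    """Worst-case response times for independent fixed-priority
    periodic tasks, via the standard iterative equation
    R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j.

    tasks: list of (C, T, D) tuples (WCET, period, deadline),
    ordered from highest to lowest priority.
    Returns one response time per task, or None where the iteration
    exceeds the task's deadline (task set not schedulable).
    """
    results = []
    for i, (c_i, _, d_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference suffered from all higher-priority tasks.
            r_next = c_i + sum(ceil(r / t_j) * c_j
                               for c_j, t_j, _ in tasks[:i])
            if r_next == r or r_next > d_i:
                break
            r = r_next
        results.append(r_next if r_next <= d_i else None)
    return results

# Illustrative task set (C, T, D) with deadline-monotonic priorities.
print(response_times([(1, 4, 4), (2, 8, 8), (3, 16, 16)]))  # [1, 3, 7]
```

The same iteration, run over candidate priority assignments, is the basis of the configuration-search objective mentioned in the second bullet.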
A timing modelling methodology with composability and reusability properties is required to carry out this model composition process. This paper presents a real-time design methodology for component-based applications that addresses the two issues above:

• It relies on RT-CCM (Real-time Container Component Model), a component technology that results from applying some extensions to the component model and the component implementation framework defined in LwCCM (Lightweight CORBA Component Model) (Object Management Group, 2006a) in order to guarantee a predictable and configurable timing behaviour for the applications developed. Although this will be explained further in Section 3.2.2, it is important to clarify here that, although RT-CCM is based on the LwCCM component implementation framework, it does not rely on CORBA as communication middleware, i.e. it follows only the specification of the interfaces and the general structure of the LwCCM component programming model, but uses other strategies to implement communication between components.
• RT-CCM is used together with CBS-MAST (López et al., 2006), an extension of the MAST modelling methodology (González Harbour et al., 2001; MAST, 2011) that provides it with the semantics of component-based development. It enables a component developer to formulate the real-time model of a software component in a reusable way, so that an application designer can automatically obtain the real-time model of an application by assembling the models of the components that form it.
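As an illustration of this composition idea (a simplified sketch, not the actual CBS-MAST notation; all instance names and execution-time figures are invented), each component ships a model of its operations' execution times, and the end-to-end model of a transaction is assembled by resolving each step against the model of the instance that executes it:

```python
class ComponentRTModel:
    """Simplified stand-in for a reusable component real-time model:
    it records only a WCET per operation (CBS-MAST models are far richer,
    covering activities, shared resources, scheduling parameters, etc.)."""
    def __init__(self, name, wcet_by_operation):
        self.name = name
        self.wcet = wcet_by_operation

def transaction_wcet(models, steps):
    """Compose the end-to-end worst-case execution time of a
    transaction whose steps are (instance, operation) pairs."""
    return sum(models[instance].wcet[op] for instance, op in steps)

# Invented instances loosely mirroring the paper's SCADA example.
models = {
    "engine": ComponentRTModel("ScadaEngine", {"sample": 0.50}),
    "adq":    ComponentRTModel("AnalogAdq",   {"aiReadCode": 0.25}),
    "logger": ComponentRTModel("Logger",      {"log": 0.25}),
}
sampling_trans = [("engine", "sample"), ("adq", "aiReadCode")]
print(transaction_wcet(models, sampling_trans))  # 0.75
```

The key property is that each `ComponentRTModel` is written once by the component developer and reused unchanged in every application where the component is instantiated.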
To make a success of the component-based paradigm in the real-time systems industry, the method followed for the real-time design should be consistent with a standard traditional component-based development process. In this work, both the component and the application development processes (Crnkovic et al., 2006) follow the ones proposed in the D&C (Deployment and Configuration of Component-based Distributed Applications) specification of the OMG (Object Management Group, 2006b). D&C defines the formats that must be used to formulate the metadata that describe the components, and which allow the agents involved in the development of an application to manage them in an opaque way. In fact, our methodology relies on RT-D&C (López Martínez et al., 2010) to define these metadata. RT-D&C constitutes an extension of D&C that includes support for adding, to the component descriptors, the real-time metadata that are required to predict, analyse and configure the temporal behaviour of a component-based application during its design and deployment process.

Therefore, we claim that applying a component-based strategy to the development of systems with real-time requirements requires a combination or integration of different technologies: a real-time component technology, a real-time modelling methodology oriented to composability and reusability, and a real-time specification for the deployment and configuration of component-based applications. Thus, as shown in Fig. 1, the methodology on which this paper relies originates from the combination of the three previously mentioned contributions: RT-CCM, CBS-MAST and RT-D&C.

This paper completes and extends previously published work on RT-CCM (López Martínez et al., 2008). It describes in more detail the behaviour of the mechanisms introduced in RT-CCM to manage scheduling, including the new services (ThreadingService, SynchronizationService, etc.) added to the reference framework and the way they have been defined to guarantee easy adaptation of the technology to different execution platforms. With respect to that work, this paper also places special emphasis on justifying how these mechanisms solve the typical problems that appear when trying to deal with real-time requirements in component-based applications. Besides, while López et al. (2006) expose the general behaviour and features of CBS-MAST, this paper explains the relation between the mechanisms used in RT-CCM and the elements used to formulate the CBS-MAST reusable models associated with the components, and how the configuration values to apply to the individual component instances and the underlying services can be obtained from the analysis of the model resulting from composing the individual models of the components. The whole process of designing an RT-CCM application is also explained in more depth than in previous works, mainly through the detailed explanation of the case study. RT-D&C aspects will be mentioned only when necessary to give an integrated view of the proposed strategy, but without deep explanations about them, since they are not the focus of the paper and they can be consulted in López Martínez et al. (2010).

The paper is structured as follows:

• Section 3 explains the reference model of RT-CCM, with special emphasis on the mechanisms introduced in the framework to control the scheduling and timing behaviour of the applications.
• The key concepts of CBS-MAST are introduced in Section 4. They provide the basis for being able to model the timing behaviour of a software component in a reusable manner, which enables the generation of the final real-time model of an assembly of components by composition of the models of the elements (both components and elements of the platform) that form it.
• Section 5 describes the design process itself, in which all the elements collaborate in order to find a suitable scheduling
Fig. 1. Elements involved in the proposed real-time component-based methodology. (The figure depicts the RT-CCM component technology, comprising the CCM component model extended with reactive behaviour aspects, the LwCCM framework (CIF) extended with mechanisms to control scheduling and concurrency, and connectors with predictable communication mechanisms; combined with CBS-MAST, which provides composable real-time modelling and the real-time modelling and analysis environment, and RT-D&C, the real-time extension of the OMG's D&C specification covering the development process and specification methodology; together they support component-based real-time applications.)
configuration for an application designed as an assembly of component instances.
• Finally, a case study is presented in Section 6 as a proof of concept of the proposal.

The paper ends with the main conclusions about the proposed approach, and the lines for future work that arise from it.

2. Related work

The component technology and design process explained in this paper arose as a result of searching for a component technology satisfying the following requirements:

• Compatibility with hard real-time requirements. This implies that the components must provide services with predictable timing behaviour, i.e. with a real-time model describing them. Moreover, the technology must offer mechanisms for controlling aspects related to temporal predictability, such as concurrency and synchronization.
• Capacity to configure the scheduling of an application in an opaque way, without requiring access to the internal code of the components.
• Suitability for distributed applications. This implies using underlying communication mechanisms with predictable behaviour.
• Provision of components of high granularity, i.e. components that can implement different services, and different responses to events.

None of the conventional component technologies widely applied in several application fields, such as EJB (Sun Microsystems, 2001) and .NET (Microsoft, 2006), satisfies these requirements. They cannot provide temporal predictability, mainly due to the platforms they are targeted at. There are few proposals that attempt to incorporate real-time aspects into these technologies. The component model presented in Wang et al. (2005) uses EJB on top of an RTSJ (Real-time Specification for Java) platform, supporting a version of RMI (Remote Method Invocation) modified to provide real-time characteristics, but the paper does not provide many details about the modifications or restrictions that have been made with that purpose.
Without relying on EJB, there are several proposals (Hu et al., 2007; Plšek et al., 2012) aimed at building real-time component-based applications on top of RTSJ-compliant platforms. One of the component models most widely used nowadays in distributed applications is the CORBA Component Model, CCM (OMG,
2006a). In contrast to the previous models, it has been defined by means of a platform-independent specification, so it is multilanguage and multiplatform. However, relying on CORBA makes it too heavy and complex for the limited resources typical of real-time embedded systems. For this reason, the OMG defined a lighter version of the model, called Lightweight CCM, more suitable for this kind of system. There are several implementations of this specification, but the most interesting and complete is CIAO (Subramonian et al., 2007; CIAO, 2011). It provides real-time behaviour by relying on the real-time specification of CORBA, RT-CORBA (Object Management Group, 2005a). A complete model-driven development environment, called COSMIC (Gokhale et al., 2008), has been defined to support the entire development process of CIAO applications. For managing deployment and configuration, COSMIC provides the DAnCE (Deng et al., 2005) runtime infrastructure, which constitutes an implementation of the D&C specification. COSMIC synthesizes a flattened deployment plan for use by DAnCE, which, as in our case, includes metadata to configure the real-time characteristics of the applications. In this case, the metadata are oriented to configuring underlying RT-CORBA features, i.e. threadpools, scheduling policies, ORB policies, etc., so that the QoS or real-time requirements of the application are fulfilled (Kavimandan and Gokhale, 2008). In our proposal, since RT-CCM does not rely on CORBA, the metadata are oriented to directly configuring the threads and synchronization mechanisms required by the components. As a consequence, in our approach, the configuration metadata are extracted from the composition and analysis of the metadata provided by the individual components (formulated in the D&C descriptor of each component) regarding their internal behaviour, whereas in CIAO they are obtained from the evaluation of the system as a whole (application plus underlying middleware).
Another proposal that also relies on RT-CORBA is the implementation of the UM-RTCOM predictable component model presented in Díaz et al. (2008). Similarly to our approach, in order to achieve predictability of the assembled applications, each UM-RTCOM component provides a behaviour model, which is composed with the models of the rest of the components that form an application in order to obtain the final model to which schedulability analysis can be applied. In this case, the real-time models are formulated according to a real-time extension of SDL (Alvarez et al., 2003). UM-RTCOM is also defined in a platform-independent manner, but as a proof of concept, an implementation has been developed on top of RT-CORBA, including tools that extract the configuration that must be applied to the RT-CORBA elements in order to achieve the desired behaviour.
The approach followed in UM-RTCOM shares with the one proposed in this paper the same goals as the PACC (Predictable Assembly for Certifiable Components) project (PACC, 2011), whose aim is to predict the behaviour of an assembly of components during the design phase based on information provided by the components involved. They have built an MDE integrated environment (Moreno and Merson, 2008), which, based on model transformations, enables the application designer to generate both the code and the analysis models of component-based applications. As with our approach, the schedulability analysis is performed by means of the tools available in MAST. In their case, the underlying technology is PIN (Hissam et al., 2005), which, in order to certify schedulability, introduces several restrictions, such as being suitable only for monoprocessor environments. Our approach can be applied to a wider range of platforms and applications. The main difference between the PIN technology and our approach (and also the UM-RTCOM one) relates to the granularity of components. In our case, a component can implement several different services, grouped in ports, and even different responses to events. However, there are several component technologies, especially focused on control systems, which are based on lower-granularity components that follow the "run to completion" semantics. Each component implements a unique passive function, which, when triggered, reads a set of input values, applies the function to them, and generates a set of output values. The design of this kind of system consists of identifying the sequence of components that are involved in each end-to-end response of the system, and mapping them to the threads of the underlying platform. Each thread receives a priority, and in this way, schedulability analysis can be easily applied.
With this model, the timing behaviour of each component is defined only by its worst-case execution time, and the process of composition and configuration is very straightforward. Apart from PIN, examples of works of this type are COMDES II (Ke et al., 2008) and its evolution COMDES III (Angelov et al., 2008), PECOS (Nierstrasz et al., 2002), and also the SaveCCT technology (Åkerholm et al., 2007). SaveCCT emphasizes the role of the components as design entities over implementation entities. The applications are designed as a set of interconnected design-level components with the previous semantics. The technology provides an integrated design environment, Save-IDE (Sentilles et al., 2008a), which enables the formulation of the behaviour of each individual component by means of timed automata, applying the composition process, and analysing the timed-automata model of the final system using the UPPAAL-PORT (Håkansson et al., 2008) tool. Later, these design-level components are transformed directly, by means of code generation and task mapping tools, into executable entities that are deployed in the corresponding execution environment. This contrasts with our approach, in which the components, both their codes and models, are developed before they are used in any application. An extension of SaveCCT, called ProCom (Sentilles et al., 2008b), has been defined to provide components with higher granularity, based on a hierarchical strategy. Most of the previously referenced contributions are academic, although some of them have either arisen from or been based on industrial proposals. For example, the main influence of the SaveCCT technology was the Rubus-CM model (Hanninen et al., 2008), developed by Arcticus Systems and used by Volvo Construction Equipment. Another industrial technology is Koala (van Ommering, 2002), which represents the starting point of the ROBOCOP (Bondarev et al., 2006) component model.
The Koala component model, initially developed by Philips for the consumer electronics industry, was very simple. Components were implemented as C functions, which were later mapped to the tasks of the underlying execution platform. Initially it did not deal with real-time aspects. The ROBOCOP project addressed the introduction of
real-time features and performance prediction in Koala, following a similar approach to ours: associating with each individual component a composable model which can be used to certify the performance of the applications in which the component is used.

The ACM (ARINC-653 component model) (Dubey et al., 2011) is a software component framework that combines CCM with the ARINC-653 platform services, so it provides static memory allocation for determinism, and spatial and temporal isolation between components. It also provides a modelling environment, which relies on a well-defined metamodel for modelling components, services, etc. It relies on a set of compositional semantics that guarantees the analysability of the assembly-based applications. Similarly to our activation ports (which will be explained in Section 3), an ACM component can define a set of component triggers that are executed independently of the services invoked on the component facets, but in our case we use the activation ports to implement part of the functionality of the components and not only recording or supervision activities. Another difference between ACM and our approach is that they apply deadlines to each service provided by a component, while in our approach we conceive only end-to-end deadlines. Also in the domain of high-integrity systems, another interesting approach is the component-based technology presented in Panunzio and Vardanega (2010) and Panunzio (2011). Based on a software reference architecture that guarantees separation of concerns, property preservation and correctness by construction to the development process, an integrated development environment has been defined, which also includes support for applying schedulability analysis using MAST.
3. RT-CCM technology

As explained in Section 1, an appropriate component technology is essential for applying a component-based approach to the development of real-time applications. The key aspect when designing a real-time application is the concurrency design, so it is also the main aspect to take into account when defining a real-time component technology. Traditionally, real-time applications have been conceived, designed and written as a set of concurrent responses to events, which must fulfil the timing requirements that are imposed on them. The activities that constitute the responses are scheduled using schedulable entities such as threads and communication channels, which must be configured with appropriate values for their scheduling parameters in order to guarantee the schedulability of the application. Traditional real-time design processes (Gomaa, 1989; Klein et al., 1993), and also object-oriented ones (Gomaa, 2000), start by identifying the responses to events executed in the system. Then, the activities that form these responses are mapped to the schedulable entities according to some criteria, and based on that mapping the configuration of the schedulable entities is obtained. Thus, the schedulable entity is the centre of the design and code generation process. However, this strategy is not suitable in a traditional component-based scenario due to the opacity required in component management. The threads, and hence the activities that are executed on them, belong to the internal code of the components, so they are not accessible to the application designer, who cannot manage or configure them to guarantee the schedulability of the application.
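The mapping step of this traditional process can be sketched as follows (a toy example with invented responses and deadlines; real processes use richer criteria than plain deadline-monotonic ordering):

```python
def assign_priorities(responses, max_priority=30):
    """Map each event response to one thread and give that thread a
    deadline-monotonic priority: the shorter the deadline, the higher
    the priority. responses: map of response name -> deadline."""
    ordered = sorted(responses.items(), key=lambda kv: kv[1])
    return {name: max_priority - i for i, (name, _) in enumerate(ordered)}

# Invented responses with their end-to-end deadlines (ms).
deadlines = {"SamplingTrans": 5.0, "LoggingTrans": 50.0, "AlarmTrans": 2.0}
print(assign_priorities(deadlines))
# {'AlarmTrans': 30, 'SamplingTrans': 29, 'LoggingTrans': 28}
```

In a component-based setting this step breaks down precisely because the threads to be configured live inside opaque component code, which is the problem the rest of this section addresses.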
To make the real-time design principles, which require global knowledge and control of the system concurrency mechanisms (threads, communication channels and synchronization mechanisms), compatible with the component-based paradigm, which imposes an opaque management of the components' business code, it is necessary to establish a set of restrictions and suitable concurrency patterns that enable automatic identification, control and configuration of the concurrency mechanisms used by the
Fig. 2. Elements of the component interface (or specification) of an RT-CCM component. (The figure shows a generic component interface with a facet port typed by a FacetInterface, a receptacle port typed by a ReceptacleInterface with multiplicity 1, 0..n or 1..n, business configuration properties, and declared event responses (transactions).)
components, without requiring access to their business code. The solution adopted in RT-CCM relies on the container programming model and consists of transferring to the container and the connectors the responsibility of creating and managing the threads, communication channels and synchronization mechanisms that are required for the concurrency design of a component (this will be explained in detail in Section 3.2). There are some component models in which it is possible to apply a concurrency design process like the one mentioned before, since they rely on components of very low granularity, such as the "run to completion" components mentioned in Section 2. In that case, each component constitutes one single function, so the application is designed by mapping each component to the thread where it is scheduled. In our case we focus on components with higher granularity, i.e. components that implement several services, which may be part of different responses to events of the application, and which can even trigger responses to events in the application by themselves. Therefore, it is not possible to define a single mapping between a component and a thread.

The rest of this section explains the main features of the RT-CCM component technology: the concept of RT-CCM component and how the technology deals with real-time related aspects, i.e. concurrency, synchronization, scheduling and communications.

3.1. RT-CCM component

An RT-CCM component is a reusable software module of arbitrary granularity or complexity (it can provide a single service, or several different ones), whose functionality is described by means of a set of required and provided ports. Since RT-CCM takes the LwCCM specification as a starting point, the kinds of ports used to specify the interface of a component (i.e. its external view or functionality) are facets and receptacles, as can be seen in Fig. 2.
The figure shows a generic component interface with all the elements that could be defined in it and how they must be formulated. A component interface can also define a set of business configuration properties, which will receive specific values for each instance of the component included in an application.

The underlying strategy applied in this paper for the real-time design process follows the reactive or transactional approach (Kopetz et al., 1989; Klein et al., 1993) typically used in real-time systems. Accordingly, real-time applications are conceived and specified as the set of responses to external (coming from the environment) or timed events that they implement. Each response is modelled as a Transaction (González Harbour et al., 2001; Object Management Group, 2005b) or End-to-End Flow (Object Management Group, 2009). A transaction is a sequence of activities or steps related by control flow, which in the case of a component-based application usually correspond to services executed in different component instances. So, a component technology compatible with this reactive approach requires a mechanism to trigger responses to external or timed events in an application. In RT-CCM, this mechanism is implemented by the components themselves, specifically by what are called active components. A characteristic of an RT-CCM active component, which differentiates it from other approaches, is that, as well as the functionality offered through its facets, it can also implement responses to
Fig. 3. ScadaEngine component interface: example of an RT-CCM active component. (The figure shows the controlPort facet typed by the ScadaControl interface (supervise(), cancel(), getLastLoggedData(), getBufferedData(), etc.), the adqPort receptacle with multiplicity 1..n typed by the AnalogIO interface (aiReadCode(), aiWriteCode(), etc.), the logPort receptacle with multiplicity 1 typed by the Logging interface (log(), etc.), the business configuration properties samplingPeriod and loggingPeriod of type Float, and the declared transactions SamplingTrans (period = samplingPeriod) and LoggingTrans (period = loggingPeriod).)
external or timed events, which must also be identified and declared at the component interface level. These responses are not associated with any of the services it provides: they are triggered internally in the component by the corresponding events, independently of the invocations received in the component facets. Each response may involve the invocation of services on the components that are connected to the receptacles of the active component. As said in Section 2, the proposal in Dubey et al. (2011) incorporates a similar approach, but in that case, the triggered activities are used only for recording and monitoring purposes, instead of implementing parts of the functionality of the component.

Fig. 3 shows an example of a component interface declaration. It is the interface of an active component called ScadaEngine, which is part of the case study that will be introduced in Section 6, and it is also used throughout the paper to clarify some of the RT-CCM features. It implements a SCADA (supervisory control and data acquisition) functionality: it supervises a set of signals, whose values are acquired through the adqPort receptacle (aiReadCode operation), and calculates statistics about them. The resulting values are stored in a storage mechanism accessed through the logPort receptacle (log operation). Therefore, the ScadaEngine component implements its functionality in a reusable manner, since it is independent of the acquiring and storing mechanisms. Two responses to periodic events (two transactions) are implemented by this type of component: SamplingTrans, which represents the response to the event that triggers the sampling activity, and LoggingTrans, which represents the response to the event that triggers the storing activity. The sampling and logging periods, samplingPeriod and loggingPeriod, are defined as business configuration properties of the component and they correspond to the periods of activation of SamplingTrans and LoggingTrans, respectively.
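To make the active-component idea concrete, the following sketch (plain Python with invented stubs, not actual RT-CCM business code, which would be container-managed and platform-neutral) shows a ScadaEngine-like class whose transaction bodies run independently of facet invocations and reach other components only through its receptacles:

```python
class ScadaEngineSketch:
    """Illustrative active component: controlPort-style facet operations
    plus internally triggered periodic responses (transactions)."""

    def __init__(self, adq_port, log_port, sampling_period, logging_period):
        self.adq_port = adq_port                # receptacle: AnalogIO-like
        self.log_port = log_port                # receptacle: Logging-like
        self.sampling_period = sampling_period  # business property
        self.logging_period = logging_period    # business property
        self._samples = []

    # Facet operation (controlPort).
    def getBufferedData(self):
        return list(self._samples)

    # Body of the SamplingTrans response: in the real technology it
    # would be run periodically by a container-provided thread.
    def sampling_trans(self):
        self._samples.append(self.adq_port.aiReadCode())

    # Body of the LoggingTrans response.
    def logging_trans(self):
        self.log_port.log(self.getBufferedData())

# Invented stubs standing in for the components bound to the receptacles.
class StubAdq:
    def aiReadCode(self):
        return 42

class StubLogger:
    def __init__(self):
        self.entries = []
    def log(self, data):
        self.entries.append(data)

engine = ScadaEngineSketch(StubAdq(), StubLogger(), 0.1, 1.0)
engine.sampling_trans()
engine.sampling_trans()
engine.logging_trans()
print(engine.getBufferedData())  # [42, 42]
```

Note that the component code itself creates no threads and fixes no periods: the period values are plain configuration properties, which is what lets the application designer retune them per instance without touching the business code.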
In our methodology, component interfaces are formulated by means of the corresponding RT-D&C descriptors (Figs. 2 and 3 are only graphical representations). The ports and configuration properties are formulated according to the formats established by D&C, whereas the responses to events, and the mappings between business configuration properties and properties of the responses, are formulated according to the extensions defined in RT-D&C.

3.2. RT-CCM reference model

As said at the beginning of the section, the container programming model is used as the basis of the RT-CCM reference model, since it allows grouping in the container all the aspects related to concurrency and synchronization management. Thus, the application designer can configure and manage them without requiring access to the business code of the components. The RT-CCM reference model is shown in Fig. 4. Based on the container programming model, a final executable component is made up of two parts:

• Business code. Reusable code delivered in the component package, which implements the functionality offered through the
Fig. 4. Elements of the RT-CCM reference model.
component facets, and in the case of RT-CCM, the responses to the events supported. It is written by an application domain expert, independently of the application and the execution platform (both the operating system and/or the communication middleware) in which it may be used.
• Container. Run-time support for the business code, which adapts it to a specific execution platform without modification. It includes all the mechanisms required to instantiate, connect, and execute the component in the corresponding platform. The code of a container is generated by tools, based on metadata also provided in the component package (formulated according to RT-D&C).

RT-CCM extends the LwCCM container programming model with some new services and mechanisms that make the timing behaviour of the components predictable and externally configurable. All these mechanisms are introduced at the container level, so they are invisible to the component developer (the one who writes the business code). The different elements that have been included in the RT-CCM reference model are explained in the following subsections.

3.2.1. Management of threads and synchronization mechanisms

Due to its capacity to implement different services and responses to events, the business code of an RT-CCM component may be concurrently executed by multiple threads, either needed by the component itself to attend events (i.e. to implement its own responses to events), or coming from external components that invoke services in the component's facets (as part of responses triggered by other components). The activities executed as part of the different responses to events of an application may need access to shared data, or compete for shared resources. As a consequence, synchronization mechanisms may be required to guarantee mutually exclusive access to shared data for the threads that execute the different responses, or to synchronize their execution.

Fig. 5 shows an example of how different threads may be active concurrently inside a component instance. The engine instance corresponds to an Ada 2005 implementation, called AdaScadaEngine, of the ScadaEngine component interface. As shown in the left part of Fig. 5, the AdaScadaEngine implementation (hence, the engine instance) uses two threads to implement its functionality: samplingTh, used to implement the signals sampling (SamplingTrans), and loggingTh, in charge of storing the values (LoggingTrans). The other two threads, displayTh and keyboardTh, originate from invocations made by the external manager instance through the controlPort facet. The four threads require access to some data held internally by the component to store the acquired values, so they are synchronized by means of a mutex called dataMtx.
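This split of responsibilities, with threads and mutexes supplied from outside the business code, can be sketched with hypothetical Python stand-ins for the container-level services. The APIs below are illustrative assumptions, not the actual RT-CCM interfaces; only the role names (activation port, update(), dataMtx) come from the text:

```python
import threading
import time

class SynchronizationService:
    """Creates the synchronization mechanisms requested through
    synchronization ports (only the Mutex kind is sketched)."""
    def create_mutex(self) -> threading.Lock:
        return threading.Lock()

class ThreadingService:
    """Creates one thread per declared activation port and drives
    its periodic update() method; the business code never creates
    threads itself."""
    def create_periodic(self, port, period: float, stop: threading.Event):
        def body():
            while not stop.wait(period):   # returns True once stop is set
                port.update()
        t = threading.Thread(target=body, daemon=True)
        t.start()
        return t

class SamplingPort:
    """Plays the role of the samplingTh activation port: the business
    code only provides update(); the container supplies the thread
    and the mutex that guards the shared data."""
    def __init__(self, mutex: threading.Lock):
        self.dataMtx = mutex       # shared with the other responses
        self.samples = []
    def update(self):
        with self.dataMtx:         # mutually exclusive access
            self.samples.append(time.time())
```

The container wires these together at launch time: it obtains the mutex from the SynchronizationService, passes it to the port, and asks the ThreadingService for the periodic thread, leaving the business code completely passive.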
Applying real-time design and schedulability analysis to an application requires complete knowledge of its threads, their associated synchronization mechanisms, and the control flow of the activities that they execute. On the other hand, configuring the scheduling of a component-based application requires opaque management of the scheduling parameters, i.e. they must be assigned without requiring access to the business code. To make this inherent opacity of components compatible with a real-time design process, in RT-CCM the creation and management of threads and synchronization mechanisms have been extracted from the business code of the components; they are implemented by the containers. This has two main consequences: the business code of an RT-CCM component is completely passive; and besides, it can be written independently of the underlying platform, since it does not depend on how the platform manages concurrency- and synchronization-related aspects.

Threads, and in consequence the responses to events, are managed in RT-CCM through a new kind of port, called activation port, and a new service, called ThreadingService. For each thread that a component implementation needs in order to execute its internal responses, the component developer must declare an activation port as part of the metadata included in the RT-D&C descriptor of the implementation. When an instance of the component is launched in an application, its corresponding container creates a thread for each declared activation port. The container uses the ThreadingService to create the threads and start their execution. Two kinds of ports can be declared, depending on the type of response that the component needs to execute through them. They are distinguished by their interface:

• Ports implementing the PeriodicActivation interface are used to request threads that will implement internal periodic responses, e.g. attending to requests from external devices using a polling strategy, or executing internal activities triggered in the component in response to periodic timed events. The created thread will periodically execute the update() method provided by the port. The invocation period is a configurable value that will be assigned for each component instance declared in a deployment plan.
• Ports implementing the OneShotActivation interface are used to request threads that will execute a response, the one implemented through the corresponding run() method, in a one-shot manner. They can be used, for example, to attend to requests from external devices using an interrupt strategy, to implement internal activities in an asynchronous way, or to implement the main procedure of an application.

A component implementation can declare several activation ports, one for each level of concurrency that it needs to implement
[Fig. 5 shows the manager instance (with its keyboardTh and displayTh threads) invoking the engine instance through its ScadaControl controlPort facet, and the engine's samplingTh and loggingTh activation ports and dataMtx mutex port; the scheduling configuration properties samplingThPrty, loggingThPrty and dataMtxCeiling of the AdaScadaEngine implementation are also shown.]
Fig. 5. Management of threads and synchronization mechanisms in RT-CCM.
its functionality. For each of them, the component developer writes the code of the corresponding update() or run() method, as part of the business code of the component. Automatic code generation tools create the skeletons for these methods, so the component developer only has to complete them with the appropriate code. Following a similar strategy, synchronization mechanisms are managed through another new kind of port, called synchronization port, and a new service, called SynchronizationService. For each synchronization mechanism that a component needs, the component developer declares a synchronization port in its RT-D&C implementation descriptor. The container recognizes these ports when the component is launched in an application, and uses the SynchronizationService to acquire the required mechanisms. Two kinds of synchronization mechanisms are currently supported in RT-CCM:
• Mutex ports are used to request typical mutex mechanisms, which are used to guarantee mutual exclusion among threads executing critical sections of code.
• ConditionVariable ports are used to request condition variable mechanisms, which are used to synchronize the execution flow of different threads.

The mutexes and condition variables created through the SynchronizationService implement the standard interfaces (and functionality) of these elements (with lock() and unlock() methods in the case of a Mutex, and wait() and signal() for a ConditionVariable). The business code of the component is written based on calls to these methods, and therefore independently of the way in which the mechanisms are implemented on the underlying platform. As an example of usage, the right-hand part of Fig. 5 shows the ports declared by the AdaScadaEngine implementation:

• Two PeriodicActivation ports, samplingTh and loggingTh, which are used to obtain the corresponding threads that implement the SamplingTrans and LoggingTrans responses.
• One synchronization port of Mutex type, dataMtx, used to obtain the mutex that allows the component to control concurrent accesses to some internal data.

The scheduling parameters of these elements (both the concurrency and the synchronization ones) are configurable. Appropriate values for them, those that make the application schedulable, are extracted from the analysis of the real-time model of the application and assigned in an opaque way through the container. As is also shown in Fig. 5, the priority of both activation ports, samplingTh and loggingTh, and the priority ceiling of dataMtx, constitute the scheduling parameters defined by this specific implementation.

Including the Threading/SynchronizationService in the reference model simplifies the code of the containers, and therefore their corresponding generation tools. As is shown in Fig. 6, they offer a standard interface for creating threads and synchronization mechanisms, which abstracts the way in which these are created on each possible platform. Thus, adapting the RT-CCM technology to a new platform only requires implementing the code of these services, since the container generation tools can be used without modification. The interface of the SynchronizationService is self-explanatory. The ThreadingService provides an execute() method that abstracts the code to be executed (implemented by the corresponding OneShot/PeriodicActivation port) from the mechanism used for its concurrent execution. The meaning of the StimulusId argument is related to scheduling, so it is explained in Section 3.2.3.

3.2.2. Management of connections

Predictable communications are essential to implement distributed real-time applications, where timing requirements are imposed on end-to-end responses that involve distributed invocations. In this case, the real-time design process must also obtain the configuration that must be assigned to the underlying communication mechanisms in order to guarantee the system schedulability. With that purpose, a real-time model describing the timing behaviour of each distributed or remote connection established is required. With the aim of simplifying the code of the components, and at the same time guaranteeing a predictable behaviour of distributed connections between components, the interactions between RT-CCM components are implemented by means of connectors (as shown in Fig. 4). A connector is a special type of component whose internal code supports all the mechanisms required to implement an interaction (e.g. a remote invocation) between a client and a server component. Since in a real-time application this interaction must show predictable behaviour, only communication mechanisms compatible with real-time systems are allowed for connections. This is one of the main characteristics of this technology: although it is defined as an extension of LwCCM, it replaces the usage of CORBA with middleware or communication mechanisms more suitable for embedded systems with hard real-time requirements. Predictability of connections in RT-CCM is guaranteed because the code of the connectors is generated automatically by tools, based on the interface of the connected ports, the location of the components (local or distributed connections), and the communication mechanism chosen for the connection. Since the code is automatically generated by tools, its corresponding real-time model can be generated at the same time. The tool takes all the required information from the deployment plan, where the
Fig. 6. Specification of the Threading and Synchronization services in RT-CCM.
connections between components are declared and configured, and from the components' metadata (RT-D&C descriptors). The responsibilities, and hence the complexity, of a connector depend on the type of connection that it implements. When the components are installed in the same node, the connector is just a normal component with one facet and one receptacle implementing the interface of the connected ports (such as the one shown in Fig. 4). Section 3.2.3 will explain why the connector is required in this case, and why the connection is not implemented directly as a reference between the components. However, the essential role of a connector is to implement distributed connections. In this case, the connector is made up of two parts: the proxy, which is installed in the client node, and the servant, which is installed in the server node. The receptacle of the client component is connected to the proxy facet, whereas the receptacle of the servant part is connected to the facet of the server component. Between the two parts, the communication is implemented according to the chosen mechanism. The internal code of a connector must deal with different aspects depending on the type of invocation that it supports:

• Implementing the marshalling and unmarshalling of the invocation parameters and the results, in the case of connecting components written in different programming languages.
• Providing the threads required for the execution of remote, or local asynchronous, invocations. In these cases, the connector uses the ThreadingService to obtain the threads required to execute the invocations and immediately returns the control flow to the invoking thread.
• In the case of synchronous remote invocations, providing the synchronization mechanisms required to suspend the client thread until the end of the invocation execution.
Depending on the communication protocol used, the connector may directly use the mechanisms provided by the protocol, or the SynchronizationService. If the synchronous invocation is local, there is no need to implement this functionality with external mechanisms, since the calling thread directly executes the invocation.
• Managing the transmission and dispatching of messages through the corresponding communication network in the case of remote invocations.

3.2.3. Management of scheduling parameters

The business code of a component is written independently of the application in which it is going to be executed and, more precisely, independently of the level of concurrency of the application. So, when a component-based real-time application is designed, some kind of assignment policy must be used to define the scheduling parameters with which each service invoked on a component must be executed. In other words, as the business code of the services provided by an RT-CCM component is passive, it will always be executed by some external thread (created by another component, or by a connector), which should receive an appropriate scheduling parameter value (priority, deadline, etc.) in order to meet the timing requirements of the application. Examples of this kind of assignment policy are the ones defined in RT-CORBA (OMG, 2005a), ClientPropagated and ServerDeclared, typically used in client-server applications.

In RT-CCM, with the purpose of obtaining a more flexible and optimal scheduling of the applications, the execution of the services and the responses to events of a component can be scheduled according to parameters that depend on the specific transaction, and on the state of the control flow of the transaction, in which they are executed. This kind of assignment policy is called TransactionDefined, and it is more flexible than the previous ones, especially when scheduling distributed real-time systems (Gutiérrez García and González Harbour, 1999). In fact, ClientPropagated and ServerDeclared can be seen as particular cases of it. It is especially suitable when timing requirements are associated with intermediate points of the transactions, since in that case the scheduling parameters must be assigned independently for each activity executed inside them. For example, two activities corresponding to invocations of the same service of a component, one executed before the timing requirement and the other one after it, might need to be executed at different priorities. This contrasts with the general case, in which the timing requirements are assigned to the finalization of the transaction, and hence all the activities can execute with the same scheduling parameter. RT-CCM supports this approach while obeying the opacity requirement, since the management of the scheduling parameters is carried out by the connectors, and is therefore totally invisible to the business code.
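As an illustrative sketch (not the actual RT-CCM machinery, which the following paragraphs describe in terms of connectors and StimulusId values), a TransactionDefined assignment amounts to a connector that selects the priority of each invocation from a table produced by the schedulability analysis; all names below are hypothetical:

```python
class TransactionDefinedConnector:
    """Assigns the scheduling parameter per invocation, so the same
    service can run at different priorities within one transaction."""
    def __init__(self, server, priority_table, set_priority):
        self._server = server
        # Maps a transaction-state identifier to the priority with which
        # the invocation must execute (output of the analysis).
        self._table = priority_table
        self._set_priority = set_priority   # platform-specific hook

    def invoke(self, state_id, method, *args):
        # The priority change happens in the connector, so it stays
        # invisible to the business code of client and server.
        self._set_priority(self._table[state_id])
        return getattr(self._server, method)(*args)
```

With a table such as `{"before_req": 20, "after_req": 5}`, two invocations of the same server method execute at priorities 20 and 5, mirroring the case of a timing requirement attached to an intermediate point of the transaction.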
The applied strategy is based on the usage of the StimulusId, a value which is propagated through the entire chain of invocations made inside a transaction. The value of StimulusId identifies each state of the execution flow inside a transaction, i.e. each invocation of a component service made as part of the execution flow. The analysis of the real-time model of the application obtains the optimal scheduling parameters that must be associated with each invocation, and they are mapped to the corresponding StimulusId values. As part of its scheduling configuration, each component instance receives an initial StimulusId value for each of the transactions that it triggers. The transactions are initiated through the activation ports, so an initial StimulusId value is assigned to each activation port, and therefore to the corresponding thread that starts the transaction. From there on, each time an invocation of a component service is made inside the transaction flow, the corresponding connector maps the current value of StimulusId (inputId) to a new value (outputId), which uniquely identifies the current invocation. Fig. 7 illustrates this mechanism. The Client instance starts the execution of a transaction in the system through its actPort1
[Fig. 7 shows a transaction started by the Client through its actPort1 port: it invokes OperA() on InstA, which invokes OperB() on InstB and OperC() on InstC; OperC() then invokes OperB() on InstB again. Each invocation is annotated with its StimulusId and priority (the SchedulingService table maps, e.g., StimulusId 1 and 2 to priority 20 and StimulusId 3 to priority 15), and the connector between InstA and InstB holds an inputId-to-outputId mapping table for each method.]
Fig. 7. StimulusId and SchedulingService: management of scheduling parameters in RT-CCM.
port. As part of it, it executes OperA() on InstA, which in turn executes OperB() on InstB and OperC() on InstC. OperC() invokes OperB() on InstB again. The figure shows the StimulusId associated with each invocation, together with its corresponding priority. Using this approach, the two invocations of OperB() made on InstB may be executed at a different priority inside the same transaction. To apply the appropriate StimulusId transformations, each connector receives a configuration table like the one shown in Fig. 7 for the connector between InstA and InstB. For each method implemented, the table contains the possible StimulusId mappings (inputId to outputId). Each time an invocation is received, the connector maps the value and, based on the new one (outputId), uses the SchedulingService to assign the correct scheduling parameter value to the executing thread. The SchedulingService is configured with all the possible mappings between StimulusId values and scheduling parameters (priorities in this case), which are obtained from the analysis of the real-time model of the application. As happens with the rest of the RT-CCM services, the SchedulingService provides a standard interface, which abstracts the underlying scheduling policy. It provides only a bind() method, which receives the current value of StimulusId as its only argument, and changes the scheduling parameter of the invoking thread according to it.

When a real-time application is executed on a distributed platform, an essential aspect to deal with is the scheduling of the messages that are sent through the communication network. From the analysis of the real-time model of the application, the priorities associated with each message sent during execution are obtained. At run-time, they are also managed by the connectors according to the StimulusId, as is shown in the example in Fig. 8 (as a simplification, only an asynchronous invocation is shown):

• When an invocation is received in a connector proxy, it does not transform the current StimulusId value. It only obtains the mapped value (outId), and uses it to get the scheduling parameter with which the message must be sent (sentPrio). This value is obtained from the CommunicationSchedulingService, which, similarly to the SchedulingService, maintains a mapping between StimulusId values and communication scheduling parameter values (priorities in the example).
• Then, the proxy generates the message to be sent, including the mapped StimulusId value (outId) as part of the message.
• The message is sent with the corresponding scheduling parameter (sentPrio in this case).
• The connector servant receives the message, decodes the StimulusId value and, using the SchedulingService of the remote node, assigns the corresponding scheduling parameter to the thread that will execute the invocation.

In the case of a synchronous invocation, once it has been executed, the value of StimulusId is used again to obtain, from the CommunicationSchedulingService of the remote node, the scheduling parameter with which the return message must be sent. When the return message is received in the proxy, the execution continues with the current StimulusId and scheduling parameter values, since they were not changed when the invocation was received in the connector.

4. Real-time component-based modelling

In our approach, the scheduling configuration that must be assigned in the deployment plan of an RT-CCM application is extracted from the analysis of its real-time model. Due to the opacity required in component management, the application designer has no means of formulating the real-time model of the complete application, since he does not know how the internal code of the components behaves. He needs a modelling methodology that allows him to build the real-time model of an application by composition of the real-time models of the elements that form it, which should have been formulated by the corresponding designer of each element (component or platform resource). With that purpose, RT-CCM is used together with CBS-MAST, an extension of the MAST modelling methodology that adapts it to component-based semantics. The modelling primitives used in MAST are almost the same as the ones proposed in the SAM (Schedulability Analysis Modelling) chapter of OMG's MARTE (UML Profile for Modelling and Analysis of Real-time and Embedded systems) profile (OMG, 2009). They correspond to low-level modelling entities, such as Processor, SchedulableResource, MutualExclusionResource or EndToEndFlow. CBS-MAST adds the capacity to parameterize them and, besides, defines higher-level modelling primitives that directly map the elements identified and managed in the design of a component-based application: SoftwareComponent, SoftwareConnector, ProcessingNode, and CommunicationNetwork. These elements represent container modelling elements, which group and relate the low-level modelling primitives used to describe the internal temporal behaviour of the elements that they model.
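The relation between a high-level container element and the low-level SAM-like primitives it groups can be sketched with a much-simplified data model; the real CBS-MAST metamodel is far richer, and all field choices below are assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SchedulableResource:          # low-level primitive: a thread
    name: str
    sched_param: int                # e.g. a fixed priority

@dataclass
class MutualExclusionResource:      # low-level primitive: a mutex
    name: str
    ceiling: int                    # priority ceiling

@dataclass
class EndToEndFlow:                 # low-level primitive: a transaction
    name: str
    period: float
    deadline: float
    steps: List[dict] = field(default_factory=list)

@dataclass
class SoftwareComponent:
    """High-level container element: groups and relates the low-level
    primitives that describe the component's temporal behaviour."""
    name: str
    flows: List[EndToEndFlow] = field(default_factory=list)
    threads: List[SchedulableResource] = field(default_factory=list)
    mutexes: List[MutualExclusionResource] = field(default_factory=list)
```

A ScadaEngine-like component would then group two EndToEndFlow elements (sampling and logging), two SchedulableResource elements, and one MutualExclusionResource, which is exactly the kind of grouping Fig. 9 represents graphically.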
With CBS-MAST, the timing behaviour of these container modelling elements is formulated in a reusable way based on the model descriptor/model instance concepts: • The real-time model of a reusable module (component or element of the platform) is formulated as a parameterized descriptor, which contains the complete information about its temporal behaviour that is required to evaluate the timing behaviour of any application in which the module may be used. There can be two types of parameters in a descriptor: - References to the real-time models of other software or hardware modules that interact with the one being modelled. - Characteristics that change for each instance of the module according to the way in which it is used in a system. • Later, when an application is designed as an assembly of component instances deployed in an execution platform, its final and
Fig. 8. Management of scheduling parameters in distributed invocations.
complete real-time model can be generated by composing the model instances of each of the modules (both software components and elements of the platform) that make up the system. A real-time model instance is a final and complete analysable model that describes the timing behaviour of an instance of a module in the context of a specific application. The model instance is generated taking as a reference the corresponding model descriptor, and resolving all its parameters with specific values and references to other model instances, according to the application context.
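The two-step strategy can be sketched with a toy resolution mechanism, under the assumption that a descriptor body is a flat mapping whose values may be symbolic parameter names; the actual RT-D&C/CBS-MAST tooling is far more elaborate:

```python
class ModelDescriptor:
    """Parameterized real-time model of a reusable module."""
    def __init__(self, name, parameters, body):
        self.name = name
        self.parameters = parameters   # e.g. ["HOST", "samplingThPeriod"]
        self.body = body               # values may reference parameters

    def instantiate(self, bindings):
        """Produce a model instance: every parameter must be resolved
        with a value or a reference to another model instance."""
        missing = [p for p in self.parameters if p not in bindings]
        if missing:
            raise ValueError(f"unresolved parameters: {missing}")
        # Replace each symbolic reference with its bound value;
        # literal entries are kept unchanged.
        return {k: bindings.get(v, v) for k, v in self.body.items()}
```

Instantiation with `{"HOST": "node1.Scheduler", "samplingThPeriod": 0.01}` resolves the symbolic entries of the descriptor against the application context, while an incomplete binding is rejected, which reflects that a model instance must be final and complete before analysis.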
4.1. Real-time model of a reusable software component

The usage of the descriptor/instance strategy solves the problems that appear when trying to formulate the temporal behaviour model of a reusable software component independently of the application in which it is used. The timing response of a component depends on:

• The processing capacity of the execution platform where the component is executed.
• The timing response of other components whose services are used to implement its functionality.
• The availability of resources in the platform, which in turn depends on the rest of the applications that execute concurrently with the modelled application (workload).

With the purpose of generating the timing model of any application that uses a component, its model descriptor must include:

• Temporal behaviour models of the services offered by the component through its facets. Each service is modelled by describing the set of Steps that are executed in response to it, i.e. the resource usages caused by its invocation. Some steps correspond to internal code executed inside the component, so they are modelled by specifying their corresponding worst, best and average execution times, and their accesses to mutually exclusive resources. Other steps represent invocations made through the receptacles, so they are modelled by including a reference to the service invoked, which will be resolved in the corresponding model instance.
• Models of the Transactions (responses to events) that can be initiated in the component, in the case of active components. The model of an end-to-end flow transaction is also defined as a set of steps related by control flow, but it also includes the triggering pattern and the associated timing requirements.
• Models of the synchronization (mutex, shared resources, etc.) and concurrency (threads, processes, etc.) resources required by the component to manage concurrent executions of its internal code.

As an example, Fig. 9 represents the real-time model descriptor of the AdaScadaEngine implementation. It includes the following elements:
• Models of the getBufferedData and getLastLoggedMssg real-time services, offered through the controlPort facet.
• Descriptors of the SamplingTrans and LoggingTrans periodic transactions executed internally by the component, formulated as EndToEndFlow elements. They correspond to the responses to events identified at the component interface level, and they model the code executed by the corresponding update() methods of the samplingTh and loggingTh activation ports, respectively. Each transaction defines the generation pattern of the triggering events (periodic in both cases), the sequence of Steps executed in response to those triggering events, the SchedulableResource in which these steps are scheduled (samplingTh and loggingTh, respectively), and the timing requirements that must be fulfilled in each execution.
• SchedulableResource elements that model the samplingTh and loggingTh threads obtained through the corresponding activation ports. The model states that they are scheduled by the scheduler of the processor in which the component is installed (whose model is referenced through the HOST reference), and that they are managed according to a fixed-priority preemptive scheduling policy. The MutualExclusionResource element that models the dataMtx mutex is also included; it is assigned a priority ceiling policy, and it is required by some of the services and internal steps of the component.
• Finally, the descriptor includes some internal operations that are used to describe the transaction steps, such as generateLoggedMssg or registerValue.
The parameters declared in this descriptor (shown in the top left of the figure) have different origins:
[Fig. 9 depicts the descriptor graphically: its parameters (adqPort, samplingThPeriod, samplingThPrty, loggingThPeriod, loggingThPrty, dataMtxCeiling, logPort and HOST), the SamplingTrans and LoggingTrans end-to-end flows with their steps annotated with worst, average and best-case execution times, the samplingTh and loggingTh schedulable resources scheduled by HOST.Scheduler with fixed priorities, and the dataMtx mutual-exclusion resource with its configurable priority ceiling.]
Fig. 9. CBS-MAST real-time model of the AdaScadaEngine implementation.
• The HOST parameter is common to the descriptor of any software component; it references the model of the processing node in which the component is executed.
• Resource usages that consist of invoking services through the component receptacles cannot be specified at component development time, so the descriptor includes only a reference to the port and service invoked. Only when the specific connected component is known can the actual resource usage be incorporated into the model. This is the case, for example, of the invocation of the log service (made through the logPort receptacle) that is executed as part of the LoggingTrans transaction flow, or the invocations of aiReadCode included in SamplingTrans. There is one invocation of aiReadCode for each supervised magnitude, each one made through the corresponding adqPort receptacle (remember that adqPort is multiple).
• Some characteristics of the real-time model are declared as parameters in order to adapt the temporal behaviour of the component to the specific situation in which it is instantiated. In the example, samplingThPeriod/loggingThPeriod and samplingThPrty/loggingThPrty are parameters of the real-time model, since different values can be assigned to them according to the specific application context.
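To make the idea of a parameterized model descriptor concrete, the following sketch (in Python, with parameter names borrowed from the AdaScadaEngine descriptor of Fig. 9; the class layout is our own simplification, not the CBS-MAST format) shows how declared parameters can be bound to context-specific values at deployment time:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One internal operation of the model, with normalized execution times (s)."""
    name: str
    wcet: float  # worst-case
    acet: float  # average-case
    bcet: float  # best-case

@dataclass
class ComponentRTModel:
    """Simplified parameterized real-time model descriptor."""
    parameters: dict                       # declared parameters; unbound = None
    steps: list = field(default_factory=list)

    def bind(self, **values):
        """Assign context-specific values to declared parameters only."""
        unknown = set(values) - set(self.parameters)
        if unknown:
            raise KeyError(f"undeclared parameters: {sorted(unknown)}")
        self.parameters.update(values)
        return self

# Parameters and the updateStatistics times mirror the descriptor in Fig. 9:
engine_model = ComponentRTModel(
    parameters={"samplingThPeriod": None, "samplingThPrty": None,
                "loggingThPeriod": None, "loggingThPrty": None,
                "dataMtxCeiling": None, "HOST": None},
    steps=[Step("updateStatistics", wcet=2.4e-5, acet=1.7e-5, bcet=0.8e-5)],
)
engine_model.bind(samplingThPeriod=0.01, loggingThPeriod=0.1, HOST="CentralProc")
```

Binding an undeclared parameter raises an error, mimicking the opacity of the descriptor: only what the component designer exposed can be configured.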
4.2. Real-time model of the execution platform

To achieve reusability of the components' temporal behaviour models, the characteristics inherent to the components' code must be formulated independently of those that depend on the platform in which the application is executed. The model of a component formulates the amount of processing capacity required to execute each of the steps performed by the component's provided services and responses to events. In CBS-MAST, this is formulated as a normalized execution time, which represents the physical time required to execute the step in a specific processing resource, taken as the reference one. The model of the platform must include the description of the processing capacity supplied by the resources that make it up. In CBS-MAST, the capacity of any processor is characterized by its speedFactor attribute, whose value represents the speed of the processor relative to the speed of the reference one, which has a speedFactor equal to 1.0. Combining both sets of data, the required physical time of each step can be evaluated by dividing the normalized execution time of the step by the speedFactor of the processing resource in which it is executed. This speedFactor value can be accessed through the HOST parameter that every component model descriptor declares by default. Moreover, the services offered by a platform suffer an additional jitter effect, due to its time granularity and inherent overhead, which effectively reduces the resources' capacity. Therefore, the temporal behaviour model of a platform must describe not only the capacity provided by the processing resources to execute the application activities, but also the amount by which this capacity is reduced due to internal processes or overheads (context switches, timer handling, etc.).

5. Real-time design of a component-based application in RT-CCM

Fig. 10 shows the artefacts and agents involved in the development process of an application based on our methodology. First, the Assembler describes the application as an assembly of component instances, connecting and configuring them to provide the required functionality. The Planner, based on the platform description, deploys the application, choosing concrete implementations for each instance and assigning instances to nodes. The result of this stage is the deployment plan, which describes the complete application and the way in which it is planned to be executed. This plan is used as input by the Executor, which, making use of code generation tools, builds and transfers the executable partitions corresponding to each node, and launches the execution. This constitutes the standard D&C process, but the real-time nature of an application gives rise to new artefacts and tools that have to be managed by the agents in the different phases. They constitute the extensions that have been added in RT-D&C.
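As a brief illustration of the platform model described in Section 4.2, the speedFactor adjustment amounts to a single division (a minimal sketch; the function name is ours, not part of CBS-MAST):

```python
def physical_time(normalized_time, speed_factor):
    """Physical execution time of a step on a target processor, given its
    normalized time measured on the reference processor (speedFactor = 1.0)."""
    return normalized_time / speed_factor

# A step with a normalized WCET of 2.4e-5 s takes half that time on a
# processor twice as fast as the reference one (speedFactor = 2.0):
physical_time(2.4e-5, 2.0)  # 1.2e-5 s
```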
The RT-D&C development process is explained in the rest of the section.1
1 Again, it must be noted here that the paper is not focused on RT-D&C, so only a general view of the development process is given, without explaining in detail the formats of the involved descriptors, which may be consulted in (López Martínez et al., 2010).
Fig. 10. Development process of a RT-CCM application according to RT-D&C.
5.1. Configuration phase of a real-time application

When an application has real-time requirements, the structural description of the application as an assembly of components must be complemented with the formulation of its real-time requirements. With that purpose, the Assembler identifies the set of real-time situations of the application. A real-time situation (or analysis context, as it is called in MARTE) is the concept around which the transactional or reactive modelling and analysis approach is organized. It represents an operation mode of the application for which real-time requirements are imposed, and, as a consequence, their fulfilment must be analysed before executing the application. A real-time situation is defined by three elements:

• Reactive model: It enumerates the set of end-to-end transactions executed concurrently in the real-time situation. In a component-based application, these transactions correspond to the ones initiated in the instances of active components. The Assembler's duty consists of identifying how many of them are activated in the described real-time situation.
• Workload: It describes the generation patterns of the event occurrences that trigger the transactions. The triggering pattern of each transaction is defined by its nature (periodic, sporadic, bursty, etc.) and its corresponding parameters (period, minimum interarrival time, jitter, etc.). In the ScadaEngine example, the nature of the triggering patterns of the two transactions involved is fixed (they are both periodic). However, the corresponding periods are declared as business properties of the component, so the Assembler must assign specific values to them.
• Real-time requirements: In the context of each end-to-end transaction, the Assembler must specify the response-time restrictions imposed on the application execution.
Each timing requirement can be characterized by its kind (global or local, soft or hard, deadline or jitter requirement, etc.) and its specific parameters (deadline value, jitter value, maximum miss ratio, etc.). The Assembler formulates this information for each identified real-time situation in a new descriptor that has been defined in RT-D&C. An important aspect to remark here is that in our approach the specification of an application has a reactive nature, i.e. the specification identifies the responses to events that the application
must implement. Thus, unlike in the standard case, at the beginning the Assembler must choose the components suitable for the application because they implement the responses to events required in the application, i.e. the appropriate transactions. Then, he may need to choose additional components to satisfy the connectivity requirements of the first ones. As a result, the Assembler builds the application assuring that it implements the required functional specification and responses to events. However, there is no guarantee that the timing requirements will be met, since the timing behaviour of the application depends on the execution platform and the specific implementations chosen for each component. It can only be guaranteed that a real-time model of the application can be obtained, by checking the composability of the real-time models of the components involved. From the real-time perspective, two components can be connected if their corresponding real-time models are composable, i.e. if the server component provides a real-time model for all the services invoked by the client component. Appropriate metadata have been added to the RT-D&C component interface descriptors for that purpose. This concept of composability differs from that of some works, such as those that rely on interface-based design (Wang et al., 2005; Chakraborty et al., 2006; Henzinger and Matic, 2006), where the timing behaviour of each port of a component is abstracted away into an interface, which is formulated according to an assume-guarantee reasoning (Chakraborty et al., 2006; Henzinger and Matic, 2006): each component guarantees a maximum latency (or deadline) in the execution of the port service assuming certain invocation patterns and resource availability. Compositional algebraic analysis is then applied at the interface level, so that if all the assumptions are fulfilled for a set of connected components, each of them guarantees its timing behaviour, and so does the global system.
In our case, the timing behaviour of the system will be certified in the following step, where the concrete implementations and the execution platform are known (and so, their real-time models).
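The composability check described above reduces to verifying that the server's real-time model covers every service the client invokes through the connected receptacle. A minimal sketch follows (service names beyond those appearing in the paper, such as aoWriteCode or flush, are hypothetical):

```python
def composable(client_required_services, server_model_services):
    """Two connected components are composable, in the real-time sense,
    if the server's real-time model provides an entry for every service
    the client invokes through the connected receptacle."""
    missing = set(client_required_services) - set(server_model_services)
    return len(missing) == 0

# ScadaEngine requires aiReadCode through adqPort and log through logPort:
composable({"aiReadCode"}, {"aiReadCode", "aoWriteCode"})  # IOCard-like server
composable({"log"}, {"log"})                               # Logger-like server
composable({"log", "flush"}, {"log"})                      # flush not modelled
```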
5.2. Planning phase

In the conventional D&C case, the application is completely configured with the business configuration values assigned by the Assembler, which are directly translated by the Planner to the deployment plan descriptor. The Planner is also responsible for the assignment of instances to nodes. However, as was explained before, in the case of a real-time application, new configuration parameters appear at component implementation level: those that
Fig. 11. Real-time design process of a RT-CCM application.
control the scheduling of the activities that are executed when a component is included in an application. The Assembler cannot assign values to these parameters. They are specific to the implementations, they depend on the platform, and a global analysis of the application is required to calculate their values; therefore, the Planner is the agent responsible for the scheduling configuration, as all this information is accessible to him. The Planner must know how to evaluate, and incorporate into the deployment plan, the values that must be assigned to the scheduling properties of the component instances and the platform resources. For that purpose, RT-D&C extends both the descriptors of component implementations, with the capacity to declare the configurable scheduling parameters, and the deployment plan descriptor, with the capacity to assign values to them. This scheduling configuration process in fact constitutes the real-time design process of a component-based application. Even in the case of a simple application, this is a highly complex process, so it cannot usually be done by hand. To deal with it, the Planner must be able to build the reactive model that describes the temporal behaviour of the application in each specific real-time situation, so that this model can be analysed by real-time tools to obtain the appropriate scheduling. The real-time design process, represented in Fig. 11, is iterative, model-driven and assisted by tools. The Planner starts each iteration with a deployment plan proposal, from which a real-time model is generated. The model is built by identifying the component instances and platform resources that form the system, and declaring and composing their corresponding CBS-MAST real-time model instances.
All the references and parameters of the corresponding descriptors are resolved according to the information provided in the deployment plan:

• The HOST parameter of each component model descriptor is resolved according to the assignment of component instances to nodes.
• The references to external services are resolved according to the connections between component instances.
• Some parameters of the real-time model descriptors can receive values directly from the business configuration of the components. For example, the loggingThPeriod parameter of the model descriptor of an AdaScadaEngine instance is mapped to its corresponding loggingPeriod configuration property (this kind of assignment is declared in the component implementation descriptor, so that it is automatically applied by the model composition tool).

Although the generation of the real-time model of the application is guided by the metadata and the parameterized models included in the component packages, the result is a conventional real-time model that can be analysed with standard real-time design and analysis tools; in our case, the tools provided by the MAST
environment. The Planner can assign default values to the scheduling properties in the deployment plan proposal, since the generated model is processed by specialized tools that calculate the set of optimal values that must be assigned to the priorities (or other scheduling parameters) in order to make the application schedulable. If the application turns out to be schedulable, the scheduling configuration values are assigned to the deployment plan, and the process finishes. This assignment is carried out by a tool specific to the underlying component technology, RT-CCM in our case. It takes as inputs the deployment plan of the application and the schedulable model obtained from the design tools, and extracts the configuration that must be assigned to the component instances and the resources of the platform in order to execute the application satisfying its timing requirements. In the case of the RT-CCM technology, when an underlying fixed-priority scheduling policy is used, the results obtained from the real-time design process are:

• The priority ceilings that must be assigned to each mutex or condition variable (each condition variable has an internal associated mutex) required by a component instance.
• The priority of execution for each invocation received in a component service in the context of the transaction in which the invocation is made. In fact, the tool obtains the StimulusId value that uniquely identifies each invocation received in a component, and its corresponding priority.
• The StimulusId values that the active components must use to identify each transaction that is triggered on them, and their corresponding priorities.
• The StimulusId and priority of each of the messages sent through the communication network.
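For the first of these results, a common rule for computing ceilings under an immediate-ceiling protocol can be sketched as follows (an illustration only; the actual ceilings are produced by the MAST analysis tools as part of the global design process, and the sensorA.aoMtx usage shown is an assumption for the example):

```python
def priority_ceilings(mutex_users, default=1):
    """Immediate-ceiling priority protocol: each mutex is assigned the
    highest priority among the activities that lock it; a mutex locked
    by no activity keeps the default ceiling."""
    return {mtx: max(prios, default=default)
            for mtx, prios in mutex_users.items()}

# dataMtx is locked by activities of the sampling (30), logging (20) and
# display (10) transactions, so its ceiling is the maximum of the three:
ceilings = priority_ceilings({
    "engine.dataMtx": [30, 20, 10],
    "sensorA.aoMtx": [30],
    "sensorA.aiMtx": [],          # not used in this application
})
# ceilings["engine.dataMtx"] == 30, ceilings["sensorA.aiMtx"] == 1
```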
Thanks to the extensions defined in RT-D&C for deployment plan descriptors, this information is formulated in a modular way in the deployment plan, associated with the corresponding component instances, connectors, or RT-CCM framework services:

• Each declared component instance receives the following configuration data:
  - For each synchronization port, its corresponding priority ceiling.
  - For each activation port, the initial StimulusId with which its execution must start.
• Each connection between component instances (i.e. each connector) receives:
  - The set of StimulusId mappings that it must apply, organized per provided service.
• Each node receives:
  - The mappings between the StimulusId and priority values corresponding to invocations executed on it. They are used to configure the SchedulingService.
  - The mappings between the StimulusId and priority values corresponding to the messages sent from it. They are used to configure the CommunicationSchedulingService.
  - The maximum number of threads required by the system, which is used to configure the ThreadingService.

5.3. Launching phase

Based on the information provided in the deployment plan, the launching tool:

• Generates the code of the containers, based on the metadata provided by the RT-D&C component descriptors.
• Generates the code of the corresponding connectors, based on the configuration of the connections among components.
• Instantiates the components, applying the configuration values assigned in the deployment plan (both the business and the scheduling ones).
• Instantiates the connectors with their corresponding scheduling configuration.
• Connects the component and connector instances.
• Configures the platform services in each node.
• Starts the execution of the component instances, and thus, the execution of the application itself.

With this strategy, the RT-CCM technology satisfies the objective of configuring the scheduling of an application while respecting the opacity of the components.

6. Case study: ScadaExample application

This section shows an example of a component-based real-time application designed according to the approach proposed in this paper. It is called ScadaExample, and its functionality consists in supervising a set of analogue environment magnitudes, and storing statistical data about them in a logger. The operators can choose the magnitudes to supervise through the keyboard, and they can also choose one of them to be periodically refreshed on the monitor. This is a typical example of an application where a component-based strategy can provide a great benefit. It involves several different application domains, such as logging and analogue data acquisition. Most of the components used in the application are taken from a repository of generic reusable components, i.e. components that have not been designed specifically for this SCADA application. The application designer only has to choose the appropriate components to define the assembly, but he is not involved in writing their code, which may require deep knowledge of very different issues. Neither is he responsible for formulating the real-time models of the components, which is addressed by each component designer at the same time the code of each component is written.

6.1. ScadaExample configuration phase

According to the approach presented in this paper, the functionality of an application is specified in a reactive way, as the set of responses to external or timed events that are executed concurrently in it. Four different events are processed by the ScadaExample application:

• The periodic timed event that triggers the sampling and statistical processing of the magnitudes. The sampling period is one of the configuration parameters of the application.
• The periodic timed event that triggers the registration of data about the supervised magnitudes in the logger. The logging period is also a configuration parameter of the application.
• The periodic timed event that triggers the activity in charge of showing on the monitor the last set of data sent to the logger for one of the magnitudes. The magnitude is chosen by the operator and can be changed during the application execution by pressing the appropriate key on the keyboard. The display period is also configurable.
• The sporadic external event that corresponds to the command introduced through the keyboard by the operator to change the displayed magnitude.
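These four responses and their triggering patterns (using the configuration values given later in this section: 10 ms, 100 ms, 1 s, and a 0.5 s minimum interarrival time) could be captured in a workload description such as the following sketch (the dictionary layout is ours, not the RT-D&C descriptor format):

```python
# Workload descriptors for the four ScadaExample responses to events.
# Times are in seconds; the structure is an illustrative simplification.
workload = [
    {"event": "samplingEvent", "kind": "periodic", "period": 0.01},
    {"event": "loggingEvent",  "kind": "periodic", "period": 0.1},
    {"event": "displayEvent",  "kind": "periodic", "period": 1.0},
    {"event": "commandEvent",  "kind": "sporadic", "minInterarrival": 0.5},
]
```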
Based on the previous specification and on the interfaces of the components available in the repository, the Assembler defines an assembly of component instances that satisfies the required functionality (responses to events). The first component chosen is the ScadaEngine component, whose external interface was introduced in Section 3. It is chosen because it implements two of the required responses (the first two). The connectivity requirements of the ScadaEngine component (adqPort and logPort receptacles) must be satisfied, so the Assembler searches the repository again for components that provide the appropriate functionality. As a consequence, the IOCard and Logger components are added to the assembly. The IOCard component is a leaf component that manages input/output acquisition cards, which in this case are used to acquire the values of the magnitudes. The Logger component is also a leaf component with the capacity to register data with timing marks in a permanent database. Unlike the ScadaEngine component, which is typical of the SCADA domain, both the Logger and IOCard components are generic and reusable in different kinds of applications. To complete the three-tier architecture of the application, the Assembler orders (from a domain expert) the development of the ScadaManager component. It implements the specific functionality and the user interface of this particular application, processing the commands introduced by the operator and periodically displaying the results of one of the magnitudes (it implements the other two responses to events required in the application). Thus, it is the only component that is not reusable. The ScadaEngine component, although characteristic of the SCADA domain, is also reusable, as was explained in Section 3, since it is independent of the storing and logging mechanisms, and can be configured to control a variable number of magnitudes.

Fig. 12 shows the final architecture of the specific ScadaExample application. It is formed of five component instances: manager, of ScadaManager type; engine, of ScadaEngine type; register, of Logger type; and two IOCard instances, sensorA and sensorB. There are two instances of IOCard since the application will supervise three magnitudes, one acquired through sensorB and the other two through sensorA. The sampling period is 10 ms, the logging period is 100 ms and the display period is 1 s. As shown in Fig. 12, these values are assigned to the configuration parameters of the corresponding instances. Besides defining the structural description of the application, the Assembler formulates the real-time situations that will be analysed, using the RT-D&C descriptor defined with that aim. Fig. 13 conceptually shows the information included in that descriptor, which describes the real-time situation to be analysed in this case. Four transactions are included, one for each response. Each transaction instance references the component instance in which it is initiated (host), and the type of transaction it corresponds to (instanceOf) among those supported by the corresponding instance. It also receives values for all the parameters declared for that transaction type in the component interface. In this case, the three periodic transactions receive a value of deadline equal to the period (the deadline was defined as a parameter of the corresponding descriptors). It is not necessary to assign values to their triggering periods, since they are internally mapped onto the values of the corresponding configuration properties (loggingPeriod, samplingPeriod and displayPeriod). In the case of the sporadic transaction, commandTrans, the minimum interarrival time between two consecutive triggering events must be defined (0.5 s in this case). Furthermore, the SamplingTrans instance receives the values of the ports through which each supervised magnitude is acquired. This assignment is required since the timing behaviour of the aiReadCode service, which is invoked to acquire the values, may vary among IOCard instances, depending on the implementation chosen for each one (see the descriptor of the transaction in Fig. 9).

6.2. ScadaExample planning phase

Fig. 12 also shows the deployment defined for the ScadaExample application, in which the execution platform is formed of
Fig. 12. Final architecture and business configuration of the ScadaExample application.
a 1.5 GHz PC node executing MaRTE OS, called CentralProc. The Planner declares the corresponding instances and assigns them to CentralProc in the deployment plan. He gives default values to the scheduling parameters of the component instances, since they will be calculated by the real-time design tools. However, in order to obtain the real-time model of the application, some other parameters must be adapted to the current situation, for example the speedFactor of the CentralProc real-time model instance. The processor used in this case has double the capacity of the processor taken as reference for specifying the normalized execution times in the real-time models of the components (750 MHz), so a value of 2.0 must be assigned to the speedFactor real-time property of the node in order to adapt its model descriptor to the current situation (this value is assigned in the deployment plan). Once the deployment plan has been formulated and properly configured, the Planner starts the real-time design process. Fig. 14 shows the final MAST model that is derived from the previous deployment. The flat control flow of the four transactions involved is shown. As can be seen in the figure, this model has default values (equal to 1) assigned to all the scheduling parameters, which in MAST correspond to the priority attributes of the SchedulableResource elements, and the ceiling attributes of the MutualExclusionResource elements. MAST provides a set of tools for real-time analysis, which include an optimal priority assignment tool that automatically obtains appropriate values for these attributes (if it finds a schedulable solution for the
ScadaExampleRTSituation : RTSituation
  samplingTransaction: host = engine; instanceOf = SamplingTrans; samplingDeadline = 0.01; adqPorts = {sensorA.analogPort, sensorA.analogPort, sensorB.analogPort}
  loggingTransaction: host = engine; instanceOf = LoggingTrans; loggingDeadline = 0.1
  commandTransaction: host = manager; instanceOf = CommandTrans; commandInterarrival = 0.5
  displayTransaction: host = manager; instanceOf = DisplayTrans; displayDeadline = 1.0
Fig. 13. ScadaExample real-time situation declaration.
system analysed). These values will then be mapped onto the corresponding ones in the deployment plan. For example, the ceiling attribute of the engine.dataMtx element of the MAST model corresponds to the ceiling of the dataMtx synchronization port of the engine instance, and the priority attribute of the engine.samplingTh MAST element must be applied to the thread that starts the execution of the samplingTh activation port of the engine instance. Tables 1 and 2 show the scheduling configuration finally obtained from the analysis of the ScadaExample model shown in Fig. 14. Table 1 shows the StimulusId and priority values that must be applied per transaction and service invocation. samplingTransaction and displayTransaction have deadlines associated with their ends, so the resulting priority is the same for all the activities executed inside them (higher for the activities executed in samplingTransaction, since it has a more restrictive deadline). However, loggingTransaction has a deadline associated with an intermediate state, so the register.log invocation that is executed after it receives a lower priority than the activities executed previously. From these tables, the configuration of the SchedulingService, the connectors and the active component instances is obtained (as was explained in Section 5). Table 2 shows the values obtained for the priority ceilings of the mutexes involved in the transactions, which are used to complete the configuration of the corresponding component instances. The ceilings of the aiMtx ports of both the sensorA and sensorB instances still have their default value (1), since they are not used in this application. As a summary, Table 3 shows the different design decisions taken during the development process of the ScadaExample application, distinguishing which agent is responsible for each of them.
The Assembler assigns only the connection and functional (business) configuration properties, whereas the Planner assigns the properties related to deployment and communication aspects. The last column shows some of the values that are automatically obtained by the analysis tools, and assigned in the deployment plan (together with the values shown in Tables 1 and 2).
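The ordering principle behind the priorities in Table 1 (shorter deadline, higher priority) can be sketched as a deadline-monotonic assignment. This is an illustration of the ordering only; the exact values in Table 1 come from MAST's optimal priority assignment tool, not from this simple rule:

```python
def deadline_monotonic(transactions, max_priority=30):
    """Assign fixed priorities so that the transaction with the shortest
    deadline gets the highest priority (deadline-monotonic ordering)."""
    ordered = sorted(transactions, key=lambda t: t[1])  # sort by deadline
    return {name: max_priority - i for i, (name, _) in enumerate(ordered)}

# Deadlines of the three periodic ScadaExample transactions (seconds):
prios = deadline_monotonic([
    ("samplingTransaction", 0.01),
    ("loggingTransaction", 0.1),
    ("displayTransaction", 1.0),
])
# samplingTransaction ends up with the highest priority, consistent with
# the relative ordering shown in Table 1.
```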
7. Discussion and conclusions

The methodology proposed in the paper provides a design strategy for hard real-time component-based applications that makes the opacity and reusability typical of components compatible with the capacity of building a real-time model of the applications, which can be used to analyse their schedulability and to obtain
Fig. 14. ScadaExample final MAST model.
Table 1
Assignment of StimulusId and priorities of ScadaExample.

Transaction            Operation                     Input StimId    Output StimId    Priority
samplingTransaction    engine.samplingTh.update      –               10               30
                       sensorA.aiReadCode            10              11               30
                       sensorB.aiReadCode            10              12               30
loggingTransaction     engine.loggingTh.update       –               20               20
                       register.log                  20              21               8
displayTransaction     manager.displayTh.update      –               30               10
                       engine.getLastLoggedMssg      30              31               10
commandTransaction     manager.keyboardTh.run        –               40               4
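The priority assignment of Table 1 can be pictured as a simple lookup that the SchedulingService performs at run time. The following sketch is illustrative Python, not RT-CCM code; the class and method names are our assumptions, and only the StimulusId-to-priority values are taken from Table 1.

```python
# Illustrative sketch: a SchedulingService holding the mapping from
# each StimulusId to the fixed priority obtained from the MAST
# schedulability analysis. Names are assumptions, not RT-CCM APIs.

class SchedulingService:
    def __init__(self, priority_table):
        # priority_table: StimulusId -> fixed priority (Table 1 values)
        self._priorities = priority_table

    def priority_for(self, stimulus_id):
        """Return the priority the invoking thread must run at."""
        return self._priorities[stimulus_id]

# Output StimulusId -> priority values from Table 1 (ScadaExample).
scada_priorities = {
    10: 30, 11: 30, 12: 30,   # samplingTransaction
    20: 20, 21: 8,            # loggingTransaction
    30: 10, 31: 10,           # displayTransaction
    40: 4,                    # commandTransaction
}

service = SchedulingService(scada_priorities)
```

Each invocation that crosses a container carries its StimulusId, so the container can query the service and adjust the executing thread's priority before delegating to the opaque business code.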
the configuration of the components and the underlying framework that guarantees the schedulability of the applications. The methodology relies on RT-CCM, a real-time component technology where a container/component model has been applied to transfer the scheduling management from the opaque business code of
Table 2
Assignment of priority ceilings to the synchronization ports of ScadaExample.

Instance    Synchronization port    Ceiling
engine      dataMtx                 30
sensorA     aoMtx                   30
            aiMtx                   1
sensorB     aoMtx                   30
            aiMtx                   1
the components to the containers that adapt them to the execution platform. This improves the reusability of components, whose business code does not depend on the concurrency design of the applications where they are used. The methodology associates with each component the metadata related to the timing behaviour of its business code, so that it can be composed with the metadata of the rest of the components that form an application to build the real-time model used to analyse the schedulability of the application. RT-CCM has been defined at a platform-independent model level, i.e. it is independent of the underlying platform, the programming languages, etc. Adapting it to a specific environment requires implementing the framework services and the code generation tools for containers and connectors. Moreover, thanks to the standard interfaces defined for the services, the code of the containers is the same for all execution platforms. It only has to be
Table 3
Design decisions made during ScadaExample development.

Instance    «Property type» property          Assembler             Planner        MAST analysis
manager     «Connection» scadaPort            engine.controlPort
            «Deployment» node                 ?                     CentralProc
            «BusinessConfig» displayPeriod    1 s
            «Scheduling» commandThPrty        ?                     default        4
            «Scheduling» displayThPrty        ?                     default        10
engine      «Connection» adqPort              sensor.analogPort
            «Connection» logPort              register.regPort
            «Deployment» node                 ?                     CentralProc
            «BusinessConfig» samplingPeriod   10 ms
            «BusinessConfig» loggingPeriod    100 ms
            «Scheduling» samplingThPrty       ?                     default        30
            «Scheduling» loggingThPrty        ?                     default        20
            «Scheduling» dataMtxCeiling       ?                     default        30
register    «Deployment» node                 ?                     CentralProc
sensorB     «Deployment» node                 ?                     CentralProc
            «Scheduling» aoMtxCeiling         ?                     default        30
sensorA     «Deployment» node                 ?                     CentralProc
            «Scheduling» aoMtxCeiling         ?                     default        30
adapted to the corresponding programming language. Of course, to be truly suitable for real-time applications, the technology must be executed on top of a real-time execution platform. A version of RT-CCM implemented in Ada 2005, running on an execution platform with the MaRTE OS operating system, has been developed as a proof of concept of the approach. The main characteristic of our approach that differentiates it from other related work is that it relates the low-level runtime behaviour of the application to the model that abstracts this behaviour. This model enables the schedulability of the application to be analysed and the configuration of these run-time mechanisms to be obtained. The modelling methodology (MAST) and the schedulability analysis techniques on which it relies, whose validity is well proven, have not required any transformation or adaptation to be applied to the analysis of RT-CCM applications. CBS-MAST defines the set of rules that must be followed to formulate the real-time models of the components in a reusable manner, which allows application designers to automatically obtain the real-time model of an assembly-based application. Therefore, if the timing behaviour of the components is characterized correctly in their corresponding models, and the execution platform also provides predictable behaviour (i.e. it relies on an RTOS, such as MaRTE OS), the schedulability analysis guarantees that if the application is found schedulable, its timing requirements will effectively be met when the application is executed. Another special characteristic of our approach is the granularity of components, since they can have arbitrary internal complexity. The only restriction imposed on them is that they follow a reactive or transactional approach, so their functionality is defined in terms of the responses to events that they implement. Of course, the approach presents some limitations.
One is the strategy used to model the timing behaviour of the components independently of the platform where they are used. MAST uses the speed factor concept to provide this independence, but there are cases in which such a simple model is not completely reliable. Of course, a component could provide different real-time models, one for each platform on which it may be executed. This would be an easy solution to the problem, but an interesting line of work is to search for other ways of modelling this aspect. From the point of view of distributed applications, the success of the technology depends on the availability of different kinds of connectors for implementing remote invocations. Our tests on a real-time platform have been done with connectors that directly use RT-EP (Martínez and González, 2005), a real-time protocol for Ethernet,
for the communication between remote components, but support for other communication protocols or middleware must be addressed. Adding support for a new kind of connector implies building the tools that generate both the code and the real-time model of the connectors based on the information provided in the deployment plan. Another restriction is that the current implementation of the approach is oriented only to fixed-priority platforms. One of the main lines of future work consists of adapting it to applications that run on open platforms. Traditional real-time design strategies are based on complete knowledge of the workload supported by the execution platform. This is the approach taken in this paper, but in an open platform there is no information about the rest of the applications that will compete with the application being designed. On this kind of platform, the schedulability of the applications can be guaranteed by relying on a resource reservation paradigm (Rajkumar et al., 1998). The execution of real-time applications on such platforms relies on the availability of a resource reservation middleware, such as the one proposed in Aldea et al. (2006), which guarantees a fraction of the processing capacity to each application. Using this approach, each application establishes its requirements in the form of a virtual platform, formed by the set of virtual schedulable resources that describe the capacity the application requires to execute. Each virtual resource is described by a resource reservation contract. When the application is to be installed, it first negotiates with the platform. If the workload can be taken on by the platform, the application is accepted and can be executed with the guarantee that it will fulfil its timing requirements. Adapting the methodology to this approach requires some extensions both in the component technology and in the modelling methodology.
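The negotiation step described above can be pictured with a deliberately simplified, utilization-based admission test. This is a sketch only: real reservation middleware, such as the FSF framework cited above, negotiates far richer contract attributes (deadlines, budgets at several criticality levels, etc.), and the class and function names here are our own.

```python
class Contract:
    """A resource reservation contract: 'budget' units of CPU time
    guaranteed every 'period' units (both in the same time unit)."""
    def __init__(self, budget, period):
        self.budget = budget
        self.period = period

    @property
    def utilization(self):
        return self.budget / self.period

def negotiate(virtual_platform, admitted, capacity=1.0):
    """Accept a virtual platform (a list of contracts) only if its
    total utilization, added to the already admitted contracts, fits
    within the processor capacity. Simplified admission test."""
    demand = sum(c.utilization for c in virtual_platform)
    in_use = sum(c.utilization for c in admitted)
    return demand + in_use <= capacity

# 20% of the processor is already reserved by another application.
admitted = [Contract(budget=2, period=10)]
# A new application asks for 3/10 + 1/20 = 35%: accepted (total 55%).
app = [Contract(budget=3, period=10), Contract(budget=1, period=20)]
```

If the test fails, the application is rejected at installation time rather than being allowed to miss deadlines at run time, which is the essence of the negotiation the text describes.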
Thanks to the internal structure of RT-CCM, adapting it to this scheduling policy is very simple, since it only affects the SchedulingService. Instead of holding a mapping between StimulusId values and priorities, it will hold a mapping between StimulusId values and the specific contracts that must be used to execute each invocation. Based on that mapping, the service will use the resource reservation middleware installed in the platform to bind the invoking thread to the corresponding contract. From the modelling methodology point of view, new modelling elements should be included to support the concept of a virtual platform, and to support the new kind of analysis that must be carried out to obtain the set of contracts an application requires to execute.
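Under this adaptation the SchedulingService changes only in what its table stores: contracts instead of priorities. A minimal sketch follows (illustrative Python; `bind_thread` is a hypothetical placeholder for whatever binding API the reservation middleware actually offers, and the class names are our assumptions):

```python
class ContractSchedulingService:
    """Variant of the SchedulingService for reservation-based
    platforms: the table maps StimulusId -> contract, and the
    invoking thread is bound to that contract before executing."""
    def __init__(self, contract_table, middleware):
        self._contracts = contract_table
        self._middleware = middleware

    def schedule(self, thread_id, stimulus_id):
        contract = self._contracts[stimulus_id]
        # Delegate the binding to the reservation middleware
        # installed in the platform (hypothetical API).
        self._middleware.bind_thread(thread_id, contract)
        return contract

class FakeMiddleware:
    """Stand-in for the reservation middleware, for illustration."""
    def __init__(self):
        self.bindings = {}
    def bind_thread(self, thread_id, contract):
        self.bindings[thread_id] = contract

mw = FakeMiddleware()
service = ContractSchedulingService({10: "sampling-contract"}, mw)
service.schedule(thread_id=1, stimulus_id=10)
```

The point of the sketch is that the containers and the business code are untouched: only the table contents and the call made by the service change between the fixed-priority and the reservation-based variants.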
Acknowledgements

This work has been funded by the EU under contract FP7/NoE/214373 (ArtistDesign) and by the Spanish Ministry of Science and Technology under grant TIN2008-06766-C03-03 (RTMODEL).
References

Åkerholm, M., Carlson, J., Fredriksson, J., Hansson, H., Håkansson, J., Möller, A., Pettersson, P., Tivoli, M., 2007. The SAVE approach to component-based development of vehicular systems. Journal of Systems and Software 80 (5).
Aldea, M., Bernat, G., Broster, V., Burns, A., Dobrin, R., Drake, J.M., Fohler, G., Gai, P., González Harbour, M., Guidi, G., Gutiérrez, J.J., Lennvall, T., Lipari, G., Martínez, J.M., Medina, J.L., Palencia, J.C., Trimarchi, M., 2006. FSF: a real-time scheduling architecture framework. In: Proceedings of the 12th IEEE Real-time and Embedded Technology and Applications Symposium. IEEE Computer Society Press.
Alvarez, J.M., Diaz, M., Lopis, L., Pimentel, E., Troya, J.M., 2003. Integrating schedulability analysis and design techniques in SDL. Real Time Systems Journal 24, 267–302.
Angelov, C., Sierszecki, K., Zhou, F., 2008. A software framework for hard real-time distributed embedded systems. In: Proceedings of the 34th Euromicro Conference on Software Engineering and Advanced Applications. IEEE Computer Society Press.
Bondarev, E., de With, P., Chaudron, M., 2006. Compositional performance analysis of component-based systems on heterogeneous multiprocessor platforms. In: Proceedings of the 32nd Euromicro Conference on Software Engineering and Advanced Applications. IEEE Computer Society.
Chakraborty, S., Liu, Y., Stoimenov, N., Thiele, L., Wandeler, E., 2006. Interface-based rate analysis of embedded systems. In: Proceedings of the 27th IEEE International Real-time Systems Symposium. IEEE Press.
CIAO, 2011. Component Integrated ACE ORB. http://www1.cse.wustl.edu/~schmidt/CIAO.html
Crnkovic, I., 2004. Component-based approach for embedded systems. In: Proceedings of the Ninth International Workshop on Component-Oriented Programming, Norway.
Crnkovic, I., Larsson, S., Chaudron, M., 2006. Component-based development process and component lifecycle. In: Proceedings of the First International Conference on Software Engineering Advances. IEEE Computer Society.
Deng, G., Balasubramanian, J., Otte, W., Schmidt, D.C., Gokhale, A., 2005. DAnCE: a QoS-enabled component deployment and configuration engine. In: Proceedings of the 3rd Working Conference on Component Deployment, Lecture Notes in Computer Science, vol. 3798/2005, pp. 67–82.
Díaz, M., Garrido, D., Llopis, L., Rus, F., Troya, J.M., 2008. UM-RTCOM: an analyzable component model for real-time distributed systems. Journal of Systems and Software 81 (5).
Dubey, A., Karsai, G., Mahadevan, N., 2011. A component model for hard real-time systems: CCM with ARINC-653. Journal of Software: Practice & Experience 41 (12), 1517–1550.
Gokhale, A., Balasubramanian, K., Balasubramanian, J., Krishna, A.S., Edwards, G., Deng, G., Parsons, J., Schmidt, D.C., 2008. Model driven middleware: a new paradigm for developing and provisioning distributed real-time and embedded applications. Journal of Science of Computer Programming, Special Issue on Foundations and Applications of Model Driven Architectures 73 (1), 39–58.
Gomaa, H., 1989. A software design method for distributed real-time applications. Journal of Systems and Software 9 (2), 81–94.
Gomaa, H., 2000. Designing Concurrent, Distributed, and Real-time Applications with UML. Addison-Wesley Professional, USA.
González Harbour, M., Gutiérrez, J.J., Palencia, J.C., Drake, J.M., 2001. MAST: modeling and analysis suite for real-time applications. In: Proceedings of the 13th Euromicro Conference on Real-time Systems. IEEE Computer Society Press, pp. 125–134.
Gutiérrez García, J.J., González Harbour, M., 1999. Prioritizing remote procedure calls in Ada distributed systems. In: ACM Ada Letters XIX (2), pp. 67–72.
Håkansson, J., Carlson, J., Monot, A., Pettersson, P., 2008. Component-based design and analysis of embedded systems with UPPAAL PORT. In: Proceedings of the 6th International Symposium on Automated Technology for Verification and Analysis. Springer-Verlag, pp. 252–257.
Hanninen, K., Maki-Turja, J., Nolin, M., Lindberg, M., Lundback, J., Lundback, K.-L., 2008. The Rubus component model for resource constrained real-time systems. In: Proceedings of the 3rd IEEE International Symposium on Industrial Embedded Systems. IEEE Computer Society Press.
Henzinger, T.A., Matic, S., 2006. An interface algebra for real time components. In: Proceedings of the 12th IEEE Real-time and Embedded Technology and Applications Symposium. IEEE Computer Society Press.
Hissam, S., Ivers, J., Plakosh, D., Wallnau, K.C., 2005. Pin Component Technology (V1.0) and Its C Interface. Technical Note CMU/SEI-2005-TN-001. Software Engineering Institute, Pittsburgh, PA.
Hu, J., Gorappa, S., Colmenares, J.A., Klefstad, R., 2007. Compadres: a lightweight component middleware framework for composing distributed, real-time, embedded systems with real-time Java. In: Proceedings of the ACM/IFIP/USENIX 8th International Middleware Conference. Springer.
Kavimandan, A., Gokhale, A., 2008. Automated middleware QoS configuration techniques for distributed real-time and embedded systems. In: Proceedings of the 14th IEEE Real-time and Embedded Technology and Applications Symposium. IEEE Computer Society Press.
Ke, X., Sierszecki, K., Angelov, C., 2008. COMDES-II: a component-based framework for generative development of distributed real-time control systems. In: Proceedings of the 13th IEEE Conference on Embedded and Real-time Computing Systems and Applications. IEEE Computer Society Press.
Klein, M., Ralya, T., Pollak, B., Obenza, R., González Harbour, M., 1993. A Practitioner's Handbook for Real-time Systems Analysis. Kluwer Academic Publishers, Norwell, USA.
Kopetz, H., Damm, A., Koza, C., Mulazzani, M., Schwabl, W., Senft, C., Zainlinger, R., 1989. Distributed fault-tolerant real-time systems: the Mars approach. IEEE Micro 9 (1), 25–40.
Liu, J.W.S., 2000. Real-time Systems. Prentice Hall Inc., New York.
López, P., Drake, J.M., Medina, J.L., 2006. Real-time modelling of distributed component-based applications. In: Proceedings of the 32nd Euromicro Conference on Software Engineering and Advanced Applications. IEEE Computer Society Press, pp. 92–99.
López Martínez, P., Drake, J.M., Pacheco, P., Medina, J.L., 2008. Ada-CCM: component-based technology for distributed real-time systems. In: Proceedings of the 11th International Symposium on Component-based Software Engineering, LNCS, vol. 5282. Springer, pp. 334–350.
López Martínez, P., Cuevas, C., Drake, J.M., 2010. RT-D&C: deployment specification of real-time component-based applications. In: Proceedings of the 36th Euromicro Conference on Software Engineering and Advanced Applications. IEEE Computer Society Press, pp. 147–155.
MAST, 2011. Web page: http://mast.unican.es
Martínez, J.M., González, M., 2005. RT-EP: a fixed-priority real time communication protocol over standard Ethernet. In: Vardanega, T., Wellings, A.J. (Eds.), Ada-Europe 2005. LNCS, vol. 3555. Springer, pp. 180–195.
Microsoft, 2006. .NET Home Page: http://microsoft.com/net/
Moreno, G.A., Merson, P., 2008. Model-driven performance analysis. In: Proceedings of the 4th International Conference on the Quality of Software Architectures. Springer, Germany.
Nierstrasz, O., Arévalo, G., Ducasse, S., Wuyts, R., Black, A., Müller, P., Zeidler, C., Genssler, T., van den Born, R., 2002. A component model for field devices. In: Proceedings of the First International IFIP/ACM Conference on Component Deployment.
Object Management Group, 2005a. Real-time CORBA Specification. OMG doc. formal/2005-01-04.
Object Management Group, 2005b. UML Profile for Schedulability, Performance, and Time Specification, version 1.1. OMG doc. formal/05-01-02.
Object Management Group, 2006a. CORBA Component Model Specification. OMG doc. formal/06-04-01.
Object Management Group, 2006b. Deployment and Configuration of Component-Based Distributed Applications Specification, version 4.0. OMG doc. formal/06-04-02.
Object Management Group, 2009. UML Profile for Modeling and Analysis of Real-time and Embedded systems (MARTE), version 1.0. OMG doc. formal/2009-11-02.
PACC, 2011. Predictable Assembly from Certifiable Code (PACC). Web page: http://www.sei.cmu.edu/pacc/
Panunzio, M., 2011. Definition, realization and evaluation of a software reference architecture for use in space applications. Ph.D. thesis, University of Padua, Italy.
Panunzio, M., Vardanega, T., 2009. On component-based development and high-integrity real-time systems. In: Proceedings of the 15th International Conference on Embedded and Real-time Computing Systems and Applications. IEEE Press.
Panunzio, M., Vardanega, T., 2010. A component model for on-board software applications. In: Proceedings of the 36th Euromicro Conference on Software Engineering and Advanced Applications. IEEE Press.
Plšek, A., Loiret, F., Malohlava, M., 2012. Component-oriented development for real-time Java. In: Distributed, Embedded and Real-time Java Systems. Springer, pp. 265–292.
Rajkumar, R., Juvva, K., Molano, A., Oikawa, S., 1998. Resource kernels: a resource-centric approach to real-time and multimedia systems. In: Proceedings of the SPIE/ACM Conference on Multimedia Computing and Networking.
Sentilles, S., Håkansson, J., Pettersson, P., Crnkovic, I., 2008a. Save-IDE – an integrated development environment for building predictable component-based embedded systems. In: Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering.
Sentilles, S., Vulgarakis, A., Bures, T., Carlson, J., Crnkovic, I., 2008b. A component model for control-intensive distributed embedded systems. In: Proceedings of the 11th International Symposium on Component Based Software Engineering. Springer, Germany, pp. 310–317.
Subramonian, V., Deng, G., Gill, C., Balasubramanian, J., Shen, L.-J., Otte, W., Schmidt, D.C., Gokhale, A., Wang, N., 2007. The design and performance of component middleware for QoS-enabled deployment and configuration of DRE systems. Journal of Systems and Software, Special Issue on Component-Based Software Engineering of Trustworthy Embedded Systems 80, 668–677.
Sun Microsystems, 2001. Enterprise JavaBeans Specification. Home page: java.sun.com/products/ejb/docs.html
Szyperski, C., 1998. Component Software: Beyond Object-oriented Programming. Addison-Wesley and ACM Press, New York.
van Ommering, R., 2002. The Koala component model. In: Building Reliable Component-based Software Systems. Artech House Publishers, Norwood, USA, pp. 223–236.
Wang, S., Rho, S., Mai, Z., Bettati, R., Zhao, W., 2005. Real-time component-based systems. In: Proceedings of the 11th IEEE Real-time and Embedded Technology and Applications Symposium. IEEE Computer Society Press.

Patricia López Martínez received her PhD degree from the University of Cantabria (Spain) in 2010 with a thesis focused on the development of component-based real-time applications. She works as an associate professor in the Computers and Real-Time group of the University of Cantabria. Her main research interests focus on applying software engineering approaches (model-based development, component-based development, etc.) to real-time systems, mainly in the modelling and specification areas.

Laura Barros received her Bachelor's degree in Electrical Engineering and her M.Sc. in Science, Technology and Computation from the University of Cantabria, Spain, in 2005 and 2008, respectively. She is currently an assistant professor at the Computers and
Real-Time Group at the University of Cantabria. She is working on the last part of her PhD. Her PhD thesis, on implementing hard real-time applications in open platforms based on resource reservation contracts, aims to provide a methodology for the development of this kind of application and to resolve some of the problems posed by platforms where the workload is unknown.

José M. Drake received a PhD in Science from Seville University in 1976. He has been a professor at the Spanish universities of Seville and La Laguna. He is currently a full professor at the University of Cantabria and head of the Computers and Real-Time research group. His primary interests are in software engineering for distributed real-time systems. He has dealt with conceptual aspects, such as the specification and modelling of real-time systems, with new design paradigms, such as component-based systems and resource allocation, and with the specification and implementation of middleware to support these paradigms on distributed systems. In collaboration with industrial companies, he has developed a wide range of projects applying real-time engineering to the nuclear industry and to electrical power control and monitoring.