Repository and meta-data design for efficient component consistency verification

Science of Computer Programming (Elsevier)
Premek Brada and Kamil Jezek
Faculty of Applied Sciences, University of West Bohemia, 30614 Pilsen, Czech Republic

Article history: received 21 April 2013; revised 11 March 2014; accepted 16 April 2014.
Keywords: software component; resource-constrained devices; compatibility; repository; meta-data.

Abstract. Composing a complete component-based application, as well as updating a subset of its components, may involve complex issues of maintaining application consistency. Its verification is a computationally intensive problem, especially for behavioural compatibility or extra-functional property assessment. This poses a serious challenge on resource-constrained devices, which represent an important future computing platform. This work describes an approach that addresses this challenge by separating the task of obtaining the results of component consistency evaluation from the task of using them in deployment and update processes. The first task in our approach is performed by a repository with sufficient computational resources. The results are transformed into rich, remotely accessible meta-data which are easily checked by the component frameworks and application management agents on the devices. Experiences with a prototype implementation called CRCE, as well as validation measurements, suggest that the approach can make application consistency evaluation feasible in resource-constrained scenarios. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

Component-Based Software Engineering (CBSE) rests on the tacit premise that component repositories are available from which prefabricated components can be obtained during application design and composition. Such repositories, where they are used, provide infrastructural support to store components, find them and download their distribution packages. However, the suitability of the components for concrete applications must be determined by clients. In particular, clients must perform these steps: (1) select a component from the repository, (2) verify that it suits the concrete application, (3) if the verification fails, try another component. Such a process fits the original idea of CBSE [1] well, in which a system is first incrementally composed from components, and the completed, tested system is then distributed as a whole. This process is supported by modern component models and frameworks. For instance, OSGi [2] loads the components composing an assembly and verifies their dependencies. If the process finds an incompatibility, the problematic component must be manually replaced and the process repeated. If the components are compatible, the user gets a running, working system.

✩ This work was supported by the Czech Science Foundation under grant number P103/11/1489 “Methods of development and verification of component-based applications using natural language specifications”.
* Corresponding author. E-mail addresses: [email protected] (P. Brada), [email protected] (K. Jezek).
1 Department of Computer Science and Engineering.
2 NTIS – New Technologies for the Information Society, European Centre of Excellence.

doi:10.1016/j.scico.2014.06.013. © 2014 Elsevier B.V. All rights reserved.


Technological trends in miniaturisation and the steadily increasing computational power of small hardware devices already make it possible to run component frameworks on tablets, mobile phones, pocket GPS units, etc. Application development and maintenance for these devices can therefore start to benefit from component-based programming; in the near future, this will become true even for resource-constrained devices like embedded computers.

As the applications evolve, updates need to be delivered to these devices. The speed and price of, e.g., mobile network connections limit the options for upgrading whole applications running on these devices, leading to a preference for partial application upgrades. This introduces the need for much stronger quality assurance in these component integration scenarios (especially for safety-related applications), but performing consistency verification of the updated application directly on the devices is a challenging task due to their limited computational resources.

The approach we apply in our work is to move the expensive and repetitive computation away from the resource-constrained devices, and to replace only those components for which knowingly compatible new versions have been released. In this work we describe the design and performance evaluation of our repository-based system, which pre-verifies the components once and then provides to individual devices just those matching their concrete configuration.

The text is structured as follows. The next section analyses related research work and industrial tools, and Section 3 presents a motivating experiment. Sections 4 and 5 then describe, first, the general requirements on a system supporting low-cost verification and, second, a repository design supporting such a system. Section 6 discusses measurements that validate these general concepts, followed by concluding remarks. Throughout the text, examples are provided using the OSGi framework [2].
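For orientation, OSGi components (bundles) declare their static dependencies in manifest meta-data. A minimal illustrative fragment follows; the bundle and package names are hypothetical, not taken from the applications discussed in this paper:

```
Bundle-SymbolicName: org.example.client
Bundle-Version: 1.0.0
Import-Package: org.example.api;version="[1.0,2.0)"
Export-Package: org.example.client.spi;version="1.0.0"
```

Here Import-Package expresses a static dependency with a semantic version range, while dynamic dependencies are obtained at run time through the OSGi service registry.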
The choice of this technology was driven by several reasons. First, it conforms reasonably well to the general desiderata of a black-box component model [3] – use of meta-data for component specification, a good set of component interface features, and the ability to introspect component implementation statically. Second, its way of binding components introduces important challenges and opportunities – use of both static (via packages) and dynamic (services) dependencies, the possibility to update components at application run time, and the ability to extend the set of component features (at meta-model level) and modify their container life-cycle processing. In many of these aspects it is comparable to advanced research models like Fractal [4] or Palladio [5], yet it also has clear industrial relevance. We have therefore been using OSGi for a considerable time as a representative technology for both research and tool implementation purposes.

2. Current approaches to component repositories, meta-data and compatibility verification

This section surveys the approaches related to the use of pre-verified components and the role of repositories in this process. We examine both research and industrial efforts in these areas. The composition of applications from pre-existing components or services involves the following aspects [6]: a repository or registry containing the components or service references; descriptions of their properties on various contract levels (functionality as signatures and behavioural specifications, extra-functional properties, semantics e.g. using an ontology) within the “intentional” and “operational” perspectives [7]; and finally selection (matching and ranking) methods to retrieve the correct components with respect to the composition.

2.1. Component repositories

Service-oriented application composition in general relies on registries of services or service components, and some research teams work on their integration into overall software development with components [8,9].
Iribarne et al. [10] propose an inspiring set of requirements on service traders which includes support for heterogeneous component models and matching on all contract levels, including unforeseen ones. The information they propose to store about services (used also in queries) includes both technical and marketing meta-data in an extensible XML format. Ali et al. [11] also propose meta-data provided together with SOA service registries. Their ServicePot is a proxy to the SOA UDDI registry containing choreography information. To deal with dynamic services, they propose a pluggable architecture, implemented in OSGi, to adapt choreographies to different contexts and requirements.

Similar thorough efforts seem to be lacking in the field of deployable components. Only some research component frameworks integrate a repository for application composition, like SOFA2 [12], or use one for specific purposes, e.g. the CRCE repository presented in this paper. In industry, component repositories like Maven,3 OSGi Bundle Repository (OBR) or Linux package distributions are used for storing components together with descriptive files (XML or plain text) that define their features and dependencies. They are often federated and integrated into the application development process (usually through IDE support) as well as into the application composition/deployment stage. Apache ACE4 allows components to be centrally managed together with their configuration and related artefacts for different target systems. The components are grouped into a set of targets where each target is a unit of deployment. Hence, Apache ACE provides a way to prepare the installation of an application once and deploy it multiple times to the respective targets.

3 http://maven.apache.org/.
4 http://ace.apache.org/.


2.2. Meta-data and component retrieval

Descriptive information about components is stored in meta-data of various formats and underlying models. Industrial repositories use basic meta-data, usually key–value pairs, to describe component features, dependencies, compatibility and business aspects (licensing etc.), mostly at the technology level. Research component frameworks use Architecture Description Languages (ADLs) [13] for a similar purpose. Semantic web service systems additionally capture the essential aspects by ontology-based semantics in various formats (see the survey [14]). Ye and Fischer [15], who argue for better reuse support in component repositories, index the repository contents in advance and combine latent semantic analysis with signature matching algorithms to search for matching components. These approaches tend to focus on information retrieval, and their meta-data are not well suited for advanced consistency checking [16].

Based on a deep analysis of Debian Linux package dependencies, Artho [17] provides a motivating case that such meta-data should be enriched to prevent conflicts during application composition. Industrial approaches are, however, mostly limited to the use of version identifiers created (in the better case) according to a defined semantic versioning scheme [18]. By themselves, even such identifiers do not carry sufficient information about the nature of differences between consecutive versions.

The use of pre-computed compatibility information can be found in several works in the field of component and service-oriented architectures. Research systems often use lattice structures to represent the compatibility relation, as in the works by Arevalo et al. [19] (based on syntactic compatibility between all pairs of component interfaces), Azmeh [20] or Driss [7] (extra-functional properties combined with functional signature matching).
However, these approaches do not provide sufficient information about the details of (in)compatibilities that would support human analysis of alternative application compositions. The corresponding checks therefore need to be repeatedly invoked whenever compatibility information is queried.

2.3. Consistency checking methods

Due to the multiple levels of component contract specifications [21], the consistency verification methods needed during component composition and replacement range from type matching [22] through formal model checking [23,24] to test suite execution [25]. Signature matching augmented with weight and penalty values is used in [26] to indicate the importance of non-functional properties attached to interfaces/components. In addition, inference rules [27] or text similarity measures [15,28] are used with semantic descriptions; see also [14] for a thorough survey. Industrial systems tend to use the version identifiers discussed above, added mostly manually to the component meta-data, for checking compatibility of both the vertical (component update) and horizontal (bindings) type. For instance, the OSGi resolving process [2] uses the semantics of bundle and package version identifiers to check whether a provided feature falls into the compatibility range declared by the required counterpart.

The work by Navas, Babau and Pulou [29] optimises components on embedded devices. To decrease resource needs, particularly memory consumption, they use two methods: (1) optimisation of the glue code that connects the components and (2) detaching selected components off-site, to a remote server. Data are transferred between the embedded device and the server by agents installed on both sides using a messaging mechanism. Their work uses the Fractal/THINK framework, in which application models are materialised into binary code.
They see an opportunity for optimisation in this process, but they do not directly target compatibility checking, as application consistency in their approach is ensured by correct models.

3. Motivation

To assess the feasibility of device-based verification, we measured the resource demands of type-based component compatibility checks. This method was chosen for being computationally less complex, while sufficiently strong, compared to more involved methods (e.g. semantic and behavioural verification), as well as applicable to a wide variety of component technologies (since it effectively verifies system consistency on the type signature level, similarly to the ordinary compilation of monolithic systems).

The verification was performed on the OSGi package resolution process [2], which entails finding, within the installed set of bundles, a single compatible exported package for each declared package import. We determined the compatibility of each candidate export–import package pair using (1) the default OSGi method of matching package meta-data (version numbers/ranges), and (2) a type-based comparison integrated into the OSGi resolver. The second variant represents the case when the consistency verification needs to be performed in full because the compatibility meta-data are either missing or not trustworthy. This situation is unfortunately quite common in reality, where component compatibility information is mostly created manually, posing a serious problem for systems which need to be kept consistent throughout unattended upgrades (embedded applications, remote systems).
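To make variant (1) concrete: the cheap meta-data check essentially reduces to testing whether an exported package's version falls into the importer's declared range. The following Python sketch is illustrative only (it is not the authors' tooling and simplifies OSGi's version syntax, e.g. it ignores qualifiers):

```python
def parse_version(v):
    """Parse a dotted version string into a comparable tuple, e.g. '1.2.6' -> (1, 2, 6)."""
    return tuple(int(p) for p in v.split("."))

def in_range(version, version_range):
    """Check an OSGi-style version range such as '[1.2,2.0)' or '(1.0,1.5]'.
    '[' / ']' denote inclusive bounds, '(' / ')' exclusive ones."""
    lo_incl = version_range[0] == "["
    hi_incl = version_range[-1] == "]"
    lo_s, hi_s = version_range[1:-1].split(",")
    v, lo, hi = parse_version(version), parse_version(lo_s), parse_version(hi_s)
    ok_lo = v >= lo if lo_incl else v > lo
    ok_hi = v <= hi if hi_incl else v < hi
    return ok_lo and ok_hi
```

For example, `in_range("1.2.6", "[1.2,2.0)")` holds while `in_range("2.0.0", "[1.2,2.0)")` does not. Variant (2), in contrast, must download and introspect the actual class files of both packages, which is what makes it so much more expensive in Table 1.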


Table 1. Time to resolve dependencies [ms].

                         Null    Car park    CoCoME
  Mobile (meta-data)     1978       4409     59 382
  Mobile (type-based)    3615     12 165    282 829
  PC (type-based)         256       1345     10 437

Measurement setup. In both cases we measured the time and memory consumed by reading and analysing the meta-data of the bundles and – in the second variant – also the bundle contents. Three applications built for the OSGi component framework were used in the tests:

• Null application – a trivial application consisting of only two components, a client and a server, each comprising a single class.

• Car park – a small synthetic application5 which simulates a parking lot system. It consists of six bundles which are wired through seven packages, forming ten binding pairs.

• Common Component Modelling Example (CoCoME) [30] – an internally developed implementation consisting of 15 bundles with 108 candidate export–import package pairs, representing a realistic medium-size application.

The tests were run on two platforms, a low-end Android mobile phone (528 MHz CPU and 128 MB RAM, representing a resource-constrained device) and a desktop computer. The Apache Felix OSGi implementation6 extended with our OBCC tool7 integrated into the bundle resolver was installed on both devices to run the verifications.

Results and discussion. The wall-clock time needed to verify package compatibility is shown in Table 1. Type-based verification was measured on the PC platform for reference. Memory consumption was measured on the mobile phone to assess type-checking feasibility. Verification of the Null application required 2 MB of memory; verification of the two largest packages of CoCoME required 7 MB (a package with 20 classes) and 6.9 MB (6 classes). We also measured the memory consumption of checking compatibility with the org.osgi.framework system package (33 classes), which is imported by many bundles; this required 12.3 MB of memory.

The results provide basic insight into the feasibility of component compatibility verification on resource-constrained devices. The duration of the CoCoME verification is relatively long given the nature of the verification method; similar times might be prohibitive in some domains and could mean noticeable energy demand (battery drain). More importantly, however, the memory consumption for complete application verification, in the order of hundreds of megabytes as indicated by the obtained numbers, poses a challenge for such devices; indeed, one OutOfMemoryError was thrown while performing the type checking of CoCoME. This experiment therefore provides a case for an approach which would offload compatibility verification to a dedicated service and make its results repeatedly available.

4. General concepts of safe updates using pre-verified components

In this section we discuss the various aspects of an approach designed to provide a high degree of fidelity and reliability of consistency checks while moving the task of their computation away from the target platform where the component-based application is deployed and executed. We start with the general concepts of the approach and then describe its realisation in the form of a component repository with support for an extensible set of verification methods.

4.1. General concepts

The key idea of the proposed solution is to move the computationally expensive verification from the target device to a component repository that runs on a machine with sufficient computational power. The repository stores information about the compatibility verification results and provides this information to its clients. Since the repository can anticipate neither the type and number of clients nor the possible compositions of the stored components into applications, the information needs to be general enough to cover any possible target platform and component composition. A repository client on a device – which manages its concrete application(s) – can then use this information to determine the compatibility of a prospective component update with the same degree of reliability as if it ran the corresponding

5 Available at https://github.com/pbrada/obcc-parking-example.
6 OSGi implementation, available at http://felix.apache.org/. This framework was embedded into a Java program and invoked on the Android Dalvik runtime.
7 http://www.assembla.com/spaces/show/obcc.


Fig. 1. Safe updates of pre-verified components.

verifications itself. Thus the compatibility tests do not have to be repeated on each client, and it is the repository, not the component creator or application tester, who drives the verification process.

This approach results in a modified workflow of working with the repository, as depicted in Fig. 1. When a new version of component A (denoted by an asterisk in the figure) is uploaded, the verification process determines its compatibility with all components to which it can potentially be connected (B in the figure). The results are stored alongside the component itself and used by the process of updates with verified components. In the example, the clients on the respective devices determine – by querying the repository – that the “telephone” application can be upgraded, since version 2.0 of A is known to be compatible with version 1.0 of B, unlike the case of the “clock” application.

4.2. Discussion of system features

This general approach involves the following aspects, partly inspired by the work of Iribarne et al. [10]:

1. types of resources managed;
2. contract types verified and methods used;
3. meta-data contents and format;
4. modes of interaction with the repository.

Each of these aspects is discussed below.

4.2.1. Types of resources managed

Ordinarily, component repositories manage just the artefacts which represent the components as such, i.e. their distribution packages, and provide mainly the store and retrieve operations. The proposed approach concerns complex component-based applications and involves advanced consistency verification methods, discussed below in the subsection on contract types, as well as more involved query and client interaction operations. Two kinds of additional resources are required to support these advanced functionalities: configuration data which set particular component properties and describe the application composition, and detailed artefacts which result from the verification process and which back the compatibility meta-data. Both these kinds of resources need to be managed by the repository besides the component distribution packages.

4.2.2. Contract types verified

Advanced component models provide several levels of interface contracts [21] which enable more precise specifications but require different methods of compatibility or compliance verification. While relatively straightforward type-checking methods are sufficient for the syntactic level, semantic and interaction specifications often require state-based or algebraic model checking. Checking of extra-functional properties may necessitate performance or security tests. Checks of type conformance and basic functional tests are achievable even on resource-constrained devices, but the other methods are known to pose computational and/or configuration challenges even for high-performance systems. Furthermore,


due to the variety of contract specification means and formal models, the repository needs to support an extensible set of engines performing the verification.

4.2.3. Meta-data contents and format

While other repositories use simple storage and meta-data structures (for instance, Maven uses a filesystem directory tree with descriptive XML meta-data), the proposed approach requires a more robust meta-data design. In particular, semantically rich and strong meta-data holding information on the stored components and the results of their compatibility verification are necessary. The desired contents of the meta-data comprise:

• concise statements about component compatibility for (a) each pair of revisions of the same component, and (b) each potential client–supplier binding determined by matching provided and required features;

• detailed data explaining the reasons for the indicated (in)compatibility, for example results of recursive type matching, or test data plus the resulting logs or reports from tests of extra-functional properties.

The statements have to be sufficiently high-level yet complete to allow the client to determine the impact of a prospective configuration change in O(n) time, where n is the number of bindings (irrespective of their contract level and therefore compatibility verification method). The details are meant to allow human users to drill down and analyse the causes of differences and incompatibilities.

4.2.4. Interaction modes

The general idea of “offloading” consistency verification to the repository permits several variants of the process, targeting different usage scenarios. They in turn affect the APIs and internal design of the repository. Firstly, the actual compatibility verification can be performed either by the client, using repository-provided meta-data, or by the repository itself, using client-provided information about the application configuration. Thus in the former case the client asks for the meta-data of (a set of) update component versions and decides whether they form a consistent application upgrade, while in the latter case it may simply ask the repository to provide it with e.g. “the latest compatible versions of all components” while sending the information about the versions currently installed. Secondly, the system should handle cases when the client obtains a component or its update version from a 3rd-party source (bypassing the repository). In this case, the repository should be able to compute a basic compatibility assessment on demand and store the results together with the component for reuse in later requests.

4.3. Expected benefits

The benefits of this approach grow in importance if many devices or many (complex) applications are updated.
For the first case, an example may be a corporation with hundreds of mobile phones running particular applications used by employees. An internal policy may require the applications to be updated in bulk to prevent incompatibilities between the phones. Repeatedly verifying compatibility on each device is clearly wasteful, since the same application is updated on a small set of device types. Instead, a compatible update is verified and prepared in the repository, and the phones are updated in a relatively simple transaction. For the other case, an example may be a software vendor selling a set of products from one product line. The products are configured from a large set of internally managed components. If a given component is updated, several applications are affected. Again, letting all clients download the update and verify its compatibility is not practical. Instead, all products from the product line may be re-verified at the repository level, and clients may download the verified, updated applications.

5. Design of the repository: CRCE project

The above discussion of the general concept results in a particular set of design requirements on the repository. In order to validate the concept, we have designed an implementation called Component Repository supporting Compatibility Evaluation8 (CRCE). This section describes its concrete features in the light of the general concepts. The CRCE implementation is modularised and built as a pluggable framework, which results in the loosely coupled, extensible architecture shown in Fig. 2. The main building blocks are the Component Store, Meta-data Store and Results Store, all accessible via well-defined APIs and a web user interface. This design leads to a streamlined runtime and communication channels, enabling CRCE extensibility. (Non-functional aspects of security, performance and scalability are not among the design goals of the current repository version.)
Component Store represents the core component storage; its implementation currently uses a file system directory tree augmented by XML descriptive files. This mechanism and basic structures have been inspired by OSGi Bundle Repository

8 Project website: http://www.assembla.com/spaces/crce/.


Fig. 2. CRCE architecture.

Fig. 3. CRCE components life-cycle.

(OBR), with modifications improving the generality of component models supported, data type safety, and the user comfort of the APIs. The Meta-data and Results modules are unique and are described in the subsections below. To its clients, CRCE presents a standard OBR interface plus several extensions introduced to fulfil the requirements discussed above.

5.1. Extensible compatibility verification

The challenge of dealing with the heterogeneity of component models, contract types and verification methods is addressed by the CRCE plugin architecture. Each model and form of contract specification is processed by a set of plugins which employ appropriate verification methods to evaluate the corresponding part of a component’s content and update the related parts of its meta-data; they can also generate related auxiliary artefacts. Indexation and artefact storing are integrated into the repository via the APIs of the Meta-data and Results modules.

To drive this processing, CRCE uses the component life-cycle shown in Fig. 3. When a component is uploaded to the repository, it first goes through a buffer stage during which its internal consistency is checked, its content is indexed and basic compatibility verification is performed. Corresponding meta-data can be created and saved into the repository during the process. These actions (implemented by chained plugins which execute their code on the component, or by external tools which download its archive) are assumed to take a relatively short time to complete. If the component passes through the buffer stage with no errors, it is committed to permanent storage from which it is available to CRCE clients for download together with its related meta-data. Further actions may also be executed on the component, like housekeeping operations and advanced verification (interaction protocol compliance using model checking, behavioural and extra-functional properties using test suites or simulation verification, etc.).
These actions may involve interacting with external tools (e.g. performance testers or model checkers) and converting their outputs to meta-data and artefacts stored in CRCE. Because of the potentially long computation times, asynchronous operation and communication with CRCE is allowed for these plugins.

Indexing and verification plugins are technically implemented as OSGi bundles that register listeners on the CRCE life-cycle events. These are provided by the Plugin API from Fig. 2. The plugins use the APIs of the other CRCE modules to access the resources and meta-data stored in the repository. This design makes it possible to introduce and manage plugins for any actions to be taken when a component is evaluated by the repository. For instance, our ongoing implementation work prepares a plugin that indexes extra-functional properties from our other project [31], applied to OSGi bundles, as CRCE meta-data.
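The listener-based dispatch just described can be sketched as follows. This is an illustrative Python model only; the real Plugin API is a set of OSGi/Java service interfaces, and the event names used here ("buffered", "committed") merely mirror the life-cycle stages of Fig. 3:

```python
class PluginRegistry:
    """Minimal sketch of a life-cycle event bus for repository plugins."""

    def __init__(self):
        self._listeners = {}  # event name -> list of plugin callbacks

    def on(self, event, callback):
        """Register a plugin callback for a life-cycle event."""
        self._listeners.setdefault(event, []).append(callback)

    def fire(self, event, component):
        """Invoke every plugin registered for the event; plugins may
        index the component or attach verification meta-data to it."""
        for callback in self._listeners.get(event, []):
            callback(component)


registry = PluginRegistry()
# A hypothetical indexing plugin attaching meta-data during the buffer stage:
registry.on("buffered", lambda comp: comp.setdefault("meta", []).append("indexed"))

component = {"name": "bundle-a"}
registry.fire("buffered", component)    # component gains meta-data
registry.fire("committed", component)   # no listener registered: a no-op
```

The point of the design is visible even in this toy form: new verification behaviour is added by registering another callback, without touching the repository core.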


Fig. 4. CRCE meta-data model: the core elements.

Fig. 5. Component’s provided features in CRCE meta-data.

5.2. Meta-data structures for component description

The results of component indexing and compatibility verification are stored as meta-data items related to the verified elements, which may be individual features on the component interface, sets of them, or the whole component. This data (generated by CRCE plugins) is the system’s key element which enables computationally cheap consistency verification by CRCE clients. It is managed by the Meta-data module of CRCE, which uses the Meta-data Store as its persistence layer and calls the Component Store module to link the meta-data with physical components. For use by clients, it is converted to an external format (XML in the current version) by the REST API module; it is also presented in a human-readable form by the WebUI.

The design of the meta-data structure in CRCE is based on the general OBR concepts of requirements and capabilities, which represent component required and provided features in terms of both content and format – see Fig. 4. Each concrete kind of feature is denoted by a namespace identifier, using a set of descriptive names. Any number of other attributes (which essentially are simple named key–value pairs) can be added. This model makes it possible to describe a wide spectrum of interface features at any contract level – both simple well-known features like exported packages and complex or specialised ones like sub-components (thanks to the recursive capability element) or extra-functional properties – as well as descriptive properties attached to components and their provided and required features. For instance, the snippet of bundle meta-data shown in Fig. 5 describes a provided package of an OSGi component and an extra-functional property (using the EFFCC system [31]) bound to the component’s provided service.

The second major part of the meta-data defines component dependencies. Fig. 6 shows how another component would declare a dependency on the org.parking.gate.stats package within a particular version set.
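The capability–requirement matching of Fig. 4 can be illustrated with a simplified sketch. The dictionary layout and the namespace string below are assumptions for illustration; CRCE's actual model is richer (e.g. OBR requirements use filter expressions rather than plain attribute equality, and version sets as in Fig. 6 need range matching):

```python
def matches(requirement, capability):
    """A capability satisfies a requirement when their namespaces agree
    and every attribute constrained by the requirement has the required
    value (simplified: equality instead of OBR filter expressions)."""
    if requirement["namespace"] != capability["namespace"]:
        return False
    cap_attrs = capability.get("attributes", {})
    return all(cap_attrs.get(key) == value
               for key, value in requirement.get("attributes", {}).items())


# Hypothetical meta-data for the dependency of Fig. 6:
capability = {"namespace": "package",
              "attributes": {"name": "org.parking.gate.stats"}}
requirement = {"namespace": "package",
               "attributes": {"name": "org.parking.gate.stats"}}
```

With this model, any new kind of feature (a sub-component, an extra-functional property) is matched by the same generic routine, only under a different namespace with different attributes.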


Fig. 6. Component dependency in CRCE meta-data.

Fig. 7. CRCE meta-data model: the compatibility information.

5.3. Compatibility information and its use

As described in the related work (Section 2), advanced use of formally strong compatibility data is rare in package and component repositories. CRCE addresses this issue by creating and storing semantic version identifiers plus additional difference information in the meta-data defined in Fig. 7. This meta-data model reflects the need to evaluate various contract levels and specifications for mutual conformance and to represent the results in a machine-readable format. The concrete realisation follows the scheme proposed in [32] by describing the type of the change – the DiffValue in Fig. 7 – for both individual features and their aggregates (e.g. a package, the whole bundle) as specialisation/generalisation (subtype/supertype), mutation (type incompatibility) and none (no change on the public API level). The differences are linked to the features by referencing their names.

For example, the data shown in Fig. 8 describes the comparison results of two versions of a component, where the aggregate results are explained by their constituent parts. The data says that the given component version is incompatible with the previous version 1.2.6 because – among other things – the CoreStatsMethods class in the provided package org.parking.gate is missing an operation. For a client which assesses the vertical compatibility of components, this is an indication that it would not be safe to upgrade from 1.2.6 to the above version of the component.

In other words, to assess the compatibility of an update version of an installed component, the update agent needs to obtain and compare just this aggregate data, resulting in O(1) complexity. Verifying all of a component's (potential) bindings requires checking the difference values for all n of its features, in O(n) time.
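The cheap client-side check described above can be sketched as follows (a minimal illustration; the function names, the textual DiffValue encoding, and the treatment of generalisation/mutation as unsafe reflect our reading of the scheme in [32], not the actual CRCE client API):

```python
# DiffValue as pre-computed by the repository-side comparison.
NONE, SPEC, GEN, MUT = "none", "spec", "gen", "mut"

def safe_to_upgrade(aggregate_diff: str) -> bool:
    """O(1): a replacement is safe if the new version is identical to,
    or a specialisation (subtype) of, the installed one."""
    return aggregate_diff in (NONE, SPEC)

def incompatible_features(feature_diffs: dict) -> list:
    """O(n): list the features whose diff value forbids the update."""
    return [name for name, d in feature_diffs.items() if d in (GEN, MUT)]

# Example mirroring Fig. 8: org.parking.gate lost an operation, so the
# new version generalises the provided API -> incompatible.
diffs = {"org.parking.gate": GEN, "org.parking.gate.stats": NONE}
assert not safe_to_upgrade(MUT)
assert incompatible_features(diffs) == ["org.parking.gate"]
```

The expensive part – computing the diff values themselves – never runs on the client, which is the point of the design.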
The actual compatibility determination can have considerably higher complexity – for example, O(n·m²) for type-based comparison (m being the number of types contained in a feature), due to the need to compare all type pairs, or exponential complexity for model checking of behavioural specifications.

The second compatibility block in the example shows the result of using the EFFCC system [31] to directly evaluate the capabilities at the extra-functional contract level. Similarly, a component-based simulation [33] could be used to create meta-data describing the results of verifying a component's conformance to extra-functional properties.

In addition to this hierarchical structure of compatibility information related to the component interface model, the repository uses an aggregate representation for quick lookup of the compatibility data. Namely, there is a substitutability data structure S_n = {(b, t, d)_i}, i ∈ N, for each resource identified by name n, which keeps the difference d ∈ DiffValue between each pair of its base and target versions (b, t ∈ VersId). In the update scenarios depicted by Fig. 1, the client provides the n and b identifiers and the repository uses the S data structure to simply look up the compatible t version(s)


Fig. 8. Resource structural differences in CRCE meta-data.

Fig. 9. Example of referencing result resources.

of the given component in linear time. This design enables the repository to skip intermediate higher versions incompatible with b and provide a compatible update version; this is useful in cases where the component provider reintroduces a previously dropped feature.

5.4. Managed resources

As Fig. 2 shows, there is another kind of resource managed by CRCE apart from component distribution packages and meta-data files – the so-called "results". They represent auxiliary information (data, resources) created by the various indexing and verification operations, like detailed reports of the tests performed, configuration properties and logs of simulation runs, or persistent representations of component interface comparison results. These details are useful for humans analysing the pre-computed compatibility data in order to understand the causes and act according to the findings; as such, they increase the credibility of the meta-data provided by CRCE. Because they back the declared properties by hard data that can be verified, CRCE can serve as a source of certified components [34].

The volume of result resources can be substantial, for example in the case of logs produced during a test suite run. In addition, a single result resource can be related to many components and their properties (capabilities or requirements). For these reasons, results are managed by a separate module rather than by the Component Store module. The snippet of meta-data in Fig. 9 shows how a backing result is linked, via a CRCE property, from a difference description.

5.5. Client access to CRCE

CRCE has a web interface for humans to interact with the repository and a REST API for third-party applications, which will typically be update managers running on client devices. Due to the general model of CRCE meta-data, both these interfaces are able to provide the results of component indexing and verification regardless of the method which was used to obtain them.


Typically, a client obtains the meta-data of a given component by calling the API method GET /metadata/, asks for the distribution package of a component via GET /bundle/, and can find the components which provide a given capability by calling POST /provider-of-capability/ with a meta-data snippet of the sought capability in the request data. In addition, the GET /replace-bundle?id= method returns the meta-data of all component versions which can safely replace the given component.

6. Evaluation of the approach

In order to evaluate the premises and expected benefits of the general approach which CRCE implements, we performed a set of validation tests and measurements comparing the performance of compatibility-ensuring component updates in three verification variants. The first two variants placed compatibility verification on the target devices, while the third used a remote CRCE to obtain verified components from the repository.

6.1. Experiment design

The measurement used a straightforward experiment scenario:

(1) A set of OSGi bundles (a test application) was uploaded to a device and installed into the Apache Felix framework.
(2) For a given bundle, an agent on the device downloaded each consecutive update version from a repository, verified its compatibility with the installed version, and performed the update if the verification was positive.
(3) This process was repeated for each bundle of the initial set, starting from the smallest bundles and continuing up to the bigger ones. This sequence allowed us to observe the approximate threshold at which the devices reach their limits.

This design simplifies the real-world situation in that only the compatibility of each single update bundle against the current version was verified. Normally, a set of components would be updated together and more complex application reconfigurations would be possible.
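The per-bundle update loop of step (2), in the CRCE-based variant, can be sketched as follows (illustrative Python with the repository stubbed in-process; the real REST API and meta-data format differ in detail, and the substitutability lookup follows the S_n structure of Section 5.3):

```python
# Per-resource substitutability data S_n: triples (base, target, DiffValue).
S = {
    "org.parking.gate": [
        ("1.2.6", "1.2.7", "mut"),   # 1.2.7 broke the API
        ("1.2.6", "1.3.0", "spec"),  # 1.3.0 reintroduced the dropped feature
    ],
}

# Stand-in for the repository's component store.
BUNDLES = {("org.parking.gate", "1.3.0"): b"<bundle bytes>"}

def replace_bundle(name, base):
    """GET /replace-bundle?id=... : linear scan of S for compatible targets."""
    return [t for (b, t, d) in S.get(name, [])
            if b == base and d in ("none", "spec")]

def download(name, version):
    """GET /bundle/... : fetch the distribution package."""
    return BUNDLES[(name, version)]

def update_agent(name, installed):
    """Install every version CRCE reports as safe; no on-device verification."""
    performed = []
    for target in replace_bundle(name, installed):
        download(name, target)   # download time is the only client-side cost
        installed = target       # actual framework install step elided
        performed.append(target)
    return performed

# The incompatible intermediate 1.2.7 is skipped; the agent goes to 1.3.0:
assert update_agent("org.parking.gate", "1.2.6") == ["1.3.0"]
```

Note how the device never inspects the bundles' contents; the expensive comparison already happened on the server when the versions were stored.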
The update verification variants differed in the method used, as follows:

Basic meta-data. Update bundles were obtained from the publicly accessible Maven central repository.9 While Maven uses the triplet {group-id, artifact-id, version} to identify and reference artefacts, together with a versioning scheme which encodes compatibility information, we ignored the version identifiers as being unreliable. This way we forced the compatibility verification to be performed on the device, using the basic meta-data embedded with the components. This experiment used the standard OSGi resolver to verify compatibility. The method is fast but cannot determine whether the meta-data are correct with respect to the bundle's binary compatibility.

Type-based compatibility. This experiment was set up like the previous one, but compatibility verification was performed by the resolver enriched10 with a type-based checking mechanism. It integrated the OBCC tool (OSGi Bundle Compatibility Checker, see also Section 3), which implements checks of Java binary compatibility on reverse-engineered OSGi bundles, verifying the subtype relation [32] of interface and class method signatures. This method ignores the embedded bundle meta-data in order to provide reliable compatibility decisions, but it is also more resource-consuming.

CRCE usage. This experiment also used OBCC, which was, however, integrated into the CRCE server in the form of a bundle versioning plugin. This plugin performs a pair-wise comparison of all versions of a given bundle stored in the repository and stores the results in the meta-data as described in Section 5.2. The update agent in the client device used the REST API methods GET /replace-bundle?id= and GET /bundle/ to obtain only the components which had already been verified by CRCE as compatible with the given one. Thus only the time to download the meta-data and the bundle binaries contributes to the total update time; no time to verify compatibility is needed.

6.2. Test data and devices

As a data set for the benchmark, the OSGi bundles composing the Glassfish Application Server11 were used. Although not a typical application for mobile devices, the selection of Glassfish provided us with a comprehensive test data set of 228 bundles with sizes ranging from 3.0 KB to 11.3 MB, and for each bundle a series of several revisions, which in total created a set of 2117 version pairs for compatibility verification. This set realistically models the size and configuration of current applications. Since the bundles were obtained from the Maven central repository, the download times were relatively constant throughout the test.
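The core of the type-based checking used in the last two variants – deciding whether one method signature may substitute another – can be illustrated by the following sketch (a simplification: OBCC works on reverse-engineered byte-code and the full Java binary-compatibility rules, whereas here live Python classes and plain co-/contra-variance stand in for them):

```python
# Illustration of a signature subtype check in the spirit of [32].
# Signatures are (parameter_types, return_type) pairs; the type
# hierarchy below is hypothetical.

class Stats: ...
class CoreStats(Stats): ...

def signature_subtype(new_sig, old_sig):
    """new_sig may replace old_sig if its parameters are contravariant
    (each old parameter type is a subtype of the new one) and its
    return type is covariant (a subtype of the old return type)."""
    new_params, new_ret = new_sig
    old_params, old_ret = old_sig
    return (len(new_params) == len(old_params)
            and all(issubclass(o, n) for n, o in zip(new_params, old_params))
            and issubclass(new_ret, old_ret))

# Widening a parameter and narrowing the return type is safe:
assert signature_subtype(([Stats], CoreStats), ([CoreStats], Stats))
# Narrowing a parameter is not:
assert not signature_subtype(([CoreStats], Stats), ([Stats], Stats))
```

Running such checks over all pairs of types in a feature is what produces the O(n·m²) cost quoted in Section 5.3.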

9 Available at http://repo2.maven.org/maven2/.
10 Via OSGi's standard ResolverHook, see http://www.osgi.org/javadoc/r4v43/core/org/osgi/framework/hooks/resolver/ResolverHook.html.
11 A widely used open-source JEE container, available at http://glassfish.java.net/.


Table 2
List of selected test bundles (out of 228 base bundles in total; the Total row covers all 228 bundles, rows marked ——— stand for omitted bundles).

Bundle                   Size (KB)    Import elements  Export elements  Updates Maven  Updates CRCE
api-exporter.jar              3.09                  0                0              5             2
hk2.jar                       3.15                  0                0            139            84
admin-core.jar                4.519                 0               15              5             3
jersey-moxy.jar              13.105                94               57             16            10
jaxr-api-osgi.jar            29.527             4 454            1 334              1             1
backup.jar                   34.971               494              137              5             2
jta.jar                      65.215             1 752              317              5             2
gf-ejb-connector.jar         80.312             2 331               13              5             2
glassfish-naming.jar         94.016             2 154              535              5             2
———
asm-all-repackaged.jar      107.327             7 797            2 565            141             7
cluster-cli.jar             123.150             1 768              755              5             2
kernel.jar                  627.755            22 981            2 394              5             2
amx-all.jar                 645.746            64 830           20 655              2             2
console-common.jar          990.836             3 485              163              5             2
———
dol.jar                    1126.070           270 405           28 032              5             2
jaxb-osgi.jar              5891.018            29 564           52 199              3             0
webservices-osgi.jar      11886.355           210 252          125 257             43             0
Total                                                                          2 345         1 067

A subset of the bundles used to report the evaluation results is listed in Table 2. The Export and Import columns report the number of public methods, fields and constructors defined and invoked, respectively, by the given bundle. The last two columns show, for the given bundle, the total number of versions and the number of compatible updates available (through the CRCE "get update bundle" operation), respectively. Notice that CRCE provided fewer versions in some cases – the missing ones were marked as incompatible at the CRCE level and thus hidden from clients. The key for selecting bundles into the subset was to prefer Glassfish bundles (not libraries) but also to show bundles (even libraries) that formed a threshold for one of the devices. Finally, the selected bundles were grouped for comprehension into small (size < 100 KB), medium (size < 1000 KB) and large ones.

Four devices were used for the performance measurement:

Mobile 1: a modern Galaxy Nexus mobile phone (ARMv7 1.2 GHz dual-core CPU, 1 GB RAM);
Mobile 2: a low-end Huawei-Vodafone 845 mobile phone (Qualcomm 528 MHz CPU, 128 MB RAM);
Workstation: Intel(R) Core(TM) i5-2300 CPU, 2.80 GHz, 8 GB RAM;
Laptop: Intel(R) Core(TM) i5, 2.50 GHz, 4 GB RAM.

While the two mobile phones represent resource-constrained devices at the opposite ends of the performance spectrum, the workstation represents the remote engine to which the verification is detached. The laptop was used to obtain updates from CRCE, while CRCE itself was installed on the workstation. The Internet connection for the mobile phones and the laptop was made via a low-end Wi-Fi router connected to a 30 Mb/s broadband provider; the workstation was placed on a 1 Gb/s Ethernet with a 10 Gb/s backbone connection.

6.3. Obtained measurements

The measurement outputs are shown in Tables 3, 4 and 5, respectively, for the three experiments and the involved devices. Missing values indicate that the given verification did not finish successfully. In all cases, the reason was insufficient memory: one of the OutOfMemoryError or StackOverflowError Java errors was thrown.

Results based on basic meta-data (Table 3) show that the desktop workstation was able to evaluate all 2117 version pairs in reasonable time (about 10 min), which is sufficient for a server-like machine. Mobile 1 analysed data up to a size of about 1 MB, which covers more than 90% of the cases. Although it needed only a short time to perform each verification (from about 0.2 s to about 3 s per update), the large number of updates eventually led to a total time of almost 23 min. This is too high to be practical on a mobile device. The performance of Mobile 2 proved insufficient, as the device was able to process only the first two bundles. Let us note that these bundles are "proxy" bundles: they have no byte-code and only export packages imported from other bundles.

In the results obtained with OBCC (Table 4), the time needed by the workstation almost quadrupled (36 min), which is nevertheless still reasonable for a single run of the server-side verification. However, the last bundle was skipped because its evaluation took too long (tens of minutes). Mobile 1 reached its threshold at bundles of about 100 KB, which is slightly above 50% of the cases. Mobile 2 was, due to limited memory, unable to perform this experiment.

Results for CRCE-based updates (Table 5) show the time needed to download verified components, consisting of obtaining the meta-data of the next compatible bundle plus downloading the bundle package itself. The time required to obtain all updates was almost 4 min for the Laptop and more than 6 min for Mobile 1, considerably lower than in the previous cases.


Table 3
Time to verify compatibility by plain OSGi resolver. Each device column gives total download time / total verification time / avg. both times, in seconds; totals cover all bundles processed by the given device, not only the rows shown.

Bundle                  Updates  Workstation (dl/vf/avg)      Mobile 1 (dl/vf/avg)         Mobile 2 (dl/vf/avg)
api-exporter.jar              5  0.219 / 0.049 / 0.054        0.377 / 0.529 / 0.181        3.667 / 69.929 / 12.266
hk2.jar                     139  6.401 / 1.114 / 0.054        9.393 / 18.946 / 0.204       45.655 / 649.72 / 12.877
admin-core.jar                5  0.220 / 0.032 / 0.050        0.294 / 0.595 / 0.178        –
jersey-moxy.jar              16  0.829 / 0.156 / 0.062        1.274 / 2.379 / 0.228        –
jaxr-api-osgi.jar             1  0.060 / 0.007 / 0.067        0.095 / 0.121 / 0.216        –
backup.jar                    5  0.348 / 0.039 / 0.077        0.732 / 0.843 / 0.315        –
jta.jar                       5  0.821 / 0.038 / 0.172        1.404 / 0.850 / 0.451        –
gf-ejb-connector.jar          5  0.511 / 0.036 / 0.109        1.173 / 0.763 / 0.387        –
glassfish-naming.jar          5  0.615 / 0.042 / 0.131        1.623 / 0.829 / 0.490        –
———
asm-all-repackaged.jar      141  36.149 / 0.995 / 0.263       88.268 / 33.275 / 0.862      –
cluster-cli.jar               5  0.970 / 0.042 / 0.202        1.702 / 1.519 / 0.644        –
kernel.jar                    5  1.550 / 0.068 / 0.324        8.696 / 3.849 / 2.509        –
amx-all.jar                   2  1.071 / 0.026 / 0.549        –                            –
console-common.jar            5  2.084 / 0.067 / 0.430        8.154 / 5.189 / 2.669        –
———
dol.jar                       5  3.085 / 0.102 / 0.637        –                            –
jaxb-osgi.jar                 3  0.578 / 0.061 / 0.213        –                            –
webservices-osgi.jar         43  181.558 / 2.783 / 4.287      –                            –
Total (s)                 2 345  574.207 / 27.772 / 35.375    822.195 / 526.858 / 105.124  49.322 / 719.649 / 25.143
Total (min)                      9.57 / 0.46 / 0.59           13.70 / 8.78 / 1.75          0.82 / 11.99 / 0.42
Total update (min)               10.03                        22.48                        12.82


Table 4
Time to verify compatibility by OSGi resolver enriched with OBCC. Each device column gives total download time / total verification time / avg. both times, in seconds; totals cover all bundles processed by the given device, not only the rows shown.

Bundle                  Updates  Workstation (dl/vf/avg)        Mobile 1 (dl/vf/avg)         Mobile 2
api-exporter.jar              5  0.232 / 0.051 / 0.057          0.727 / 0.587 / 0.263        –
hk2.jar                     139  6.498 / 2.995 / 0.068          32.865 / 19.103 / 0.374      –
admin-core.jar                5  0.222 / 0.085 / 0.061          0.849 / 1.357 / 0.441        –
jersey-moxy.jar              16  0.801 / 0.996 / 0.112          5.423 / 4.195 / 0.601        –
jaxr-api-osgi.jar             1  0.123 / 0.123 / 0.191          0.191 / 0.346 / 0.346        –
backup.jar                    5  0.376 / 0.317 / 0.139          1.827 / 2.077 / 0.781        –
jta.jar                       5  0.459 / 0.755 / 0.243          3.454 / 3.572 / 1.405        –
gf-ejb-connector.jar          5  0.606 / 0.909 / 0.303          2.861 / 4.707 / 1.514        –
glassfish-naming.jar          5  0.602 / 0.379 / 0.196          2.411 / 8.338 / 2.150        –
———
asm-all-repackaged.jar      141  34.883 / 2.345 / 0.264         172.320 / 151.242 / 2.295    –
cluster-cli.jar               5  0.710 / 1.063 / 0.355          –                            –
kernel.jar                    5  2.724 / 17.764 / 4.098         –                            –
amx-all.jar                   2  0.703 / 22.719 / 11.711        –                            –
console-common.jar            5  2.252 / 31.403 / 6.731         –                            –
———
dol.jar                       5  2.903 / 17.795 / 4.140         –                            –
jaxb-osgi.jar                 3  0.425 / 0.092 / 0.172          –                            –
webservices-osgi.jar          –  –                              –                            –
Total (s)                 2 306  392.016 / 1771.668 / 209.036   662.328 / 815.784 / 121.016  –
Total (min)                      6.53 / 29.53 / 3.48            11.04 / 13.60 / 2.02         –
Total update (min)               36.06                          24.64                        –


Table 5
Time to obtain verified components from CRCE. Each device column gives total download time / avg. download time, in seconds; totals cover all bundles processed by the given device, not only the rows shown.

Bundle                  Updates  Laptop (dl/avg)      Mobile 1 (dl/avg)    Mobile 2 (dl/avg)
api-exporter.jar              2  0.165 / 0.083        0.239 / 0.120        0.653 / 0.327
hk2.jar                      84  5.064 / 0.060        8.967 / 0.107        27.338 / 0.325
admin-core.jar                3  0.204 / 0.068        0.361 / 0.120        0.924 / 0.308
jersey-moxy.jar              10  0.801 / 0.080        1.212 / 0.121        3.810 / 0.381
jaxr-api-osgi.jar             1  0.123 / 0.123        0.191 / 0.191        0.346 / 0.346
backup.jar                    2  0.222 / 0.111        0.541 / 0.271        0.826 / 0.413
jta.jar                       2  0.331 / 0.166        0.511 / 0.256        0.804 / 0.402
gf-ejb-connector.jar          2  0.330 / 0.165        0.480 / 0.240        –
glassfish-naming.jar          2  0.399 / 0.200        0.643 / 0.322        –
———
asm-all-repackaged.jar        7  1.502 / 0.215        2.422 / 0.346        –
cluster-cli.jar               2  0.477 / 0.239        0.745 / 0.373        –
kernel.jar                    2  1.668 / 0.834        2.087 / 1.044        –
amx-all.jar                   2  1.615 / 0.808        2.060 / 1.030        –
console-common.jar            2  1.946 / 0.973        3.812 / 1.906        –
———
dol.jar                       2  2.167 / 1.084        3.503 / 1.752        –
jaxb-osgi.jar                 0  0.000 / 0.000        0.000 / 0.000        –
webservices-osgi.jar          0  0.000 / 0.000        0.000 / 0.000        –
Total (s)                 1 067  229.822 / 47.009     373.466 / 74.907     272.806 / 35.175
Total (min)                      3.83 / 0.78          6.22 / 1.25          4.55 / 0.59

This method allowed Mobile 2 to update the highest number of bundles (in comparison to the previous experiments), but this device was still too memory-limited.

6.4. Discussion

Although the type-based comparison used in this experiment has relatively low computational and memory complexity, it was still only partially feasible for a high-end mobile device. The speed-up gained in the CRCE-based experiment results from two factors: (1) client-side verification of compatibility could be avoided completely, and (2) the number of available updates was radically decreased (to 46% of all updates provided by the Maven repository), since the "replace bundle" method skips intermediate non-compatible components. There are, however, other benefits of repository-based application verification: consistency of the verification results across devices, since the methods are executed in a central location; the possibility to easily add and configure new verification methods; and also reduced energy consumption of the target devices, while keeping a strong assurance of consistent applications.

The numbers in this experiment do not include the time needed to perform the verification and create the meta-data on the CRCE server side. This time can be approximated by the total workstation time in the second (OBCC) experiment, since its major component is the type-based comparison, which is run once for each bundle version pair. The results are stored in persistent meta-data indexed by bundle identifiers and are served by CRCE on client request using a single database query, i.e. the time needed to obtain the verification results is negligible once they have been created. A similar evaluation for other kinds of verification will be possible once the corresponding repository plugins become available; the CRCE compatibility meta-data is able to accommodate a wide spectrum of verification results. The integration of performance testing using a simulation-based method and of an extra-functional property compatibility comparison method is currently being considered by our colleagues.

Lastly, we should note that the first version of this experiment parsed the XML meta-data using a DOM parser, whose memory requirements were too high for Mobile 2. Reimplementing the parsing with SAX improved the situation, but only about the first 50% of the bundles were successfully downloaded; after that, the XML documents were too big to be parsed by Mobile 2. We see here room for improvement of CRCE, which could provide even less memory-demanding formats such as JSON, CSV or plain strings.

To wrap up the results, Fig. 10 provides a chart of the times required in the key cases. As the data show, compatibility verification of software components using advanced methods is a challenging task for resource-constrained devices, and the repository-based approach provides clear benefits.

7. Conclusion

In this paper we discussed the tension between the need to ensure architectural consistency of component-based applications deployed on resource-constrained devices on the one hand, and the limited computational power available to perform the appropriate verifications on the other hand.


Fig. 10. Average times to verify updates.

The analysis of the desired solution's properties resulted in the design of our proposed approach, which delegates the verification to a server-based repository. This repository stores both the components and the results of their compatibility verification in semantically rich meta-data. These pre-computed results can be used by client devices during repeated consistency verification, significantly reducing its computational demand.

The viability of this approach has been validated by an implementation of the repository called CRCE which fulfils most of the desired requirements. One of its characteristics, an open design extensible by plugin components, makes it possible to pre-compute and store verification results for different component models on one side and an open set of contract specifications on the other. The benefits of CRCE-based compatibility verification were validated by a set of measurements using a real-life application.

CRCE currently supports component storage and indexing for the OSGi framework and compatibility verification on two contract levels: signature (types and services), by integrating the OBCC tools, and extra-functional, via the EFFCC system. Under active research are the means to integrate it with simulation-based component testing, to provide a mechanism for verifying component compatibility and compliance with both functional and extra-functional specifications.

Acknowledgements

We would like to thank Jiri Kucera, Zuzana Buresova, Jan Reznicek and Jakub Danek for their work on tool implementation and experiments.

References

[1] C. Szyperski, Component technology – what, where, and how?, in: Proceedings of the 25th International Conference on Software Engineering (ICSE 2003), Portland, Oregon, 2003.
[2] OSGi Service Platform core specification, The OSGi Alliance, 2011, Release 4, Version 4.3.
[3] P. Brada, A look at current component models from the black-box perspective, in: Proceedings of the 35th Euromicro Conference on Software Engineering and Advanced Applications, IEEE Computer Society Press, Patras, Greece, 2009, pp. 388–395.
[4] G. Blair, T. Coupaye, J.-B. Stefani, Component-based architecture: the fractal initiative, Ann. Télécommun. 64 (2009) 1–4.
[5] S. Becker, H. Koziolek, R. Reussner, The Palladio component model for model-driven performance prediction, J. Syst. Softw. (Special Issue: Software Performance – Modeling and Analysis) 82 (2009) 3–22.
[6] S. Pietschmann, C. Radeck, K. Meissner, Semantics-based discovery, selection and mediation for presentation-oriented mashups, in: Proceedings of the 5th International Workshop on Web APIs and Service Mashups, Mashups '11, ACM, New York, NY, USA, 2011, pp. 7:1–7:8.
[7] M. Driss, Y. Jamoussi, J.-M. Jézéquel, H. Hajjami Ben Ghézala, A multi-perspective approach for web service composition, in: Proceedings of the 13th International Conference on Information Integration and Web-Based Applications and Services (iiWAS 2011), Ho Chi Minh City, Viet Nam, 2011.
[8] S. Pietschmann, A model-driven development process and runtime platform for adaptive composite web applications, Int. J. Adv. Internet Technol. 2 (2010) 277–288.
[9] N. Admodisastro, G. Kotonya, An architecture analysis approach for supporting black-box software development, in: I. Crnkovic, V. Gruhn, M. Book (Eds.), Software Architecture, Lecture Notes in Computer Science, vol. 6903, Springer, Berlin, Heidelberg, 2011, pp. 180–189.
[10] L. Iribarne, J.M. Troya, A. Vallecillo, A trading service for COTS components, Comput. J. 47 (2004) 342–357.
[11] M. Ali, G. de Angelis, A. Polini, Servicepot – an extensible registry for choreography governance, in: Proceedings of the 2013 IEEE Seventh International Symposium on Service-Oriented System Engineering, SOSE '13, IEEE Computer Society, Washington, DC, USA, 2013, pp. 113–124.
[12] T. Bures, P. Hnetynka, F. Plasil, SOFA 2.0: balancing advanced features in a hierarchical component model, in: Proceedings of SERA 2006, IEEE CS, Seattle, USA, 2006, pp. 40–48.


[13] R.N. Taylor, N. Medvidović, E.M. Dashofy, Software Architecture – Foundations, Theory, and Practice, Wiley, 2010.
[14] M. Klusch, Semantic web service coordination, in: M. Schumacher, H. Schuldt, H. Helin (Eds.), CASCOM: Intelligent Service Coordination in the Semantic Web, Whitestein Series in Software Agent Technologies and Autonomic Computing, Birkhäuser, Basel, 2008, pp. 59–104.
[15] Y. Ye, G. Fischer, Promoting reuse with active reuse repository systems, in: W. Frakes (Ed.), Software Reuse: Advances in Software Reusability, Lecture Notes in Computer Science, vol. 1844, Springer, Berlin, Heidelberg, 2000, pp. 9–46.
[16] V. Tietz, A. Rümpel, C. Liebing, K. Meißner, Towards requirements engineering for mashups: state of the art and research challenges, in: ICIW 2012, The Seventh International Conference on Internet and Web Applications and Services, IARIA, 2012, pp. 123–130.
[17] C. Artho, K. Suzaki, R. Di Cosmo, R. Treinen, S. Zacchiroli, Why do software packages conflict?, in: 9th IEEE Working Conference on Mining Software Repositories (MSR), 2012, pp. 141–150.
[18] Semantic versioning, Technical whitepaper, The OSGi Alliance, 2010, revision 1.0.
[19] G. Arevalo, N. Desnos, M. Huchard, C. Urtado, S. Vauttier, Precalculating component interface compatibility using FCA, in: J. Diatta, P. Eklund, M. Liquière (Eds.), CLA'07: Fifth International Conference on Concept Lattices and Their Applications, Montpellier, 2007, pp. 237–248.
[20] Z. Azmeh, M. Huchard, C. Tibermacine, C. Urtado, S. Vauttier, Using concept lattices to support web service compositions with backup services, in: Fifth International Conference on Internet and Web Applications and Services (ICIW), 2010, pp. 363–368.
[21] A. Beugnard, J.-M. Jézéquel, N. Plouzeau, D. Watkins, Making components contract aware, Computer 32 (1999) 38–45.
[22] A.M. Zaremski, J. Wing, Specification matching of software components, ACM Trans. Softw. Eng. Methodol. 6 (1997).
[23] E. Clarke, Model checking, in: S. Ramesh, G. Sivakumar (Eds.), Foundations of Software Technology and Theoretical Computer Science, Lecture Notes in Computer Science, vol. 1346, Springer, Berlin, Heidelberg, 1997, pp. 54–56.
[24] F. Plášil, S. Višnovský, Behavior protocols for software components, IEEE Trans. Softw. Eng. 28 (2002).
[25] A. Flores, M. Polo, Testing-based process for evaluating component replaceability, in: Proceedings of the 3rd International Workshop on Views On Designing Complex Architectures (VODCA 2008), Electronic Notes in Theoretical Computer Science, vol. 236, 2008, pp. 101–115.
[26] B. George, R. Fleurquin, S. Sadou, A substitution model for software components, in: M. Lanza, et al. (Eds.), Proceedings of the 10th ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2006), Università della Svizzera Italiana, Lugano, ISBN 88-6101-000-8, 2006, pp. 21–30.
[27] M. Paolucci, T. Kawamura, T. Payne, K. Sycara, Semantic matching of web services capabilities, in: I. Horrocks, J. Hendler (Eds.), The Semantic Web – ISWC 2002, Lecture Notes in Computer Science, vol. 2342, Springer, Berlin, Heidelberg, 2002, pp. 333–347.
[28] Y. Wang, E. Stroulia, Semantic structure matching for assessing web-service similarity, in: M. Orlowska, S. Weerawarana, M. Papazoglou, J. Yang (Eds.), Service-Oriented Computing – ICSOC 2003, Lecture Notes in Computer Science, vol. 2910, Springer, Berlin, Heidelberg, 2003, pp. 194–207.
[29] J.F. Navas, J.-P. Babau, J. Pulou, A component-based run-time evolution infrastructure for resource-constrained embedded systems, in: ACM SIGPLAN Notices, vol. 46, ACM, pp. 73–82.
[30] A. Rausch, R.H. Reussner, R. Mirandola, F. Plasil, The Common Component Modeling Example: Comparing Software Component Models, vol. 5153, Springer, 2008.
[31] K. Ježek, P. Brada, Formalisation of a generic extra-functional properties framework, in: L. Maciaszek, K. Zhang (Eds.), Evaluation of Novel Approaches to Software Engineering, Communications in Computer and Information Science, vol. 275, Springer, Berlin, Heidelberg, 2013, pp. 203–221.
[32] P. Brada, L. Valenta, Practical verification of component substitutability using subtype relation, in: Proceedings of the 32nd Euromicro Conference on Software Engineering and Advanced Applications, IEEE Computer Society Press, 2006, pp. 38–45.
[33] T. Potuzak, R. Lipka, J. Snajberk, P. Brada, P. Herout, Design of a component-based simulation framework for component testing using SpringDM, in: Proceedings of the 2nd Eastern European Regional Conference on the Engineering of Computer Based Systems, IEEE Computer Society, Bratislava, Slovakia, 2011.
[34] A. Alvaro, E. de Almeida, S. de Lemos Meira, Software component certification: a survey, in: Proceedings of the 31st EUROMICRO Conference on Software Engineering and Advanced Applications, IEEE Computer Society, 2005, pp. 106–113.