The Journal of Systems and Software 79 (2006) 871–888 www.elsevier.com/locate/jss
Feature analysis for architectural evaluation methods

A. Grimán a,*, M. Pérez a, L. Mendoza a, F. Losavio b

a LISI, Departamento de Procesos y Sistemas, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080A, Venezuela
b Centro ISYS, Laboratorio La TecS, Universidad Central de Venezuela, Caracas, Venezuela

Received 16 February 2005; received in revised form 8 December 2005; accepted 17 December 2005. Available online 7 February 2006.
Abstract

Given the complexity of today's software systems, Software Architecture has recently grown in significance. It is considered a key element in the design and development of systems, and it has the ability to promote or penalize certain quality characteristics. Quality is related mainly to the non-functional requirements expected of the system. Using appropriate methods for architectural evaluation can help to reduce the risk of a low-quality architecture. The objective of this research is to conduct a Features Analysis of three architectural evaluation methods applied to a single case. This resulted in two significant contributions. The first is the identification of the aspects that must be present in an architectural evaluation method, translated into a set of 49 metrics grouped into features and sub-features that enable a particular method to be evaluated; different methods can be compared on the basis of these features, although the choice of method will depend largely on the requirements of the evaluating team. The second contribution pinpoints the strengths of the methods evaluated. The methods studied were: Design and Use of Software Architectures, the Architecture Tradeoff Analysis Method, and the Architectural Evaluation Method. The evaluation method employed for this research was the Features Analysis Case Study.

Keywords: Software architecture; Architectural evaluation method; Quality; Non-functional requirement; Features analysis
1. Introduction

Software systems are becoming more complex day by day, and must therefore comply with certain quality characteristics. It is far from easy to assure their quality in terms of these characteristics. Software architecture is characterized by its ability to encourage or discourage some of these quality characteristics (Clements et al., 2002; Bosch, 2000). For Bass et al. (2003), the architecture of a computer program or system is its structure or structures, which include components, their externally visible properties and the relationships between them. Focusing on the architecture affords the architect certain benefits; for instance, detecting errors early in the design stage, inferring certain properties of
the final software system, establishing a common language among stakeholders, enabling reuse, etc. Appropriate methods can be used to evaluate a particular architecture to ascertain or ensure its adequacy. So far there is no consensus on how software architecture should be evaluated, or on the aspects that must be taken into account during the evaluation to ensure its effectiveness. There are a number of different architectural evaluation methods, each with its own features; some place emphasis on different quality attributes or begin at different stages of the design, while others use different resources or approaches. Based on these aspects, this research focuses on a Features Analysis of architectural evaluation methods applied to a single case study. This Features Analysis makes two contributions: on the one hand it pinpoints some of the aspects that must be present in an architectural evaluation method and, on the other, it identifies the strengths of the methods evaluated. These strengths are determined by their behavior in response to the features proposed.
The methods studied were the Architecture Tradeoff Analysis Method (ATAM) (Clements et al., 2002), the Architectural Evaluation Method (AEM) (Losavio et al., 2004) and Design and Use of Software Architectures (DUSA) (Bosch, 2000), applied to a single case study (Proyectos-DID KMS) during the elaboration phase of the architecture of a Knowledge Management System (KMS) for research projects at Universidad Simón Bolívar. This article presents the methodology applied, followed by the description of each method studied, before going on to look at the case study to which the three methods were applied, their application and, lastly, the Features Analysis and the conclusions reached.

2. Application of the DESMET methodology to assess architectural evaluation methods

DESMET is a methodology for evaluating methods or tools. It identifies nine evaluation methods and a set of criteria to help the evaluator choose the most appropriate one for his needs (Kitchenham, 1996), based on the different evaluation contexts the evaluating team may have to deal with. The context identified in this research was "Choice of methods and tools for an organization", on the basis of which initial options could be put forward, followed by a more detailed evaluation of one or more methods or tools; this turned out to be the most appropriate evaluation context for this research, being in line with the object to be studied, as can be seen below.

According to DESMET, the object to be studied can be classified as one of the following: Tool, Method, Combination of Method/Tool, or Project Support Environment. From these, a combination of methods and tools was chosen in accordance with the characteristics of the study. DESMET also proposes breaking the Method category down into two sub-categories: a generic paradigm or a specific approach within a generic paradigm. Although all three architectural evaluation methods share a generic paradigm, which is the architectural evaluation of software, each has its own approach, which makes it specific and differentiates it from the others. Hence the category chosen was that of a specific approach within a generic paradigm. It should be noted that the DESMET methodology makes no distinction between the evaluation of a single method and a set of methods, and DESMET implicitly points the evaluator in the right direction:

1. If the objects to be evaluated are tools and comparisons are being made between two or more of them, Features Analysis or Benchmarking are the best methods.
2. If the evaluation seeks to compare a tool that would enable something to be done automatically with something else to be done manually, a Quantitative Evaluation is necessary.
3. If the objects evaluated are generic methods, an Analysis of Qualitative Effects should be undertaken.
4. If the objects being evaluated are specific methods, a Quantitative Evaluation Method or Features Analysis should be used.

For the reasons given prior to these recommendations, the first three suggestions are rejected. The remaining suggestion is highlighted because of its similarity with the objective proposed. As mentioned earlier, one could envisage a quantitative evaluation method to render the differences between the three methods measurable; however, the Features Analysis of the methods carries more weight, since no evaluation model has been defined to make the comparison between them.

When considering how mature the method is, DESMET focuses on whether the method has been: (1) not developed or used in commercial projects, (2) used in products produced by the same organization, or (3) in extensive use throughout the actual organization. The first option was discarded to begin with because the methods to be evaluated have already been developed, making it possible to obtain sufficiently complete information on them. The second option was also rejected, as experience with the type of object to be evaluated is limited; the methods to be evaluated have been applied to products created outside the organization. The only thing left, then, is the last of the three options, extensive use in the organization: there is documentation available for each method enabling it to be applied reliably.

Lastly, having carried out the activities proposed by DESMET, the method chosen was the Features Analysis Case Study. This choice is shown in Table 1, which lists the conditions present and not present for each method according to DESMET; note the high percentage for the method chosen compared to the others. The last column shows the percentage of favorable conditions that are present for each method proposed by DESMET.

Regarding case studies, they involve the evaluation of a method/tool after it has been used on a "real" software project. However, one can have only limited confidence that a case study will allow the true effect of a method/tool to be assessed, and it is difficult to determine the extent to which the results of one case study can be used as the basis of method/tool investment decisions. Nevertheless, case studies can be incorporated into normal software development activities; if they are performed on real projects, they are already "scaled up" to life size, and they allow one to determine whether or not the expected effects apply in one's own organizational and cultural circumstances.

The next step is to determine the activities of the Features Analysis Case Study (Kitchenham, 1996):

1. Select candidate methods.
2. Identify the criteria evaluated.
3. Select a pilot project.
4. Test each method in each pilot project.
5. Evaluate the features in each method.
6. Analyze the scoring and presentation of results.
7. Present conclusions on the evaluation.
Table 1
Favorable and unfavorable conditions of each method proposed by DESMET, applied to this research

Quantitative experiment
  Favorable conditions present: clearly quantifiable benefits; I wish to make independent context contributions to the method; staff available to take part in the experiment
  Favorable conditions not present: method related to a single question; benefits directly measurable from the results of the tasks
  % Fav.: 50

Quantitative case study
  Favorable conditions present: benefits quantifiable in a single project; stable development procedures; personnel with experience in metrics; evaluation time proportional to the time taken to evaluate a normal-sized project
  Favorable conditions not present: quantifiable benefits prior to withdrawal of the product
  % Fav.: 80

Quantitative measurement
  Favorable conditions present: benefits not quantifiable in a single project
  Favorable conditions not present: existence of a project database with information on productivity, quality and methods; projects where there is experience in using the method
  % Fav.: 33

Feature analysis screen mode
  Favorable conditions present: many methods to be evaluated; relatively short learning time
  Favorable conditions not present: too little time for the evaluation exercise
  % Fav.: 50

Feature analysis case study (method chosen)
  Favorable conditions present: benefits observable in a single project; stable development procedures; population limited to users of the method; evaluation time proportional to the time a normal-sized project takes; benefits hard to quantify
  Favorable conditions not present: none
  % Fav.: 100

Feature analysis experiment
  Favorable conditions present: benefits observable in a single project; benefits hard to quantify
  Favorable conditions not present: too little evaluation time; varied population of users of the method
  % Fav.: 50

Feature analysis measurement
  Favorable conditions present: benefits hard to quantify
  Favorable conditions not present: varied population of users of the method; benefits cannot be observed in a single project; projects undertaken to learn the method
  % Fav.: 25

Analysis of qualitative effects
  Favorable conditions present: interest in the evaluation of generic methods; need to combine methods
  Favorable conditions not present: availability of expert opinions on the method; lack of stability in the development process
  % Fav.: 50

Benchmarking
  Favorable conditions present: none
  Favorable conditions not present: non-intensive method at the human level; method outputs capable of being compared with a benchmark
  % Fav.: 0
A brief description and its application will be presented for each of these activities.

3. Features analysis case study

Following the steps indicated by DESMET, the activities that led to the Features Analysis proposed here were carried out.

3.1. Select a set of candidate methods to evaluate

The research conducted is based on a Features Analysis and a comparison between different architectural evaluation methods. A wide range of software architecture evaluation methods is now available. According to Clements et al. (2002), until recently there were no useful methods for evaluating software architectures; those that did exist had incomplete, ad hoc, non-repeatable approaches, which did not make them very reliable. Because of this, multiple evaluation methods have been proposed. The main
ones are: the Software Architecture Analysis Method—SAAM (Clements et al., 2002), the Architecture Trade-off Analysis Method—ATAM (Clements et al., 2002), the Framework from Software Requirements Negotiation to Architecture Evaluation (In et al., 2001), Active Reviews for Intermediate Designs—ARID (Clements et al., 2002), Design and Use of Software Architectures—DUSA (Bosch, 2000), the Architectural Evaluation Method—AEM (Losavio et al., 2004) and the Cost Benefit Analysis Method—CBAM (Bass et al., 2003), which begins where ATAM leaves off. In order to undertake the features analysis, three of them were selected: ATAM (Clements et al., 2002), DUSA (Bosch, 2000) and AEM (Losavio et al., 2004), since these were the methods on which the fullest documentation was available. The methods studied are described briefly below:

1. Architecture Trade-off Analysis Method—ATAM (Clements et al., 2002): consists of four phases, corresponding to time segments in which a series of activities take place. Phase 0 corresponds to the creation of the
evaluation team and the establishment of agreements between the evaluating organization and the organization that owns the architecture to be evaluated. During Phases 1 and 2, the true evaluation phases of the ATAM, the nine steps described in Table 2 are carried out. Phase 1 focuses on the architecture and concentrates on eliciting and analyzing the architectural information. Phase 2 focuses on the stakeholders and concentrates on eliciting their points of view and checking the results of Phase 1. Lastly, in Phase 3 the final report of the architectural evaluation is produced.

2. Design and Use of Software Architectures—DUSA (Bosch, 2000): this method focuses on the explicit evaluation and design of quality requirements in software architectures. According to Chirinos et al. (2001), it is easier to evaluate quality attributes using the method proposed by Bosch for systems architecture design.
The main activities of DUSA are illustrated in Fig. 1. Bosch also proposes that one of the following evaluation techniques be chosen: scenario based, simulation based, mathematical-model based, or experience based. In keeping with the purpose of this research, the simulation-based evaluation will be used, in which a high-level implementation of the system architecture is employed. DUSA works with an initial architecture, which is evaluated and transformed in order to enhance it, through three phases: (1) architectural design based on functionality, (2) software architectural evaluation, and (3) architectural transformation. It states that there are four categories of transformation for an architecture: imposition of an architectural style, imposition of an architectural pattern, application of a design pattern, and conversion of quality requirements into functionality.
Table 2
Phases of ATAM (Clements et al., 2002): steps and outputs

Steps:
1. Present the ATAM
2. Present business drivers
3. Present architecture
4. Identify architectural approaches
5. Generate quality attribute utility tree
6. Analyze architectural approaches
7. Brainstorm and prioritize scenarios
8. Analyze architectural approaches
9. Present results

Outputs produced across these steps: prioritized statement of quality attribute requirements; catalog of architectural approaches used; approach- and quality-attribute-specific analysis questions; mapping of architectural approaches to quality attributes; risks and non-risks; sensitivity and tradeoff points.
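Step 5 of the ATAM builds a quality attribute utility tree: quality attributes are refined into concrete scenarios, each rated for business importance and architectural difficulty. The following Java sketch is only an illustration of such a structure under our own assumptions; the class and field names are ours, and the concrete scenarios are invented. Only the quality attributes (Reliability, Maintainability) come from the case study described later.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a quality attribute utility tree such as the one built in ATAM step 5.
public class UtilityTreeSketch {

    enum Rating { HIGH, MEDIUM, LOW }

    static class Scenario {
        final String description;
        final Rating importance;   // assigned by the stakeholders
        final Rating difficulty;   // assigned by the architects
        Scenario(String description, Rating importance, Rating difficulty) {
            this.description = description;
            this.importance = importance;
            this.difficulty = difficulty;
        }
    }

    static class AttributeNode {
        final String qualityAttribute;
        final List<Scenario> scenarios = new ArrayList<>();
        AttributeNode(String qualityAttribute) { this.qualityAttribute = qualityAttribute; }
    }

    public static void main(String[] args) {
        List<AttributeNode> utilityTree = new ArrayList<>();

        AttributeNode maintainability = new AttributeNode("Maintainability");
        maintainability.scenarios.add(new Scenario(
                "Add a new document handler without modifying the existing handlers",
                Rating.HIGH, Rating.MEDIUM));

        AttributeNode reliability = new AttributeNode("Reliability");
        reliability.scenarios.add(new Scenario(
                "Recover stored project data after a failed update",
                Rating.HIGH, Rating.HIGH));

        utilityTree.add(maintainability);
        utilityTree.add(reliability);

        // Scenarios rated HIGH/HIGH are the ones the evaluation analyzes first.
        for (AttributeNode node : utilityTree) {
            for (Scenario s : node.scenarios) {
                System.out.println(node.qualityAttribute + ": " + s.description
                        + " [" + s.importance + "/" + s.difficulty + "]");
            }
        }
    }
}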
Fig. 1. The steps taken by the Bosch method (Bosch, 2000).
3. Architectural Evaluation Method—AEM (Losavio et al., 2004): this is a method for evaluating architectural quality using the international standard ISO 9126. AEM proposes the following steps:
1. Analyze the main functional requirements and non-functional requirements (NFR) of the system in order to establish the quality goals.
2. Use the customized ISO 9126-1 quality model (Losavio et al., 2004) for the architecture as a framework. Some of the metrics proposed in the framework for the architectural attributes could be further specified.
3. Present the initial candidate architectures.
4. Construct the comparison table for the candidate architectures.
5. Prioritize the quality sub-characteristics, taking into account the quality requirements of the system.
6. Analyze the results summarized in the table according to the given priorities.
7. Select the initial architecture from among the evaluated candidates on the basis of the previous analysis.

After giving a brief description of the methods to be evaluated, the criteria used to analyze the features were identified.
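AEM steps 4 to 7 amount to building a comparison table, weighting it by the prioritized sub-characteristics, and selecting the best candidate. The Java sketch below shows one possible way to organize that bookkeeping; it is not part of AEM itself. The sub-characteristic names, weights and 0/1 presence scores are our own illustrative assumptions, loosely mirroring the presence/absence convention used later in the evaluation (Table 8).

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: comparison table of candidate architectures against prioritized
// ISO 9126 sub-characteristics, with a simple weighted selection (AEM steps 4-7).
public class AemComparisonSketch {

    public static void main(String[] args) {
        // Step 5: priorities assigned to the sub-characteristics (higher = more important).
        Map<String, Integer> priorities = new LinkedHashMap<>();
        priorities.put("Fault tolerance", 3);
        priorities.put("Recovery", 3);
        priorities.put("Modifiability", 2);
        priorities.put("Time behaviour", 1);

        // Step 4: comparison table, one row of 0/1 presence scores per candidate architecture.
        Map<String, Map<String, Integer>> table = new LinkedHashMap<>();
        table.put("Candidate Architecture 1",
                Map.of("Fault tolerance", 0, "Recovery", 0, "Modifiability", 0, "Time behaviour", 1));
        table.put("Candidate Architecture 2",
                Map.of("Fault tolerance", 1, "Recovery", 1, "Modifiability", 1, "Time behaviour", 1));

        // Steps 6-7: weight each row by the priorities and select the best candidate.
        String best = null;
        int bestScore = Integer.MIN_VALUE;
        for (Map.Entry<String, Map<String, Integer>> row : table.entrySet()) {
            int score = 0;
            for (Map.Entry<String, Integer> cell : row.getValue().entrySet()) {
                score += cell.getValue() * priorities.getOrDefault(cell.getKey(), 0);
            }
            System.out.println(row.getKey() + " -> weighted score " + score);
            if (score > bestScore) { bestScore = score; best = row.getKey(); }
        }
        System.out.println("Selected architecture: " + best);
    }
}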
3.2. Features evaluation

In order to identify the features of the evaluation, the steps indicated by Kitchenham (1996) for drawing up lists of features were taken as a reference: identification of groups of users with different types of requirements; review of the initial list of features representing the groups already identified; study of the importance of each of the features; and, lastly, identification of a suitable metric for each of them. This last step was undertaken based on the Goal-Question-Metric (GQM) paradigm (Basili, 1992). The group of users that submitted the different requirements for a particular method was made up of researchers from the Information Systems Research Laboratory (LISI) (www.lisi.usb.ve).

A specific feature can be broken down into sub-features, which can be broken down further into conceptually simpler elements. These elements can be named attributes (Kitchenham, 1996). Based on the foregoing, and in order to draw up the initial list of features, the main areas of the evaluation context were taken up again as the aspects on which the Features Analysis should be developed. The tree representing the relationship between the features, sub-features and attributes proposed is shown in Fig. 2. According to ISO, a quality characteristic is a set of attributes of a product by which its quality can be described and measured, and a quality characteristic can be refined into a set of sub-characteristics; an attribute is a measurable physical or abstract property of an entity. The definition of the features can be seen in Table 3.

Once the features and their refinements have been identified, the metrics must be defined in order to estimate the features. Kitchenham (1996) points out that there are two types of metrics: simple and compound. On this occasion it was decided to work with compound metrics, a compound metric being one in which the degree of help offered by the method must be measured or judged on an ordinal scale of zero (0) to five (5) (Kitchenham, 1996). The description and definition of this scale are shown in Table 4. Table 5 gives examples of the description of the evaluation sub-features based on the Goal-Question-Metric (GQM) paradigm; as space is limited, the description is only given for the sub-feature Tools. This selection was made from a total of 49 metrics.

In order to lower the evaluation cost (by reducing the number of metrics) or to provide some level of prioritization of these metrics, the following recommendation is suggested (SEI, 2005): for this domain, the commonalities and differences among related methods of interest can be designated as features and depicted in the feature model (Fig. 2). Features in the feature model may be defined as alternative, optional or mandatory.
Fig. 2. Tree of features evaluated.
Table 3
Definition of the features evaluated

General features (present in any method):
  Structure: according to Callaos (1992), a method is a set of well-defined steps, with a starting point and an ending point, whose aim is to attain a particular goal and, in the best of cases, surpass it.
  Approach: a method's approach is related to how it can be classified; according to Callaos (1992), methods can be classified as deductive, with an input of general principles and an output of specific conclusions, or inductive, with an input of a specific experience and an output of a general hypothesis.
  Evaluating team: for Kitchenham and Jones (1997), the evaluating team is responsible for applying the evaluation method, undertaking each and every one of the deliverables required by it, in a subjective way, consistent with the consensus of its members, and with the contribution of the collection and analysis of the results obtained.
  Documentation: according to Pressman (2002), the key areas of the process form the management control base of software projects and establish the context in which the technical methods are applied; work products are obtained (models, documents, data, reports, forms, etc.), milestones are set, quality is ensured and change is managed accordingly.
  Tools: the tools are the objective side of the method which, together with the use techniques that represent the subjective side of the method, allow all the corresponding deliverables to be produced in order to attain the objective proposed (Callaos, 1992).
  Costs: according to Pressman (2002), software feasibility has four solid dimensions: technology, financing, time and resources. From this it can be deduced that these four dimensions represent the costs that determine whether or not the system is feasible.

Specific features (particular aspects related to the architectural evaluation of software quality):
  Software architecture: the software architecture is the product of the architectural design process, which consists of identifying the subsystems and establishing a framework for controlling and communicating with the entities participating in it (Sommerville, 2002).
  Quality: refers to compliance with the specifically established functional and performance requirements, with explicitly documented development standards and with the implicit characteristics expected of any professionally developed software (Pressman, 2002).
    • Quality views: based on ISO 9126, two quality views are considered for the architecture: the internal view and the external view. According to Losavio et al. (2004), when quality characteristics are used for verification they are based on internal quality, whereas when they are used as targets for validation they are based on external quality.
    • Quality requirements: quality requirements, according to ISO, are those needs that a user wants to be satisfied by the service it uses (ISO/IEC, 2002). For Pressman (2002), the quality of a system is only as good as the requirements that define the problem and the design of the solution, among others.
    • Quality measurement: according to ISO (ISO/IEC, 2002), an evaluation metric is what allows us to qualify quantitatively the satisfaction of a quality characteristic. In order to evaluate quality, the engineer must use technical measurements to evaluate quality objectively and not subjectively (Pressman, 2002).
Table 4
Ordinal scale

Value given  Description     Definition
5            Excellent       The estimation of the feature is over and above the expectations
4            Very good       The estimation of the feature exceeds the expectations
3            Good            The estimation of the feature covers all the expectations
2            Poor            The estimation of the feature does not cover the expectations
1            Very poor       The estimation of the feature does not cover the expectations or even come near to doing so
0            Does not apply  The estimation of the feature cannot be undertaken as it does not apply for the feature evaluated
The mandatory features represent the baseline features of a method and the relationships between those features. The alternative and optional features represent the specializations or more general features; that is, they represent what changes are likely to occur over time (SEI, 2005). In our case, the mandatory features correspond to the specific features depicted in Table 3, while the general ones can be considered as the alternative features. Quality sub-features can be alternative features depending on whether an evaluation method is needed to evaluate a specific quality characteristic, such as Reliability or Maintainability. From these criteria, the evaluator can make the cost/benefit analysis.
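Each of the 49 metrics is scored on the 0-5 ordinal scale of Table 4 and the scores are then summed per sub-feature to obtain per-method totals, as reported in the evaluation tables of Section 3.6. The Java sketch below merely illustrates that roll-up; the helper names are ours, and only a small example subset of scores (taken from the sub-feature Tools, metrics 14-16 in Table 9) is shown.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: rolling up 0-5 ordinal metric scores (Table 4) into per-method
// sub-feature totals, as in the evaluation tables of Section 3.6.
public class FeatureScoringSketch {

    // score must be on the ordinal scale of Table 4 (0 = does not apply ... 5 = excellent).
    static void addScore(Map<String, Map<Integer, Integer>> scores, String method, int metric, int score) {
        if (score < 0 || score > 5) {
            throw new IllegalArgumentException("Ordinal scale is 0..5");
        }
        scores.computeIfAbsent(method, m -> new LinkedHashMap<>()).put(metric, score);
    }

    static int subFeatureTotal(Map<String, Map<Integer, Integer>> scores, String method) {
        return scores.getOrDefault(method, Map.of()).values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        Map<String, Map<Integer, Integer>> toolsScores = new LinkedHashMap<>();

        // Example subset: scores for metrics 14-16 of the sub-feature Tools (see Table 9).
        addScore(toolsScores, "ATAM", 14, 4);
        addScore(toolsScores, "ATAM", 15, 5);
        addScore(toolsScores, "ATAM", 16, 5);
        addScore(toolsScores, "DUSA", 14, 3);
        addScore(toolsScores, "DUSA", 15, 4);
        addScore(toolsScores, "DUSA", 16, 3);
        addScore(toolsScores, "AEM", 14, 5);
        addScore(toolsScores, "AEM", 15, 4);
        addScore(toolsScores, "AEM", 16, 4);

        for (String method : toolsScores.keySet()) {
            System.out.println(method + " partial total for Tools: " + subFeatureTotal(toolsScores, method));
        }
    }
}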
3.3. Choice of a pilot project: PROYECTOS DID KMS

Universidad Simón Bolívar (USB) intends to stimulate applications for the financing of research projects and also to promote tools to handle these projects once they have been approved. It further intends to foster clarity and precision in the formulation of projects, in order to reduce initial rejections and assure successful projects. The purpose of the KMS developed in this research is to encourage the professors at the USB to manage their research projects through a Web interface, thereby fostering collaborative work and facilitating information
Table 5
Description of the sub-features

Feature: General
Sub-feature: Tools
Conceptual definition: the tools are the objective side of the method and, together with the use techniques that represent the method's subjective side, they produce the corresponding deliverables to attain the objective proposed (Callaos, 1992).

Operational definition:
14. Does it propose tools in order to specify quality?
15. Does it propose tools in order to apply the scenario technique?
16. Does it propose tools for the analysis of the evaluation?
17. Are the tools for specifying quality efficient?
18. Are the tools for applying the scenario technique efficient?
19. Are the tools for analyzing the evaluation efficient?
20. Are the tools for specifying quality legible and easy to use?
21. Are the tools for applying the scenario technique legible and easy to use?
22. Are the tools for analyzing the evaluation legible and easy to use?
sharing. The objectives to be met are: to encourage people to apply for research project funding, and to foster the development of tools that facilitate the handling of these projects once they are approved. A further intention is to foster clarity and precision in the formulation of these projects, in order to reduce initial rejections and attain successful projects. All this must be done while supporting the processes that are characteristic of a KMS: capturing, generating, sharing and distributing knowledge. The PROYECTOS DID KMS enables the university to capitalize on and store all this knowledge, giving it a competitive advantage over other organizations.

In order to better understand the case study, the system's functionality should be presented based on the use
cases identified for the KMS, in addition to the NFRs and the description of the architectures to which the evaluation methods will be applied. The functional requirements (FR) of the KMS can be identified through use cases, which are illustrated in Fig. 3. The NFRs were classified by the stakeholders as a product of brainstorming; they are strongly related to the architecture by nature and are essential in a system that is considered a quality one (Jacobson et al., 2000; Bass et al., 2003; Ortega et al., 2003). Given the KMS domain, in which knowledge is essential and the main functions are sharing and distributing it, the availability of this knowledge is fundamental to attaining its objectives. The Reliability with which it is handled, the Efficiency with which it is
Fig. 3. Use case model.
Fig. 4. Candidate Architecture 1.
distributed and the Maintainability needed should the KMS evolve must be considered quality characteristics and be encouraged by the architecture. Therefore, the NFRs selected for the KMS are Reliability, Efficiency and Maintainability.

Two candidate architectures were proposed for the KMS; these are shown in Figs. 4 and 5. In order to specify the architecture under the RUP approach (Kruchten, 1999), an initial class diagram (Candidate Architecture 1) was proposed. It includes two of the design patterns proposed by Gamma et al. (1995): Chain of Responsibility and Observer. Design patterns are descriptions of communicating objects and classes that are customized to solve a general design problem within a specific context (Gamma et al., 1995); each design pattern has a particular object-oriented approach. Fig. 4 shows Candidate Architecture 1 with the design patterns used.

In the Chain of Responsibility design pattern, when a client issues a request, it propagates along the chain until it reaches a ConcreteHandler object, which takes responsibility for handling it. This leads to reduced coupling, greater flexibility in the assignment of responsibilities, and no guarantee of receipt.
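A minimal, illustrative sketch of this pattern is shown below. The handler names anticipate those used in the candidate architecture, but the Request type and all method signatures are our own assumptions; the paper does not define this code.

// Illustrative sketch only: the Chain of Responsibility pattern as applied in the candidate
// architectures, where a Components Handler forwards each request along a chain of concrete
// handlers (Financial, Document, ...).
public class ChainOfResponsibilitySketch {

    static class Request {
        final String kind;
        Request(String kind) { this.kind = kind; }
    }

    // Role of the Handler object (played by the Components Handler in Candidate Architecture 1).
    static abstract class Handler {
        private Handler next;
        Handler setNext(Handler next) { this.next = next; return next; }
        void handle(Request request) {
            if (canHandle(request)) {
                process(request);
            } else if (next != null) {
                next.handle(request);          // propagate the request along the chain
            }                                  // note: no guarantee of receipt if nobody handles it
        }
        abstract boolean canHandle(Request request);
        abstract void process(Request request);
    }

    // ConcreteHandler roles (e.g., the Financial and Document handlers of the case study).
    static class FinancialHandler extends Handler {
        boolean canHandle(Request r) { return r.kind.equals("financial"); }
        void process(Request r) { System.out.println("Financial handler processed the request"); }
    }

    static class DocumentHandler extends Handler {
        boolean canHandle(Request r) { return r.kind.equals("document"); }
        void process(Request r) { System.out.println("Document handler processed the request"); }
    }

    public static void main(String[] args) {
        Handler chain = new FinancialHandler();
        chain.setNext(new DocumentHandler());
        chain.handle(new Request("document"));   // forwarded until DocumentHandler accepts it
    }
}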
In Candidate Architecture 1, the Financial Handler, Activity Handler, Mail Handler and Document Handler classes play a role similar to that of a ConcreteHandler object, as they are levels below the Components Handler class, which in turn plays the role of a Handler object. The direct advantages of applying this pattern are better distribution and control of requests and greater scalability.

In the Observer design pattern, all the observers are notified when a change occurs in the state of the stored objects. This has the following consequences: it reduces the coupling between the Data and Observer objects, and it supports broadcast communication and unexpected changes.
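The broadcast mechanism just described can be illustrated with the small Java sketch below. The interface and class names are ours and purely illustrative; the same broadcast is what later motivates the risk noted in the evaluation, namely that updates, and therefore errors, are diffused to every observer.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the Observer pattern as used in Candidate Architecture 1.
public class ObserverSketch {

    interface Observer {
        void update(String dataState);
    }

    // Subject role: the stored data that the handlers observe.
    static class ProjectData {
        private final List<Observer> observers = new ArrayList<>();
        private String state = "initial";

        void attach(Observer o) { observers.add(o); }

        void setState(String newState) {
            this.state = newState;
            // Broadcast: every registered observer is notified, whether the change is valid or not.
            for (Observer o : observers) {
                o.update(state);
            }
        }
    }

    public static void main(String[] args) {
        ProjectData data = new ProjectData();
        data.attach(s -> System.out.println("Activity view refreshed with: " + s));
        data.attach(s -> System.out.println("Document view refreshed with: " + s));
        data.setState("project budget updated");
    }
}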
Fig. 5. Candidate Architecture 2.
The second candidate architecture uses two design patterns: Chain of Responsibility, once again, and Command; it can be seen in Fig. 5. In the Command design pattern (Gamma et al., 1995), when a client issues a request it propagates along the chain until it reaches a ConcreteCommand object that takes responsibility for handling it. This has the following consequences: reduced coupling, easy addition of new commands, support for recording changes, support for transactions, and support for the Undo operation. The structure of the Command pattern follows Gamma et al. (1995).

In Candidate Architecture 2, requests reach each handler, which uses some of the functions defined in the Functions Library; but before these functions are executed, the Record of Modifications, which in turn checks the feasibility of the function and invokes its execution, is updated. For all these reasons, the Format Handler, the Component Handler, the Project Handler and the Request Handler play a role similar to that of a Receiver object, while the Function Library class plays the role of a ConcreteCommand object and the Record of Modifications, in turn, plays the role of the Invoker.
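A minimal sketch of how these roles could fit together in code is given below: the Functions Library plays the ConcreteCommand role, the Record of Modifications the Invoker role, and a handler the Receiver role. The method signatures and the concrete function shown are ours and purely illustrative.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the Command pattern roles described for Candidate Architecture 2.
public class CommandSketch {

    interface Command {
        void execute();
        String describe();
    }

    // Receiver role (e.g., the Project Handler in the case study).
    static class ProjectHandler {
        void updateBudget(double amount) {
            System.out.println("Project budget updated by " + amount);
        }
    }

    // ConcreteCommand role (a function taken from the Functions Library).
    static class UpdateBudgetFunction implements Command {
        private final ProjectHandler receiver;
        private final double amount;
        UpdateBudgetFunction(ProjectHandler receiver, double amount) {
            this.receiver = receiver;
            this.amount = amount;
        }
        public void execute() { receiver.updateBudget(amount); }
        public String describe() { return "update budget by " + amount; }
    }

    // Invoker role (the Record of Modifications): it records the request before invoking it,
    // which is what supports recording changes, transactions and Undo.
    static class RecordOfModifications {
        private final List<String> log = new ArrayList<>();
        void submit(Command command) {
            log.add(command.describe());   // the modification is recorded first
            command.execute();             // then the function is actually invoked
        }
        List<String> history() { return log; }
    }

    public static void main(String[] args) {
        RecordOfModifications record = new RecordOfModifications();
        record.submit(new UpdateBudgetFunction(new ProjectHandler(), 1500.0));
        System.out.println("Recorded modifications: " + record.history());
    }
}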
Once the system's candidate architectures and their quality requirements are known, the methods studied can be applied and the analysis of the resulting quality characteristics presented.

3.4. Testing each method in the pilot project

The application of each method to the case study described above is presented in this section. The first of the methods applied was the Architecture Tradeoff Analysis Method—ATAM (Clements et al., 2002), followed by Design and Use of Software Architectures—DUSA (Bosch, 2000) and the Architectural Evaluation Method—AEM (Losavio et al., 2004), respectively. A brief description of each method was given in the section "Select a set of candidate methods to evaluate". Next, a summary is presented of the evaluation of PROYECTOS DID KMS using each of the methods.

3.4.1. Method #1: Architecture Tradeoff Analysis Method (ATAM)

Once the ATAM had been applied to the candidate architectures, the results of the evaluation were summarized in Table 6, where each architecture's advantages, disadvantages, risks and non-risks associated with the use of the different design patterns are pointed out. Having completed all the steps required by the ATAM for evaluating the architecture, the level of detail was found to be sufficient to begin developing the system with a good degree of certainty as regards the ideal structure of the architecture and the quality characteristics and attributes expected.
Table 6
Scenario analysis summary for both architectures (Pérez et al., 2001)

Maintainability
  Candidate Architecture 1: has a Risk associated with the Observer pattern, since just as updates are diffused, so are errors.
  Candidate Architecture 2 (transformed): has a Non-risk condition, as no negative consequence related to the use of the Command pattern is known; it also provides a significant advantage by permitting better control and distribution of functions.

Reliability
  Candidate Architecture 1: has a Risk condition associated with the Observer pattern, since the source of errors is unknown and recovery operations may therefore be hard to undertake.
  Candidate Architecture 2 (transformed): has a Non-risk condition identified for the Reliability attribute, and has the advantage of reducing the risk to data integrity, since the previous values are stored, which provides much fuller and more reliable control of modifications.

Efficiency
  Candidate Architecture 1: by using the Observer pattern, all the observers are notified when there is a change in the state of the stored data and, depending on the number of observers, this may affect response time (disadvantage).
  Candidate Architecture 2 (transformed): by using the Command pattern, each transaction encapsulates a set of activities and participants, which has a positive impact on effectiveness but may make it less efficient (disadvantage).

Note: both architectures have a Risk condition related to the Chain of Responsibility pattern, since there is no guarantee of receipt in the event that the chain of transmission of responsibilities has not been properly configured.
Having shown the application of ATAM, we now present the application of Design and Use of Software Architectures (Bosch, 2000).

3.4.2. Method #2: Design and Use of Software Architectures (DUSA)

Of the four types of transformation proposed by DUSA, it was decided to use the Application of a Design Pattern transformation, as it can be applied to only part of the architecture (Bosch, 2000). It was also chosen because it increases Maintainability: this transformation makes it easier to replace algorithms in order to execute a specific task (Bosch, 2000), promoting the Maintainability attribute, which is very important within the domain of a KMS. It was therefore decided to replace the Observer pattern of the architecture in Fig. 4 with the Command design pattern (Gamma et al., 1995), since this pattern fosters Maintainability better. Fig. 5 shows the candidate architecture (transformed); note that the design patterns used in this architecture are Chain of Responsibility (once again) and Command. Following the application of the above steps, Table 7 presents a summary of the most important measurements for both architectures. Remember that Candidate Architecture 1 is the same candidate architecture shown in Fig. 4.
3.4.3. Method #3: Architectural Evaluation Method (AEM)

The objective of this section is to apply this method to PROYECTOS DID KMS and present the results. Table 8 summarizes the results obtained from the evaluation of the architectures. Once the results of the architectural evaluation for each method are known, the features of each of the methods can be analyzed, as shown in the following section.
3.5. Features evaluation for each method

This section presents an analysis of the behavior of the features observed in the methods applied. With these features, points of reference can be established between the methods (Kitchenham, 1996). The features that enable the methods to be compared were presented in the section "Identify the criteria evaluated". The evaluation of the methods for the sub-feature Tools is shown in Table 9. Based on the results obtained in the evaluation, the next step is to analyze those results, enabling the strengths of each method to be identified.
Table 7
Summary of the measurements taken for both architectures (Pérez et al., 2002)

Profile          Sub-characteristic        Metric                       Candidate Architecture 1     Candidate Architecture 2 (transformed)
Maintainability  Control structure system  Number of modules            Addition of one module       Addition of two modules
Maintainability  Level of coupling         Number of interconnections   Average: 2 interconnections  Average: 2.14 interconnections
Maintainability  Control structure system  Depth of the trace           Average: 52 levels           Average: 51.5 levels
Reliability      Fault tolerance           Functional dependence        1                            2
Efficiency       Behavior over time;       Use of CPU and memory;       Average: 52                  Average: 51.5
                 Use of resources          Response time
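Table 7 reports structural measurements such as the number of modules, the average number of interconnections per module and the depth of the trace. The Java sketch below shows one simple way such figures might be computed from a module dependency graph during a simulation-based evaluation; it is only an illustration under our own assumptions (toy module names, acyclic graph), not the measurement procedure used in the paper.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: computing simple structural measurements from a module dependency graph.
public class ArchitectureMetricsSketch {

    public static void main(String[] args) {
        // Directed dependency graph: module -> modules it calls (toy subset, assumed acyclic).
        Map<String, Set<String>> dependencies = new LinkedHashMap<>();
        dependencies.put("ComponentsHandler", new LinkedHashSet<>(Set.of("FinancialHandler", "DocumentHandler")));
        dependencies.put("FinancialHandler", new LinkedHashSet<>(Set.of("FunctionsLibrary")));
        dependencies.put("DocumentHandler", new LinkedHashSet<>(Set.of("FunctionsLibrary")));
        dependencies.put("FunctionsLibrary", new LinkedHashSet<>());

        int numberOfModules = dependencies.size();
        int totalInterconnections = dependencies.values().stream().mapToInt(Set::size).sum();
        double averageInterconnections = (double) totalInterconnections / numberOfModules;

        System.out.println("Number of modules: " + numberOfModules);
        System.out.println("Average interconnections per module: " + averageInterconnections);
        System.out.println("Depth of trace from ComponentsHandler: "
                + traceDepth(dependencies, "ComponentsHandler"));
    }

    // Depth of the longest call path starting from the given module (iterative depth-first search).
    static int traceDepth(Map<String, Set<String>> graph, String start) {
        int maxDepth = 1;
        Deque<Object[]> stack = new ArrayDeque<>();
        stack.push(new Object[] { start, 1 });
        while (!stack.isEmpty()) {
            Object[] frame = stack.pop();
            String module = (String) frame[0];
            int depth = (Integer) frame[1];
            maxDepth = Math.max(maxDepth, depth);
            for (String callee : graph.getOrDefault(module, Set.of())) {
                stack.push(new Object[] { callee, depth + 1 });
            }
        }
        return maxDepth;
    }
}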
Table 8
Results of the evaluation proposed by AEM (Casal, 2003)

Quality characteristic  Quality sub-characteristic  Candidate Architecture 1 (Observer pattern)        Candidate Architecture 2 (Command pattern)
Reliability             Maturity                    0                                                   1
                        Fault tolerance             0                                                   1
                        Recovery                    A. 0  B. 0  C. 0                                    A. 1  B. 1  C. 1
Functionality           Security                    1                                                   1+
Efficiency              Performance                 Server response time; time: from start to finish   Subscription time + server response time; time: momentary
                        Use of resources            Space: DB of projects                               Space: DB of subscriptions + DB of projects
Maintainability         Analysis                    0                                                   1
                        Modification                0                                                   1
                        Stability                   1                                                   1
                        Testability                 =                                                   =
                        Interaction comp.           0                                                   1
                        Modularity                  1                                                   0

Notes. Evaluation metric: 0 = absence of the characteristic; 1 = presence of the characteristic; = means the same in both architectures; 1+ means present but less favorable than in the other architecture. A = capacity to restore the performance level; B = data recovery capacity; C = little effort and recovery time.
3.6. Analysis of the scoring and presentation of results

The analysis of the evaluation of the features in the three methods was conducted in accordance with the features and sub-features tree shown in Fig. 2, starting with the general features and their sub-features. On top of the scoring, the methods were categorized and prioritized according to different aspects, and a global analysis is then given for each method.

3.6.1. General features

3.6.1.1. Analysis of the sub-feature structure. As can be seen, the three proposals are mature as far as the construction of a method is concerned: each has a clearly defined starting and finishing point, meets the objective sought satisfactorily, and has clearly defined stages. From the quantitative point of view, based on the scores obtained, all three methods have the highest score for each metric, which shows that they all reflect a good level of structuring. Since the results of the three methods are the same in this evaluation, it is not considered necessary to prioritize them by result (Table 10).

3.6.1.2. Analysis of the sub-feature approach. Again, similar values are obtained for each metric in the
three methods. One can also see that the scenario is the emerging paradigm for architectural evaluation; in other words, all the architectural evaluation methods tend to use scenarios as the principle behind their approach. As with the previous sub-feature, the methods are not prioritized according to their scores (Table 11).

3.6.1.3. Analysis of the sub-feature evaluating team. As can be appreciated, both AEM and ATAM are better at proposing roles for the evaluating team, as they indicate them explicitly, making them more organized and better documented for application (Table 12).

3.6.1.4. Analysis of the sub-feature documentation. The deliverables are defined in all three methods; however, ATAM's documentation is fuller and more consistent, as well as clearly defined. For AEM and DUSA, information is lacking in some sections, which may cause confusion and mean that the evaluating team takes on the definition of what is asked for, which may not be adequate and may lead to undesired results (Table 13).

3.6.1.5. Analysis of the sub-feature tools. The tools needed in an architectural evaluation method can be classified under three types (Table 14):
Table 9
Evaluation of methods for the sub-feature Tools (feature: General)

14. Does it propose tools to specify quality?
  ATAM: 4. Attempts to break quality down, but does not specify the tool; it does so at a high level.
  DUSA: 3. Its proposal is not very acceptable, as it does not propose a tool and is based on four quality features indicated by the method itself.
  AEM: 5. Yes, it is based on an international standard and proposes metrics.

15. Does it propose tools to apply the scenario technique?
  ATAM: 5. Yes, the utility tree.
  DUSA: 4. Yes, it uses "profiles" (sets of scenarios, generally with some relative importance associated).
  AEM: 4. Uses the scenario technique, though not explicitly; it works with use cases and weights them.

16. Does it propose tools for the analysis of the evaluation?
  ATAM: 5. Yes, the ABAS (attribute-based architectural style).
  DUSA: 3. Yes, it proposes some tools for the analysis of the evaluation, but to a lesser extent than the other methods.
  AEM: 4. Yes, it proposes a table where the attention points are shown.

17. Are the tools for specifying quality efficient?
  ATAM: 4. Yes, it is efficient since it specifies the utility tree.
  DUSA: 2. Yes, but at a minimum level of acceptability.
  AEM: 5. Yes, it is efficient because the tools are based on international standards.

18. Are the tools for applying the scenario technique efficient?
  ATAM: 4. Yes, since the ATAM is very specific.
  DUSA: 2. Yes, but at the minimum level of acceptability.
  AEM: 4. Yes, because it guides the evaluator on the use of the table.

19. Are the tools for analyzing the evaluation efficient?
  ATAM: 4. Yes, since the ATAM is very specific.
  DUSA: 2. Yes, but with a minimum level of acceptability.
  AEM: 4. Yes, because it guides the evaluator on how to use the table.

20. Are the tools for specifying the quality of the architecture legible and easy to use?
  ATAM: 4. Yes, it is easy and legible; it has five quality characteristics to be evaluated.
  DUSA: 3. Yes, it is easy and legible but at an acceptable level; it only has four quality characteristics to be evaluated.
  AEM: 5. Yes, its legibility and simplicity meet international standards; it has the largest number of quality characteristics to be evaluated, since it is based on the ISO standard.

21. Are the tools for applying the scenario technique legible and easy to use?
  ATAM: 4. Yes, it is easy and legible; it is specific in the utility tree.
  DUSA: 3. Yes, but because it is not precise, its level is acceptable.
  AEM: 5. Yes, its legibility and simplicity meet those of an international standard.

22. Are the tools for analyzing the evaluation legible and easy to use?
  ATAM: 4. Yes, the method is precise and specific for the analysis of the evaluation.
  DUSA: 3. Its level is acceptable, since it calls for a degree of preparation in order to carry out certain activities.
  AEM: 5. The framework presented in the method is fairly simple to use.
Table 10
Evaluation of the sub-feature structure

Operational definition                                                        ATAM  DUSA  AEM
01. Has it a well-defined starting point?                                       5     5     5
02. Has it a clearly defined finishing point?                                   5     5     5
03. Is the objective sought by the evaluation method reached satisfactorily?    5     5     5
04. Are its stages clearly defined?                                             5     5     5
Total                                                                          20    20    20
Table 11
Evaluation of the sub-feature approach by the method

Operational definition                                           ATAM  DUSA  AEM
05. Is it a deductive method?                                      5     5     5
06. Is it an inductive method?                                     1     1     1
07. Is it based on a principle, paradigm or specific direction?    5     5     5
Total                                                             11    11    11
Table 12
Evaluation of the sub-feature evaluating team

Operational definition                               ATAM  DUSA  AEM
08. Does it propose roles to the evaluating team?      5     2     4
Total                                                  5     2     4
Table 13
Evaluation of the sub-feature documentation

Operational definition                                                                             ATAM  DUSA  AEM
09. Does the method have clearly defined deliverables?                                               3     2     3
10. Does it enable attention points to be identified and documented for the software architects?    5     1     3
11. Are the help systems easy to access?                                                             5     4     4
12. Is the documentation complete?                                                                   5     4     3
13. Is the documentation consistent?                                                                 4     4     4
Total                                                                                               22    15    17
Table 14
Evaluation of the sub-feature tools

Operational definition                                                           ATAM  DUSA  AEM
14. Does it propose tools to specify quality?                                      4     3     5
15. Does it propose tools to apply the scenario technique?                         5     4     4
16. Does it propose tools to analyze the evaluation?                               5     3     4
17. Are the tools for specifying quality efficient?                                4     2     5
18. Are the tools for applying the scenario technique efficient?                   4     2     4
19. Are the tools for analyzing the evaluation efficient?                          4     3     3
20. Are the tools for specifying quality legible and easy to use?                  4     3     5
21. Are the tools for applying the scenario technique legible and easy to use?     5     3     4
22. Are the tools for analyzing the evaluation legible and easy to use?            5     3     4
Total                                                                             40    26    40
AEM comes out better. This is because ATAM not only proposes a tool but in turn breaks down the specification of the scenario into stimulus, environment and response. Because more documentation is available on the scenarios, they are easier to use, more legible and more efficient. AEM has a tool for applying the scenario technique. It works with use cases and ponders them, but not consciously. It ranks as acceptable as far as being efficient, easy to use and legible. The causes attributed to the results of DUSA stem from similar reasons to those of AEM; the difference being that this method works with scenarios through profiles. The scores for the metrics are below those for the previous methods, efficiency being one of the greatest disadvantages. • Tool for analyzing the evaluation: This tool must provide a structure for the input, analysis and presentation of evaluation data. ATAM gives the best results in this tool’s proposal, enabling not only the evaluation to be analyzed, but also allowing tradeoffs, sensitivity points, risk points, etc. to be identified. It should be noted that the tool proposed by this method, which is called ABAS, is the same one that supports the three requirements mentioned; in other words, through this tool ATAM specifies the quality, applies the scenario technique and, lastly, analyzes the
evaluation. As far as ease of use, legibility and efficiency are concerned, its level is acceptable for the evaluation. For the analysis of the evaluation, AEM proposes the use of a tool that enables the focal points to be documented; in terms of ease of use, legibility and efficiency it exceeds the acceptable values and has an advantage over the rest of the methods evaluated. DUSA meets the acceptable values but does not indicate a structure for the analysis; instead it stipulates that the results should be analyzed according to the transformation technique being used. Its ease of use, legibility and efficiency are acceptable for the evaluation, but below the level of the other methods.

Having conducted an overall analysis of the results, both AEM and ATAM score highest as far as the ease of use, legibility and efficiency of the proposed tools are concerned; this reduces costs (time, investment, trained staff, etc.).

3.6.1.6. Analysis of the sub-feature costs. As can be seen from the first four metrics, AEM is the most economical method. The reason for this is that, of all the tools proposed, its tools are the easiest to use, as analyzed in the previous section. Its tools do not entail any investment in hardware or software to implement them, nor is it necessary to train skilled staff. Although the score for ATAM is below the score for AEM, the difference is only slight, which is why the two methods can be considered equally favorable (Table 15). DUSA is the only one that may involve investing in software and hardware, provided the use of an Architecture Description Language (ADL) or a mathematical model for the evaluation is considered necessary. It should be noted that this decision is not compulsory; the author suggests it to complement the evaluation. That is why the analysis of its cost feature is the least favorable.

The results for the analysis of the General features are summarized in Fig. 6. The Specific features and their sub-features are analyzed below.
Table 15
Evaluation of the sub-feature costs

Operational definition                                                                              ATAM  DUSA  AEM
23. Doesn't it involve a financial investment in hardware to implement it?                            5     3     5
24. Doesn't it involve a financial investment in software to implement it?                            5     3     5
25. Doesn't it involve some type of investment: payment of licenses, complementary material, etc.?    5     5     5
26. Doesn't it involve additional costs for personnel?                                                4     3     5
27. Doesn't it require an experienced evaluating team?                                                4     3     5
28. Doesn't it entail a training curve?                                                               4     3     5
29. Doesn't it entail a learning curve?                                                               4     3     4
Total                                                                                                31    23    34
Fig. 6. Summary of values obtained for the general features.
3.6.2. Specific features

3.6.2.1. Analysis of the sub-feature software architecture. As can be seen from the results of the evaluation, the three methods meet the objective proposed: choosing the architecture that best meets the quality requirements. However, ATAM and AEM focus on evaluating the different architectures proposed and choosing the best of them, whereas DUSA's strength lies in its ability to transform an initial architecture in order to obtain the ideal sought (Table 16). AEM is also characterized by having a formal language for describing the architecture, which is very important when bringing the candidate architectures into the system; neither ATAM nor DUSA takes this aspect into account, so the use of a language for this depends on the user. Another important aspect when choosing an architecture for a system is risk evaluation. Here ATAM and AEM are the most favorable, because they analyze the impact of one decision compared with another; DUSA, for its part, attempts to meet the requirements even if the transformation is less favorable in other aspects. The following sub-feature corresponds to Quality, which was split into attributes to provide a greater level of detail on the architecture evaluated.

3.6.2.2. Analysis of the attribute quality views. As can be appreciated, all three methods handle the architectural quality views very acceptably. The difference for AEM compared with the other two methods arises because, despite the metrics it proposes for the evaluation, the application of some of them is not very obvious (Table 17).

3.6.2.3. Analysis of the attribute quality standard. As far as the use of a standard to check whether the quality requirements are fulfilled is concerned, AEM behaves better by instantiating ISO 9126. Both ATAM and DUSA are less formal, since they do not consider the use of standards (Table 18).

3.6.2.4. Analysis of the attribute quality requirements. As regards the quality requirements sought by the system architecture, the evaluation of AEM is the best, in that
Table 16
Evaluation of the sub-feature software architecture

Operational definition                                                                                          ATAM  DUSA  AEM
30. Does it make it possible to work with more than one initial candidate architecture?                           5     1     4
31. Does it propose a formal language for the specification of the architecture?                                  1     1     5
32. Is it useful for evaluating the architecture of a system that has already been developed?                     5     3     5
33. Does it support identification of the most favorable architecture?                                            5     4     5
34. Does it guide the transformation of the architecture, i.e. does it improve one or more of its attributes?     1     5     1
35. Does it choose the best architecture, based on the system's quality model?                                    4     4     4
36. Does it analyze the impact and risk of architectural decisions?                                               5     2     4
Total                                                                                                            26    20    28
Table 17
Evaluation of the attribute quality views

Operational definition                                                                                                                         ATAM  DUSA  AEM
37. Does it measure the quality associated with the services offered by components that depend directly and exclusively on the architecture?    5     5     4
38. Do the quality characteristics considered by the method have internal quality?                                                              5     5     5
39. Do the quality characteristics considered by the method have external quality?                                                              5     5     5
Total                                                                                                                                           15    15    14
it takes into account both the functional and the non-functional requirements. ATAM scores highest for considering the different points of view on quality requirements, which is directly attributable to the fact that its evaluating team does a better job of considering all the stakeholders; DUSA does not take these into account and depends largely on the system architect, while AEM does so implicitly by considering the requirements adopted by the system (Table 19). The joint study of the quality characteristics is an important aspect because, even when the study is done characteristic by characteristic, one characteristic inevitably depends on or is related to another in the system; when the evaluation is simultaneous, it is therefore more accurate and objective. Thus for DUSA and AEM the results are not good
because the analysis of a single quality characteristic cannot appreciate the impact of the evaluation in relation to the remaining ones.

3.6.2.5. Analysis of the attribute metrics for evaluating quality. In this case AEM obtains the best results for the quality features and sub-features to be evaluated. This is because its quality characteristics are not only refined into sub-characteristics but are also associated with a particular attribute to be evaluated; these aspects are covered by neither ATAM nor DUSA. Accordingly, AEM is the only one of the three methods that proposes metrics for evaluating the characteristics, these being associated with the architecture. DUSA is the method that scored best for prioritizing quality requirements, since by using
Table 18
Evaluation of the attribute quality standard

Operational definition                                                                       ATAM  DUSA  AEM
40. Is it based on a quality standard to specify the quality requirements to be evaluated?     1     1     5
Table 19
Evaluation of the attribute requirements

Operational definition                                                                                                                ATAM  DUSA  AEM
41. Does it take into account the system's non-functional requirements?                                                                 5     5     5
42. Does it take into account the system's functional requirements?                                                                     1     4     5
43. Are the different points of view considered in the quality requirements (user's point of view, developer's point of view, etc.)?    5     2     2
44. Are the quality characteristics studied together?                                                                                   5     1     2
Total                                                                                                                                  16    12    14
Nevertheless, the levels attained by ATAM and AEM for this last metric meet the desired levels (Table 20). Based on the evaluation of the specific sub-features related to Software Architecture and Quality, Fig. 7 shows a graph summarizing the values obtained for the specific features, while Fig. 8 summarizes the results obtained for the sub-feature Quality and all its attributes. Then, as in the case of the General Features, Table 21 summarizes the evaluation of the three methods. At the general level, each of the methods is well structured; their main particular strengths are listed below.
Fig. 7. Summary of values obtained for the specific features.
For ATAM:

• It has the most complete and diverse evaluating team, which ensures broad participation by the stakeholders, thereby creating an all-encompassing concept of architectural quality.
• It is the most precise of the three methods, in that it applies scenario analysis.

For DUSA:

• Working from a specific architecture makes it suitable for evaluating systems that have already been developed.
• It may be costly when the simulation technique is used to complement the evaluation; however, it is the only method that guides the process of improving the architecture through its transformation.

For AEM:

• Its main advantage is that it uses an internationally recognized quality standard, ISO/IEC 9126. It is also the only one of the three methods that proposes metrics for evaluating the features, and these features are associated with the architecture, which makes the evaluation method more consistent.
• In addition, it proposes a set of metrics that significantly supports decision-making.
• If the goal is efficiency, AEM is the method of choice.
Fig. 8. Summary of values obtained for the specific sub-features quality.
Because the methods differ so greatly from one another, it is hard to choose one over another; the choice will depend on the requirements of the evaluation. The decision can be based on the criterion of one of the features evaluated. If the purpose of the evaluation is to transform the architecture, the best method is DUSA. If, on the other hand, the idea is to choose the ideal architecture from several candidates, ATAM or AEM would be recommended, as these methods have been designed for that specific purpose. How well a system architecture is evaluated with one method or another will depend on how closely the architectural team's objectives resemble the principles of the method chosen, which can be quantified and systematized with the set of features proposed here.
Table 20
Evaluation of the attribute metrics for evaluating quality (scores given as ATAM / DUSA / AEM)

45. Are the quality characteristics refined into specific sub-characteristics? 1 / 1 / 5
46. Are the quality sub-characteristics associated with an attribute to be evaluated? 1 / 1 / 5
47. Does it propose metrics for evaluating the quality attributes? 1 / 1 / 4
48. Are the metrics used to measure the quality of the attributes from an architectural standpoint? 1 / 1 / 4
49. Does it propose how to prioritize the quality requirements? 4 / 5 / 4
Total: 8 / 9 / 22
Table 21
Summary of the results obtained in the evaluation of the three methods

Sub-feature/attribute | Prioritization | Aspects considered
Structure | No prioritization | –
Approach by the method | No prioritization | –
Evaluating team | 1. ATAM; 2. AEM; 3. DUSA | Organization; documentation; encourages architectural evaluation
Documentation | 1. ATAM; 2. AEM; 3. DUSA | Completeness; consistency
Tools | 1. ATAM, AEM; 2. DUSA | Usability; legibility; efficiency; cost reduction
Costs | 1. AEM; 2. ATAM; 3. DUSA | Tools proposed easy to use; purchase of hardware or software
Software architecture | 1. AEM; 2. ATAM; 3. DUSA | Complies with the requirements; choice of the ideal architecture; risk evaluation
Quality/views | 1. ATAM; 2. DUSA; 3. AEM | Attention to quality; application of metrics
Quality/standards | 1. AEM; 2. ATAM, DUSA | Formality
Quality/requirements | 1. ATAM; 2. AEM; 3. DUSA | Completeness; objectivity; precision
Quality/evaluation metrics | 1. AEM; 2. DUSA; 3. ATAM | Attention to quality; application of metrics; prioritization of requirements; support for decision-making
Their strengths could be identified based on the analysis of these features applied to the three architectural evaluation methods. ATAM stood out for the documentation it proposes for the evaluation process; DUSA facilitates the improvement of the architecture through its transformation phase; and, because of its formality, AEM emphasizes the specification of architectural quality. A future task could involve combining the strengths of the methods studied here with those of other methods under study in order to arrive at an ideal evaluation method.

4. Conclusions

Choosing an evaluation method is no easy task. It involves careful consideration of the main advantages of the available methods and a determination of which of their strengths best fit the requirements. This article puts forward a set of features and sub-features with their 49 metrics, leading to the identification of these strengths. The features proposed here do not by themselves select a method; to make the selection, the features must first be weighted according to the requirements established by the stakeholders.
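As an illustration of such stakeholder-driven weighting (not part of the original study), the following Python sketch combines a subset of the sub-feature totals reported in Tables 16, 19 and 20 with hypothetical weights; the weight values, variable names and selection rule are assumptions made here for clarity.

```python
# Illustrative only: weighting sub-feature totals by stakeholder priorities
# before comparing methods. The totals come from Tables 16, 19 and 20;
# the weights are hypothetical and not proposed by the paper.

feature_totals = {
    "software_architecture": {"ATAM": 26, "DUSA": 20, "AEM": 28},  # Table 16
    "quality_requirements":  {"ATAM": 16, "DUSA": 12, "AEM": 14},  # Table 19
    "evaluation_metrics":    {"ATAM": 8,  "DUSA": 9,  "AEM": 22},  # Table 20
}

stakeholder_weights = {   # hypothetical weights, chosen to sum to 1
    "software_architecture": 0.5,
    "quality_requirements":  0.3,
    "evaluation_metrics":    0.2,
}

def weighted_score(method):
    """Weighted sum of a method's sub-feature totals."""
    return sum(stakeholder_weights[feature] * totals[method]
               for feature, totals in feature_totals.items())

ranking = sorted(["ATAM", "DUSA", "AEM"], key=weighted_score, reverse=True)
print({m: round(weighted_score(m), 1) for m in ranking})
# With these weights: AEM 22.6, ATAM 19.4, DUSA 15.4
```

Different weights would, of course, favour a different method, which is precisely the point made above: the selection depends on the evaluation requirements.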
The methods stress different aspects of the architectural evaluation: specification, application of techniques and tools, analysis, etc. Hence the best method would result from a combination of the strengths of each of the methods evaluated.

Acknowledgments

This research was funded by FONACIT, Fondo Nacional de Ciencia, Tecnología e Innovación, through project S1-2001000794. The authors wish to express their thanks to Eng. Casal for his valuable assistance.

References

Basili, V., 1992. Software modeling and measurement: the goal/question/metric paradigm. Technical Report CS-TR-2956, University of Maryland.
Bass, L., Clements, P., Kazman, R., 2003. Software Architecture in Practice. Addison-Wesley.
Bosch, J., 2000. Design and Use of Software Architectures. Addison-Wesley.
Callaos, N., 1992. A systemic 'systems methodology'. In: 6th International Conference on Systems Research, Informatics and Cybernetics. International Institute of Advanced Studies in Systems Research and Cybernetics and Society for Applied Systems Research, Florida.
Casal, J., 2003. Evaluación de una Arquitectura Idónea para Sistemas Colaborativos que cumple con los Atributos de Calidad Aplicando ISO/IEC 9126. Tesis, Universidad Simón Bolívar, Venezuela.
Chirinos, L., Losavio, F., Pérez, M., 2001. Feature analysis for quality-based architecture design methods. In: Proceedings of the Jornadas de Ciencias de la Computación Chilenas 2001 (JCCC2001), Caracas, Venezuela.
Clements, P., Kazman, R., Klein, M., 2002. Evaluating Software Architectures: Methods and Case Studies. SEI Series in Software Engineering. Addison-Wesley.
Gamma, E., Helm, R., Johnson, R., Vlissides, J., 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
In, H., Kazman, R., Olson, D., 2001. From Requirements Negotiation to Software Architectural Decisions. Software Engineering Institute, Carnegie Mellon University. Available from: .
International Standardisation Organisation, 2002. What are standards? Available from: .
Jacobson, I., Booch, G., Rumbaugh, J., 2000. El Proceso Unificado de Desarrollo de Software. Addison-Wesley.
Kitchenham, B., 1996. DESMET: a method for evaluating Software Engineering methods and tools. Technical Report TR96-09, University of Keele.
Kitchenham, B., Jones, L., 1997. Evaluating software engineering methods and tools. Part 5: The influence of human factors. ACM Software Engineering Notes 22 (1), 13–15.
Kruchten, P., 1999. The Rational Unified Process: An Introduction, second ed. Addison-Wesley Longman.
Losavio, F., Chirinos, L., Matteo, A., Lévy, N., Ramdane-Cherif, A., 2004. ISO quality standards for measuring architectures. Journal of Systems and Software 72 (2), 209–223.
Ortega, M., Pérez, M., Rojas, T., 2003. Construction of a systemic model for evaluating a software product. Software Quality Journal 11 (3), 219–242.
Pérez, M., Grimán, A., Domínguez, K., 2001. Características de Calidad Para la Arquitectura del Sistema Proyectos DID-KMS. LI Convención Anual de AsoVAC, San Cristóbal, Venezuela, Noviembre.
Pérez, M., Grimán, A., Losavio, F., 2002. Simulation-based architectural evaluation for collaborative systems. In: XXII International Conference of the Chilean Computer Science Society (SCCC), Copiapó, Chile.
Pressman, R., 2002. Ingeniería del Software: un enfoque práctico. McGraw-Hill, Madrid.
Sommerville, I., 2002. Software Engineering, fifth ed. Addison-Wesley.
SEI, Carnegie Mellon, 2005. Feature Analysis. Retrieved November 15, 2005. Available from: .

Anna Cecilia Grimán Padua holds an M.Sc. in Systems Engineering from Simón Bolívar University (Venezuela). She is an Associate Professor at Simón Bolívar University, working in the Process and Systems Department. She is a member of the LISI research group and her main research interests are Software Architecture, Software Quality, Methodologies, and Collaborative Systems Development.

María Angélica Pérez de Ovalles is a member of the Association for Information Systems. She received her Ph.D. in Computational Sciences at the Central University of Venezuela. She is a Full Professor at Simón Bolívar University, working in the Process and Systems Department. She is a member of the LISI research group and her main research interests are Systemic Quality of Software, Software Architecture, and Information Systems Methodologies.

Luis Eduardo Mendoza Morales is a member of the Association for Information Systems. He holds an M.Sc. in Systems Engineering from Simón Bolívar University (Venezuela). He is an Associate Professor at Simón Bolívar University, working in the Process and Systems Department. He is a member of the LISI research group and his main research interests are the Impact of Information Systems on organizations, Systems Integration, and Information Technology.

Francisca Losavio received her doctoral degrees in France from the University of Paris-Sud, Orsay. She is head of the research Laboratory of Software Technology (LaTecS) of the Software Engineering and Systems (ISYS) research center, Faculty of Science, Central University of Venezuela, Caracas, where she teaches in the Software Engineering postgraduate program. Her main research topics are software architecture and software quality.