From requirements negotiation to software architecture decisions


Information and Software Technology 47 (2005) 511–520 www.elsevier.com/locate/infsof

Rick Kazman (a,c), Hoh Peter In (b,*), Hong-Mei Chen (c)

a Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA
b Department of Computer Science and Engineering, Korea University, Seoul 136-701, South Korea
c Department of Information Technology Management, University of Hawaii, Honolulu, HI 96825, USA

Received 6 April 2004; revised 2 October 2004; accepted 8 October 2004 Available online 18 December 2004

Abstract

Architecture design and requirements negotiation are conceptually tightly related but often performed separately in real-world software development projects. As our prior case studies have revealed, this separation causes uncertainty in requirements negotiation that hinders progress, limits the success of architecture design, and often leads to wasted effort and substantial re-work later in the development life-cycle. Explicit requirements elicitation and negotiation is needed to appropriately consider and evaluate architecture alternatives, and the architecture alternatives need to be understood during requirements negotiation. This paper proposes the WinCBAM framework, which extends an architecture design method, the Cost Benefit Analysis Method (CBAM), to include an explicit requirements negotiation component based on the WinWin methodology. We then provide a retrospective case study that demonstrates the use of WinCBAM. We show that the integrated method is substantially more powerful than the WinWin and CBAM methods performed separately. The integrated method can assist stakeholders to elicit, explore, evaluate, negotiate, and agree upon software architecture alternatives based on each of their requirements' Win conditions. By understanding the architectural implications of requirements, stakeholders can negotiate them more successfully: potential requirements conflicts can be discovered or alleviated relatively early in the development life-cycle.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Requirements negotiation; Architecture analysis; Conflict resolution; WinWin; CBAM; ATAM

1. Motivation

Architecture design and requirements negotiation for software-intensive systems are tightly related. However, in most modern processes for building such systems, these two activities are widely separated, both conceptually and temporally. There has been substantial research in the (separate) fields of requirements negotiation and architecture design. But in practice most designers and developers of complex software-intensive systems are left struggling to elicit requirements that are elusive in nature; they have few guidelines or methods for how to identify critical requirements and link them to architecture decisions.

* Corresponding author. Address: Korea University, Seoul, 136-701, Rep. of Korea. E-mail addresses: [email protected] (R. Kazman), [email protected] (H.P. In), [email protected] (H.-M. Chen).

0950-5849/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.infsof.2004.10.001

Moreover, most requirements are not as 'static' as one would hope; many requirements are not discovered or clarified until the system's architectural design has been explored. Many iterative design methods attempt to address these issues, but they often loop through an implementation phase and back, which causes substantial and unnecessary delay and cost. Even the Rational Unified Process, which is described as being 'architecture-centric', distinguishes Requirements and Analysis and Design as separate 'workflow details' [18]. Requirements and architecture design are seen as separate activities, and iteration is the means for them to interact. This research specifically addresses why, and examines how, the architecture design process can be tightly integrated with the requirements negotiation process, resulting in time savings, cost savings, risk reduction, and higher design quality (i.e. maintainability, flexibility, user acceptance), as is common practice in many other design disciplines, such as the design of buildings. Specifically, this research


extends the previous work on the ATAM [10,15] and the CBAM [16] to incorporate requirements negotiation techniques from WinWin [4,5,14], forming an integrated method to assure architecture design quality.

The ATAM has been successful in the early identification of technical architectural risks, as described in [10,15] and elsewhere. Other architecture analysis methods exist [11,19], and they report similar successes and results. But over and over again, when using the ATAM to analyze an architecture for its technical tradeoffs, software architects have also been asked to consider issues of cost, schedule, benefits, and alignment with user requirements. These issues were beyond the scope of the ATAM. Through these experiences using the ATAM on large projects in real-world organizations, it became clear that software architecture design must address the economic aspects (cost, benefits, uncertainty) of architecture strategies (ASs) to solidify design alternatives, thus facilitating design decisions before implementing them. The CBAM was born out of this critical need.

Furthermore, through these field action research [1,2] studies, using first the ATAM and then the CBAM, it was found that software architects were constantly challenged by the need to address the different, conflicting requirements (goals and constraints) of various stakeholders who have different roles, responsibilities, and priorities. In addition to needing a formal method to collect, explore, understand, specify and prioritize the requirements that govern the selection of design alternatives, the architects need a mechanism to gauge how well their architecture strategies satisfy those requirements. This assessment can provide a critical input in the feedback loop to a final architecture design. Most importantly, the method must support the negotiation of requirements. Many software projects have failed because their requirements were poorly negotiated among stakeholders. The critical importance of requirements negotiation has been noted many times in our case studies and is reflected in many software researchers' comments:

"How the requirements were negotiated is far more important than how the requirements were specified" (Tom De Marco, ICSE 96)

"Negotiation is the best way to avoid 'Death March' projects" (Ed Yourdon, ICSE 97)

"Problems with reaching agreement were more critical to my projects' success than such factors as tools, process maturity, and design methods" (Mark Weiser, ICSE 97)

Attempts to bring requirements and architecture closer together abound in software engineering methodologies. The emphasis on incremental development and tight iterations found in such diverse methodologies as the Rational Unified Process [18] and Extreme Programming

[3] is evidence that software engineering researchers and practitioners are aware of this need. However, just bringing requirements and architecture closer together is insufficient: requirements negotiation without an understanding of the economic and technical implications of software architecture decisions also poses threats to the validity of the negotiated requirements. Our prior case studies, using a requirements negotiation method called WinWin, provided insights into this problem. The WinWin negotiation model [5,8], developed by the USC Center for Software Engineering, provides a comprehensive general framework for successful requirements negotiation [6,7,13,14]. In WinWin, using Theory W ('everyone is a winner'), stakeholders begin by eliciting their win conditions, identifying issues/conflicts, generating options to resolve these issues, negotiating options, and reaching agreements.

To address the problems observed separately in these action research cases for each of the two methods, namely CBAM and WinWin, we formulated an integrated method called WinCBAM. The integration of these two methods is not simply a matter of coupling them together. When integrating WinWin techniques with CBAM techniques, we ensure that stakeholders can iteratively explore, evaluate and negotiate design/implementation alternatives to reach agreement at the design stage, before actual implementation. In addition, we illustrate how WinCBAM can make requirements negotiation more informed, and do so at an early enough stage that it is not costly or risky. We design WinCBAM to explicitly link the process of choosing architectural strategies to requirements, to ensure that stakeholders' win conditions (functionality, performance, cost, schedule, etc.) are met by the chosen implementation.

Section 2 describes the separate methods that were integrated to form WinCBAM. Section 3 describes the proposed WinCBAM method. Section 4 demonstrates the use of the method via a retrospective case study. Sections 5 and 6 present future research challenges and conclusions.

2. Context for the work

The WinCBAM method exists within a rich context of architecture and requirements analysis research—primarily the ATAM, CBAM, and WinWin methods—and each of these contributes crucially to the ideas presented here.

2.1. CBAM (cost-benefit analysis method)

The ATAM [10,15] uncovers the architectural decisions made in a software project and links them to business goals and QA (quality attribute) response measures. The CBAM [16] builds on this foundation by additionally determining the costs, benefits, and uncertainties associated with these decisions. Given this information, the stakeholders then have a sufficient basis for making business decisions regarding how


to address their important QA response goals. For example, if they felt that the system's reliability was not sufficiently high they could use the ATAM/CBAM methods to decide whether to use redundant hardware, checkpointing, or some other architectural decision aimed at increasing the system's reliability. Or the stakeholders can choose to invest the project's finite resources in some other QA, perhaps believing that higher performance will have a better benefit/cost ratio. A system always has a limited budget for creation or upgrade, and so every architectural choice is, in some sense, competing with every other for inclusion; this competition is multi-dimensional, encompassing several quality attributes as well as cost and schedule. The CBAM is a decision framework. It does not make decisions for the stakeholders; it simply aids them in the elicitation and documentation of costs, benefits, and uncertainty and gives them a rational process for making choices among competing options.

When an ATAM is completed, the following artifacts are documented:

• a description of the business goals that are most important for the success of the system
• a set of architectural views that document the existing or proposed architecture(s), along with the important architectural approaches that dominate the system's existing design
• a utility tree, which represents a decomposition of the stakeholders' goals for the architecture. The utility tree starts with high-level statements of QAs, decomposes these into specific instances of performance, availability, etc. requirements, and realizes these as scenarios with specific stimuli and response measures. The utility tree is similar in its aims to the Goal/Question/Metric approach [21], but it is focused on a single architecture and specifically on the architecture's QA requirements and measures, rather than on an entire program or process
• a set of risks that have been discovered (potentially problematic architectural decisions that have been made, or not made)
• a set of sensitivity points (architectural decisions that affect some QA measure of concern)
• a set of tradeoff points (architectural decisions that affect more than one QA measure, some positively and some negatively)

The CBAM builds upon this foundation of architectural information by probing the architectural strategies (ASs) that are proposed in response to the scenarios, risks, sensitivity points, and tradeoffs. The ASs represent the architect's plans for the evolution of the system.

2.2. WinWin negotiation model

The WinWin model provides a general framework for identifying and resolving requirement conflicts by eliciting and negotiating artifacts such as win conditions, issues, options, and agreements. The WinWin model uses Theory W [8], 'Make everyone a winner', to generate the stakeholder WinWin situation incrementally through the Spiral Model. WinWin assists stakeholders to identify and negotiate issues (i.e. conflicts among their win conditions), since the goal of Theory W involves stakeholders identifying their win conditions, and reconciling conflicts among win conditions. The dotted-line box (Steps 1–3 and 8) shown in Fig. 1 presents the WinWin negotiation portion of WinCBAM. Stakeholders begin by entering their win conditions (Step 1). If a conflict among stakeholders' win conditions is determined, an issue schema is composed, summarizing the conflict and the win conditions it involves (Step 2). For each issue, stakeholders prepare candidate option schemas addressing the issue (Step 3). Stakeholders then evaluate the options, delay a decision on some, agree to reject others, and ultimately converge on a mutually satisfactory (i.e. win-win) option. The adoption of this option is formally proposed and ratified by an agreement schema, including a check to ensure that the stakeholders' iterated win conditions are indeed covered by the agreement (Step 8). Usage experience indicates that WinWin is not a panacea for all conflict situations, but generally increases stakeholders' levels of cooperation and trust [6,14]. Agreement is not always guaranteed. There are often tradeoffs among win conditions that need to be balanced. But in WinWin there is no formal means of assessing the consequences of these tradeoffs. CBAM provides a means to assess and balance these tradeoffs, and a framework for discussion that can lead to a satisfactory resolution.
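To make these negotiation artifacts concrete, the sketch below models win conditions, issues, options, and agreements and the links among them. It is a minimal illustration in Python; the class and field names are ours and are not part of the WinWin tool or its published schemas.

```python
# Illustrative data model for WinWin negotiation artifacts (names are ours, not from the WinWin tool).
from dataclasses import dataclass, field
from typing import List

@dataclass
class WinCondition:
    ident: str            # e.g. "1A"
    stakeholder: str      # who stated it
    statement: str        # what the stakeholder wants

@dataclass
class Issue:
    summary: str                      # description of the conflict (Step 2)
    conflicting: List[WinCondition]   # win conditions the issue involves

@dataclass
class Option:
    description: str                  # candidate resolution (Step 3)
    addresses: Issue

@dataclass
class Agreement:
    adopted: Option                                        # the mutually satisfactory option (Step 8)
    covers: List[WinCondition] = field(default_factory=list)

    def covers_all(self, conditions: List[WinCondition]) -> bool:
        # Check that the iterated win conditions are indeed covered by the agreement.
        return all(c in self.covers for c in conditions)
```

The covers_all check mirrors the Step 8 verification that the agreed option indeed covers the stakeholders' win conditions.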


Fig. 1. The WinCBAM integrated method.


3. The steps of the integrated method

The integrated method, shown in Fig. 1, attempts to interleave the steps of the CBAM with the steps of the WinWin process in a way that mirrors and augments the natural question-and-answer process that is at the heart of requirements negotiation. In the WinCBAM, as in WinWin, we elicit what the stakeholders believe they need, identify conflicts in the needs of the stakeholders, and explore conflict resolution options. But in the integrated process, techniques from the CBAM supplement, and are interleaved with, the WinWin process by systematically evaluating software architecture alternatives as concrete conflict resolution options with properties that are better understood. This additional understanding is achieved by eliciting architectural, schedule, and economic information from selected system stakeholders, which then feeds back into the agreement process or into the further exploration of architectural alternatives.

The integrated WinCBAM process thus transforms the requirements negotiation process into a more scientific 'generate-and-test' process. In this process, requirements are proposed (as hypotheses) and analyzed or tested (as architectural strategies) in a tightly interwoven iterative process. The CBAM thus augments WinWin by adding information to the conflict resolution process. The stakeholders, using the augmented process, can better understand the ramifications of their requirements, in terms of their conflicts with other requirements, their costs, their schedule implications, and their benefits along multiple quality attribute dimensions. With this added information and added understanding, the stakeholders can make better decisions about their requirements. The steps of WinCBAM shown in Fig. 1 are elaborated in the following sections. A detailed example, a teaching case study based upon an existing CBAM case study, will be presented in Section 4.

Step 1. Elicit win conditions. In this step, each stakeholder identifies their win conditions. This step provides the basis for the identification of ideal project features by the stakeholders.

Step 2. Identify quality attribute conflicts/issues. The lists of win conditions are then reviewed by the stakeholders. This enables the identification of potential conflicts, particularly among quality attribute concerns. The step results in categories of direct conflict, as well as potential conflict.

Step 3. Explore architecture strategies as conflict resolution options. Based upon the quality attribute conflicts and issues generated in Step 2, the stakeholders can now generate conflict resolution options. These options must eventually be realized by some architectural capabilities. These capabilities are called architectural strategies (ASs) in the CBAM. Any given set of ASs will generally possess some characteristics that are preferred by each stakeholder,

and the chosen set of ASs will typically include some balance of capabilities representing the needs of all important stakeholders. Where do such ASs come from? They can come from any number of areas: from the architects' experience, by borrowing from systems that have experienced similar problems in the past, or from repositories of design solutions, such as design patterns [12] or architectural patterns [9,19].

Step 4. Assess quality attribute (QA) benefits. To aid in decision making, the stakeholders now need to determine both the costs and benefits that accrue to the various ASs. Determining costs is a well-established component of software engineering. This is not addressed directly in the CBAM; we simply assume that some method of cost estimation exists and is practiced within the organization. Determining benefits is less well-established, and this is the main task of the CBAM. To determine the benefit of an individual AS, a benefit evaluation function is created. We gauge benefit according to how well an AS supports each of the QA goals, which in turn relate back to the business goals for the system. To do this evaluation, we have a subset of the stakeholders (typically managers, who are in tune with the business goals) assign a quality attribute score (QAScore) to each QA goal. We also ask each stakeholder to briefly describe the particular aspect of the quality attribute that led them to assign this score. The stakeholders share and discuss these brief descriptions and their ties back to the business goals as a means of reaching consensus (if possible) on the QAScores.

Step 5. Quantify the architecture strategies' benefits. We then use these scores to evaluate each of the ASs. Very rarely does an AS affect only a single QA. ASs will have effects on multiple QAs, some positive and some negative, and to varying degrees. To capture these effects, we ask the stakeholders to rank each AS in terms of its contribution (Cont) to each QA on a scale of −1 to +1. A +1 means that this AS has a substantial positive effect on the QA (for example, an AS under consideration might have a substantial positive effect on performance) and a −1 means the opposite. Given this information, each AS can now be assigned a computed benefit score, ranging from −100 to +100, using the formula given below. This formula simply takes the product of each AS's contribution to a quality attribute and that attribute's score and sums these over all quality attributes of interest:

Benefit(AS_i) = Sum_j (Cont_ij × QAScore_j)

This score allows us to rank the benefit of every AS that has been proposed. But clearly this benefit evaluation will have some uncertainty. We capture this uncertainty by recording the variation in stakeholder judgements. In the CBAM we use Kendall's concordance coefficient for


the group as a whole as a measure of the uncertainty of the group, as described in [16]. The more highly correlated the group, the higher the concordance coefficient and hence the lower the uncertainty. Along with each Cont score the architects provide a brief (typically one line) explanation of why they believe this AS supports or erodes a particular QA. These short statements serve to ensure that the architects mean the same thing by their judgements. Once we have done this we can safely conclude that any remaining variation in their Cont scores is attributable to true uncertainty and not to differences of interpretation of the AS.

Step 6. Quantifying the architecture strategies' cost and schedule implications. Now that the stakeholders have estimated the benefits, we must capture two other crucial pieces of information about the various ASs: their costs and their schedule implications. We propose no special cost estimation technique here (although we do think that cost estimation methods that take architecture into account are a desirable and inevitable improvement to existing methods). We assume that an organization has some method in place (even if it is ad hoc) for estimating the costs of implementing new services and features. We simply need to capture these estimates, as they are associated with each AS.

Step 7. Calculate desirability. Given this information, we are in a position to calculate a 'Desirability' metric, as follows:

Desirability(AS_i) = Benefit(AS_i) / Cost(AS_i)

High values for this metric are indicators of those ASs that will bring high benefit to the organization at relatively low costs. In addition to calculating this metric, the absolute benefit and cost numbers need to be considered, as does the magnitude of the uncertainty surrounding all of these numbers, as discussed in [16].

Step 8. Reach agreements. At this point the requirements negotiation among the stakeholders can begin in earnest. This negotiation will now be informed, rather than simply a matter of opinion. The costs, benefits, and uncertainty of each of the ASs associated with each requirement will be plain for each stakeholder to see, as well as the schedule implications and dependencies (if any). And these ASs can be tied back to the business goals, and hence the win conditions, of each of the stakeholders. What results is much less an argument than a discussion about business goals, priorities, risk averseness, and the assumptions underlying the judgements. Stakeholders may still disagree about what direction to take the architecture and the system, but they will do so based upon a large base of facts and accumulated evidence, and such disagreements can be more easily moderated and resolved than arguments that are simply based upon opinion or prejudice. This makes the win conditions more easily evaluable.
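The scoring arithmetic of Steps 5–7 is small enough to sketch directly. The Python fragment below is illustrative only (the function and variable names are ours); it computes the benefit score from contribution and QAScore values, derives desirability from benefit and cost, and computes Kendall's coefficient of concordance from its textbook definition as the agreement measure mentioned above.

```python
# Illustrative sketch of the CBAM scoring arithmetic (Steps 5-7); names are ours.
from typing import Dict, List

def benefit(cont: Dict[str, float], qa_score: Dict[str, float]) -> float:
    # Benefit(AS_i) = Sum_j (Cont_ij * QAScore_j)
    return sum(cont[qa] * qa_score[qa] for qa in qa_score)

def desirability(benefit_value: float, cost: float) -> float:
    # Desirability(AS_i) = Benefit(AS_i) / Cost(AS_i)
    return benefit_value / cost

def kendalls_w(rankings: List[List[int]]) -> float:
    # Kendall's coefficient of concordance for m raters each ranking n items
    # (untied ranks): W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared
    # deviations of the per-item rank sums from their mean. W near 1 means high
    # agreement (low uncertainty); W near 0 means little agreement.
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

Under this reading, an AS scored +1 on every attribute would reach the maximum benefit of 100, since the QAScores are distributed so that they sum to 100.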


4. Earth observing system: a teaching case study

The following application of the integrated method is based on work with the Earth Observing System Data Information System Core System (NASA's ECS) and is presented as a teaching case study, illuminating the steps of the WinCBAM method and its application to a real-world system. The first case study discussing the application of CBAM to this system was reported in [16]. This CBAM exercise was performed for NASA's Goddard Space Flight Center (GSFC) in 2000. We have since performed a second round of analysis with NASA's GSFC. The ECS is a constellation of satellites, as well as other kinds of land- and air-based sensors, whose collective mission is to gather data about the earth's climate. This information fuels the US Global Change Research program and supports other scientific communities throughout the world. It gathers gigabytes of data per day, 24 h per day, 365 days a year.

4.1. Executing the steps of the method

Step 1. Elicit win conditions. The stakeholders for the ECS fall into four broad categories: data center managers (e.g. Earth Resources Observation System Center, Goddard Space Flight Center, Langley Research Center, National Snow and Ice Data Center), operators, contractors (developers and integrators), and the various science communities who use the data. Each of the stakeholder groups identifies their win conditions, such as their goals, their constraints, and any alternatives that they might consider. Table 1 displays a hypothetical list of such win conditions for three groups of stakeholders: managers, operators, and the science community.

Step 2. Identify quality conflicts/issues. The list of win conditions will be contributed by a large group of stakeholders and will almost certainly involve some conflicts. For instance, the ideal system may be highly available, but creating such a high availability system may be too costly. Or the system might be highly secure but this security will cost too much in terms of run-time performance. Furthermore, there may inherently be conflicts among the win conditions of individual stakeholders. For instance, Stakeholder 1 and Stakeholder 3 want a substantial level of system scalability and availability, which may require excessive maintenance support, potentially conflicting with the win conditions of Stakeholder 2 (or require added cost, potentially conflicting with the win conditions of Stakeholder 1). Table 2 provides a list of direct conflicts (obviously opposing win conditions) as well as potential conflicts (not necessarily conflicts, but with potential for different directions of solution) for the ECS. Each row of Table 2 shows those win conditions with which there may be a clear direct conflict, as well as


Table 1. Win conditions by stakeholders

Stakeholder 1 (Manager)
1A (Scalability): System support 50 sites, ingest from 100 data sources, and electronic distribution to 2000 sites
1B (Cost): Automated operations to minimize operational costs
1C (Schedule): Installed and operating in 6 months

Stakeholder 2 (System operator)
2A (Maintainability): Reduce time to upgrade operating system, database, and archival management COTS by 50% or within 6 months of release, whichever is sooner
2B (Reliability): No system resource held by data inputs or outputs that are failed or suspended for more than 10 min
2C (Operability): Able to serve 1000 concurrent requests through V0 Gateway or MTM Gateway without operations intervention

Stakeholder 3 (Science community)
3A (Performance): Five-fold improvement for search response times for Landsat L-7 searches
3B (Usability): One stop shopping (location transparency of system components and data)
3C (Availability): All data holdings available all the time to all (24×7)

potential conflicts (such as Stakeholder 1's win condition 1A, that the system should be scalable to serve the large number of science community users, which may directly conflict with better maintainability, reliability, and performance at higher operation cost and schedule, and which may also have less direct potential conflicts with operability). Quality, schedule, and cost are often in conflict.

Step 3. Explore architecture strategies as conflict-resolution options. Different architectural solutions may involve different levels of satisfaction with each win condition. In the past, conflicts of requirements were treated at the requirements stage without exploring their architectural implications. In the WinCBAM, we go beyond this, by exploring the architecture strategies that are involved in meeting a requirement, and hence a win condition. Step 3 involves creation of conflict-resolution solutions, called architectural strategies. Architectural strategies (sometimes known as architectural approaches, styles, patterns, or mechanisms) are proposed at this stage, along with a consideration of what quality attributes they address and a description of how these strategies address those attributes.

Table 2. Win condition conflicts

Win condition | Direct conflict | Potential conflict | Cost | Schedule
1A | 2A, 2B, 3A, 3B | 3C | 1B | 1C
2A | 1A, 3B | | 1B | 1C
2B | 1A, 3A | | 1B | 1C
2C | 3A, 3C | 3B | 1B | 1C
3A | 1A, 2C, 2B | 3B, 3C | 1B | 1C
3B | 1A | 2C | 1B | 1C
3C | 2C | 1A, 3A | 1B | 1C

A number of architectural strategies were identified during the initial ATAM architecture presentation meeting and in subsequent meetings. Five of these are presented here:

AS1. Universal Meta-data repository (to support usability, add the ability to give meaning to terabytes of data).
AS2. Distributed data repositories with user profiling and selective redundancy (to accommodate the distribution of the user community, to enhance performance, and for reliability).
AS3. Three-tiered (separate the rules for automatic higher-level data generation and data subscription from data management and storage).
AS4. Client/server (to allow ease of data access by remote users).
AS5. Abstract service layer (to remove dependencies on specific COTS products).

Step 4. Assess quality attribute (QA) benefits. Table 3 provides a list of quality attribute criteria of importance to the three groups of stakeholders in the example. These criteria will be used in two ways: (1) to understand the quality attributes and their relative importance in this step, and (2) to calibrate the meaning of 'good' or 'bad' in Step 5, when individual ASs are ranked with respect to how well they support the various quality attributes. Such a calibration is necessary because quality attributes, by themselves, are too vague and amorphous to support quantitative reasoning. Substituting precise, measurable quality attribute response goals provides a means of understanding the QA and a target by which to measure the relative merits of the proposed ASs [10].

Even when the stakeholders agree upon the precise meaning of each quality attribute and agree upon how they will be measured, they will likely want to assign different relative weights of importance for these criteria. For example, one stakeholder will be focused primarily on operability while another will mainly be concerned with reliability or performance. To reflect this spectrum of concerns, each stakeholder is asked to distribute a total of 100 points across all of the quality attributes. These weights are then discussed among the group of stakeholders and a consensus value is eventually reached. An example, along with a consensus set of weights assigned by the manager after discussion with the group, is given in Table 4. The stakeholders who participate in this step are typically limited to those who can assess the business implications of the various quality attributes.


Table 3. Quality attribute criteria

Criteria | Min value (−1) | Max value (+1)
Performance | L-7 search with 100 hits under normal operations, result in 3 min | L-7 search with 100 hits under normal operations, result in 30 s
Scalability | System support 5 sites, ingest from 10 data sources, and distribution to 200 sites | System support 50 sites, ingest from 100 data sources, and distribution to 2000 sites
Maintainability | No reduction to upgrade time | Upgrade time is reduced by 50%
Availability/reliability | System is available at least 70% of the time; less than 100 min of down times | System is available at least 99.999% of the time; less than 10 min of down time
Usability | 10 site visits to collect all necessary data | 1 site visit to collect all necessary data
Operability | 100 concurrent requests can be served | 1000 concurrent requests can be served
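The min and max response values above anchor the worst and best cases for each quality attribute. One way to operationalize this calibration, which the method description leaves open, is to map an expected response linearly onto the −1 to +1 scale used for the contribution scores; the helper below is a hypothetical sketch of that mapping, not part of CBAM as published.

```python
# Hypothetical helper (not part of CBAM as published): linearly map an expected
# QA response onto the [-1, 1] scale anchored by the min (-1) and max (+1) values
# of Table 3. Pass the worst-case response as min_value and the best-case as
# max_value; this convention also handles attributes where smaller is better.
def calibrate(response: float, min_value: float, max_value: float) -> float:
    scaled = -1.0 + 2.0 * (response - min_value) / (max_value - min_value)
    return max(-1.0, min(1.0, scaled))   # clamp to the scale

# Example: availability of 95% against anchors of 70% (worst) and 99.999% (best)
# yields roughly 0.67 on the calibrated scale.
print(calibrate(95.0, 70.0, 99.999))
```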

Table 4. Relative quality attribute weights (QAScores)

Quality attribute | Stakeholder 1 | Stakeholder 2 | Stakeholder 3 | Consensus value
Performance | 10 | 5 | 20 | 10
Scalability | 40 | 15 | 10 | 25
Maintainability | 15 | 30 | 0 | 15
Availability/reliability | 15 | 20 | 30 | 20
Usability | 5 | 10 | 30 | 15
Operability | 15 | 20 | 10 | 15
Total | 100 | 100 | 100 | 100

Step 5. Quantify the architectural strategies' benefits. The next step in the combined process is to identify the relative benefit of each of the ASs with respect to each of the quality attributes, by assigning Cont (Contribution) scores. Each participating stakeholder assigns one Cont score to each (AS, QA) pair. In this step the stakeholders who are permitted to vote are restricted to architects and quality attribute experts. Table 5 is an example of an architect's scoring with respect to our chosen five ASs. Each stakeholder who is able to evaluate the ASs will likely assign a different set of scores. After an initial round of creating the Cont scores, these values, and the rationale behind them, are discussed by the stakeholders, with the goal being to reach a consensus view. Any remaining variation in the Cont scores after this exercise is viewed as true uncertainty regarding the value of the individual ASs.

By multiplying each of the stakeholders' QAScores (from Table 4) by their Cont scores (exemplified in Table 5), a numeric benefit estimate can be calculated, as shown in Table 6. The consensus scores are not the only or even the most important result of this step. The uncertainty surrounding each of these consensus values must also be calculated (the amount by which the benefit score varies, according to the variation in stakeholder judgements of both QAScores and Cont scores). The amount of uncertainty will have a large influence on whether an AS is chosen for implementation or not. It is a measure of the risk inherent in the ASs. This point is discussed further in [16].

Step 6. Quantifying the architecture strategies' cost and schedule implications. A similar process needs to be applied to the costs and schedule implications of the ASs. Cost estimates must be provided for both the development and installation of each AS. Table 7 first shows the schedule estimates for the five ASs under consideration. These schedule

estimates need to take into account any dependencies among the ASs and any contention for shared resources. Table 8, taking these schedule implications into account, provides cost estimates for each of the ASs, adjusted by a schedule penalty that reflects the value to the company of completing the project in more than 3 months, as well as by their differential operating costs.

Step 7. Calculate desirability. The calculation of Desirability (from Table 9) is simply to divide the consensus benefit estimates (from Table 6) by the estimated costs (from Table 8). These results show substantial differences in the Desirability of the five architectural strategies. The highest ranked, AS1, has triple the Desirability of AS5. But these rankings tell us several other things as well about the requirements process, and these results are precisely the point of having the WinCBAM be an integrated process. First of all, they point out a previously unknown requirements conflict between 2A (Maintainability) and 3A (Performance). This conflict accounts for the low Benefit (and hence low Desirability) of AS5. Similarly, these scores show that a perceived requirements conflict between 1A (Scalability) and 3A (Performance) does not, in fact, need to be a conflict when realized by the appropriate architectural strategy. AS1 satisfies both requirements, and does so at a reasonable cost, resulting in a high Desirability rating of 0.095. Finally, these rankings may help to prioritize requirements one against the other. Resolving requirements conflicts in the abstract is difficult, but resolving them when faced with a set of criteria that ranks them according to their cost, benefit, and schedule is far easier.

Table 5. Architectural strategies' contribution scores

Quality attribute | AS1 | AS2 | AS3 | AS4 | AS5
Performance | 0.4 | 0.7 | −0.3 | 0.6 | −0.4
Scalability | 1.0 | 0.8 | 0.4 | 0.6 | 0.2
Maintainability | 0.3 | −0.3 | 0.8 | 0.5 | 0.8
Availability/reliability | −0.2 | 0.6 | −0.3 | 0.2 | −0.1
Usability | 0.9 | −0.1 | 0.2 | 0.4 | 0.0
Operability | 0.6 | −0.2 | 0.5 | 0.1 | 0.0


Table 6. Consensus benefit results

AS | Stakeholder 1 | Stakeholder 2 | Stakeholder 3 | Consensus
AS1 | 46 | 60 | 50 | 52
AS2 | 32 | 22 | 36 | 30
AS3 | 31 | 21 | 28.5 | 23.5
AS4 | 54 | 41 | 25 | 40
AS5 | 8 | 12 | 13 | 11
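One simple way to see the uncertainty discussed in Step 5 is to look at how far the individual stakeholder estimates in Table 6 spread around the consensus. The sketch below is illustrative only; the CBAM's own uncertainty measure is the concordance-based analysis described in [16].

```python
# Per-AS spread of the stakeholder benefit estimates in Table 6 (illustrative;
# the CBAM's published uncertainty measure is the concordance analysis of [16]).
stakeholder_benefits = {
    "AS1": [46, 60, 50],
    "AS2": [32, 22, 36],
    "AS3": [31, 21, 28.5],
    "AS4": [54, 41, 25],
    "AS5": [8, 12, 13],
}
for strategy, estimates in stakeholder_benefits.items():
    spread = max(estimates) - min(estimates)
    print(strategy, spread)   # AS4 shows the widest disagreement (29), AS5 the narrowest (5)
```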

Consider requirements 2C (Operability) and 3C (Availability). These are judged by the stakeholders to be in conflict. When realized by architectural strategies AS1 and AS2 they still appear to be in conflict. However, when we examine AS1 and AS2 we see that they are substantially different in terms of how they are realized. AS1 is less expensive than AS2 and has a far higher desirability rating (more than double that of AS2). It also has a longer predicted schedule. If this set of tradeoffs is acceptable to the stakeholders then an apparent requirements conflict can be resolved in an informed manner. Note that this consideration of costs, benefits, and schedules turns a discussion of requirements (what should be included in the system from a functional and perhaps quality attribute perspective) into a discussion of all of the business issues surrounding a system's life-cycle. This is a significantly different approach to the problem of requirements negotiation, and shows why the integrated method is a powerful decision support tool.

Step 8. Reach agreement. The WinCBAM is not, by itself, a decision-making tool. It is a decision-support tool, helping to structure and focus the discussion of requirements by showing the participants the implications of their requirements, in terms of their realization as ASs. This translation of requirements into architecture, accompanied by cost and benefit information, helps lead to informed decisions. These decisions should aid the organization in better utilizing its resources. It is well accepted that a change or a fault in requirements that is not caught until implementation time is one or two orders of magnitude more costly to correct than if it were caught at the requirements definition phase. So any means of making the requirements process more efficient and more accurate provides a great return on investment. NASA GSFC management has already reported an improvement in their requirements and planning processes as a result of marrying requirements with the CBAM.

Table 7. AS schedule implications

Architectural strategy | Implementation schedule (months)
AS1: Meta-data | 7
AS2: Distributed data repository | 5
AS3: Three-tiered layers | 6
AS4: Client-server | 3
AS5: Abstract service layer | 7

Table 8. Estimated AS costs (in $1000)

AS | Installed cost ($) | Schedule penalty ($) | Operating cost ($) | Total cost ($)
AS1 | 300 | 150 | 100 | 550
AS2 | 450 | 50 | 150 | 650
AS3 | 250 | 100 | 100 | 450
AS4 | 300 | 0 | 200 | 500
AS5 | 200 | 150 | 0 | 350

Table 9. Desirability results

AS | Desirability
AS1 | 0.095
AS2 | 0.046
AS3 | 0.052
AS4 | 0.08
AS5 | 0.031
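As a cross-check on the case-study figures, the consensus column of Table 6 and the Desirability values of Table 9 can be reproduced from the consensus QAScores of Table 4, the contribution scores of Table 5, and the total costs of Table 8. The short Python sketch below simply transcribes those tables (the dictionary names are ours) and reruns the Step 5 and Step 7 arithmetic.

```python
# Transcription of Tables 4, 5, and 8, used to recompute the Table 6 consensus
# column and Table 9. Names and structure are ours, for illustration only.
qa_consensus = {"perf": 10, "scal": 25, "maint": 15, "avail": 20, "usab": 15, "oper": 15}

cont = {  # Table 5: contribution of each AS to each quality attribute
    "AS1": {"perf": 0.4, "scal": 1.0, "maint": 0.3, "avail": -0.2, "usab": 0.9, "oper": 0.6},
    "AS2": {"perf": 0.7, "scal": 0.8, "maint": -0.3, "avail": 0.6, "usab": -0.1, "oper": -0.2},
    "AS3": {"perf": -0.3, "scal": 0.4, "maint": 0.8, "avail": -0.3, "usab": 0.2, "oper": 0.5},
    "AS4": {"perf": 0.6, "scal": 0.6, "maint": 0.5, "avail": 0.2, "usab": 0.4, "oper": 0.1},
    "AS5": {"perf": -0.4, "scal": 0.2, "maint": 0.8, "avail": -0.1, "usab": 0.0, "oper": 0.0},
}

total_cost = {"AS1": 550, "AS2": 650, "AS3": 450, "AS4": 500, "AS5": 350}  # Table 8, in $1000

for strategy, scores in cont.items():
    benefit = sum(scores[qa] * weight for qa, weight in qa_consensus.items())
    print(strategy, round(benefit, 1), round(benefit / total_cost[strategy], 3))
# Output: AS1 52.0 0.095, AS2 30.0 0.046, AS3 23.5 0.052, AS4 40.0 0.08, AS5 11.0 0.031,
# matching the consensus column of Table 6 and the Desirability values of Table 9.
```

Recomputing the published values in this way is a useful sanity check when transcribing or extending the tables.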

4.2. Discussion

The results of the method cannot and should not simply be adopted by the stakeholders as a means of deciding which requirement (and eventually which AS) to pursue. The most important result of the WinCBAM integrated process, as this case study illustrates, is that its outputs feed back into the discussion of the requirements and their realization as architectural strategies, making these decisions more informed. These insights can help to identify differences in the requirements of importance to each group member. To reiterate, the process of reaching agreement in the CBAM by itself is limited to deciding which ASs to pursue (by investing in them and implementing them). The process of reaching agreement in WinWin is limited to finding requirements choices that are acceptable to all stakeholders. In the WinCBAM method, these decision-making processes are extended and integrated. We not only aid in the negotiation surrounding the choice of the ASs, but we also provide information that can cause the stakeholders to reflect back on the original requirements that led to those ASs.

5. Challenges for the future

The WinCBAM integrated decision framework offers useful tools to aid the stakeholder negotiation process from requirements to architecture evaluation. However, there are a number of challenges for further refinement of the methodology.

5.1. Exploration of architecture strategies

In relating the issues/conflicts (identified in Step 2) to the exploration of architecture strategies as conflict-resolution options (Step 3), we are faced with the challenge of formalizing the modeling of dependencies between these


artifacts. In particular, we found that there are dependencies among ASs. We are also examining whether several ASs should be explored for each issue/conflict, for a set of issues/conflicts, or for all issues. We are currently experimenting with clustering conflicting issues and are exploring cross-impact (or dependency) analysis, which would identify clusters of stakeholder positions.

5.2. Sensitivity to uncertainty in benefit and cost values

Objectively quantifying the benefits and costs of architecture strategies (Steps 5 and 6) is a challenging problem. For example, even though we can quantify costs with popular cost estimation tools, the accuracy of these estimates is limited. Thus, the uncertainty in the models must be included when we try to make decisions. So, we need robust techniques for teasing apart the uncertainty that is inherent in the problem and the uncertainty that grows as a result of different understandings of the problem and its potential solutions. We are currently investigating the use of Delphi techniques to address this issue.

5.3. Reaching agreement on desirability results

WinCBAM structures stakeholder interaction and discussion. However, it does not make decisions. One way to extend Step 8 is to simply let an arbitrator (e.g. a responsible manager) make the decision. The use of a group support system gives stakeholders the opportunity to provide their input and facilitates a wide variety of decision-making models. That alleviates some of the apparent arbitrariness perceived in dictatorial decisions.

5.4. Empirical validation

A method such as the WinCBAM, aimed at complex, large-scale, real-world systems, does not lend itself to easy validation. So a substantial research issue for us is to determine whether we can design and run experiments that demonstrate the efficacy and efficiency of the WinCBAM. Certainly it is possible to survey users of the method to determine their reactions, but it is an open question as to whether anything more precise and quantifiable can be extracted.

6. Conclusions

In this paper, we have introduced an integrated framework for coordinating architectural decisions with requirements negotiation and presented a teaching case study that illustrates how the integrated framework works. The WinCBAM method combines a top-down and a bottom-up approach to making crucial system development decisions early in the life-cycle. Adding WinWin to the CBAM provides substantial synergies compared with


standalone requirements negotiation models [20] or software architecture evaluation methods [10,11,15,17,19]. It enables more powerful analysis of software and system architectures based upon their costs and benefits: architectural alternatives (or strategies) can be evaluated based on requirements that multiple stakeholders, who have different roles, responsibilities, and priorities, elicit, explore, and negotiate. It also facilitates requirements negotiation in a more systematic way: uncertainty raised in requirements negotiation can be clarified through architecture alternative exploration, evaluation, and negotiation.

The main contribution of this paper is in bridging two research domains and integrating the two methods. Requirements documents and architectural design documents are typically analyzed, if at all, as distinct entities using unrelated processes. Using the integrated WinCBAM method we can resolve requirements conflicts by relating them to implementation alternatives that have attendant costs, benefits, schedule implications, and uncertainties. We can also find conflicts in requirements that the stakeholders did not realize. Finally, we can help to inform the entire requirements debate by providing more criteria upon which to evaluate alternatives.

Acknowledgements This work is partially supported by funding from Korea University and funding from NASA JPL under the contract C00-00443 with Texas A&M University. We would like to thank Dr Barry Boehm, Dr David Olsen, Mark Klein, and Jai Asundi for their helpful discussions on this topic.

References [1] D. Avison, F. Lau, M. Myers, P. Nielsen, Action research, Communications of the ACM 42 (1) (1999) 94–97. [2] R. Baskerville, M. Myers, Special issue on action research in information systems: making is research relevant to practice— foreword, MIS Quarterly 28 (3) (2004) 329–335. [3] K. Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley, Reading, MA, 1999. [4] B. Boehm, P. Bose, E. Horowitz, M. Lee, Software requirements as negotiated win conditions, Proceedings of the First International Conference on Requirements Engineering, Colorado Springs, CO, 1994. [5] B. Boehm, P. Bose, E. Horowitz, M. Lee, Software requirements negotiation and renegotiation aids: a theory-w based spiral approach, Proceedings of the 17th International Conference on Software Engineering, Seattle, WA, 1995, pp. 243–253. [6] B. Boehm, A. Egyed, D. Port, A. Shah, J. Kwan, R. Madachy, Using the WinWin spiral model: a case study, IEEE Computer 1998; 33–44. [7] B. Boehm, H. In, Identifying quality-requirement conflicts, IEEE Software 13 (2) (1996) 25–35. [8] B. Boehm, R. Ross, Theory W software project management: principles and examples, IEEE Transactions on Software Engineering 1989; 902–916.


[9] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, M. Stal, A System of Patterns: Pattern-Oriented Software Architecture, Wiley, West Sussex, England, 1996. [10] P. Clements, R. Kazman, M. Klein, Evaluating Software Architectures: Methods and Case Studies, Addison-Wesley, Reading, MA, 2001. [11] H. de Bruin, H. van Vliet, Scenario-based generation and evaluation of software architectures, in: J. Bosch (Ed.) Generative and Component-Based Software Engineering, Proceeding of Third International Conference, Erfurt, LNCS 2186, Springer, Berlin, 2001, pp. 128–139. [12] E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns, Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, MA, 1995. [13] P. Gruenbacher, R.O. Briggs, Surfacing tacit knowledge in requirements negotiation: experiences using EasyWinWin, Proceedings of the 34th Hawaii International Conference on System Sciences, 2001. [14] H. In, B. Boehm, T. Rodgers, M. Deutsch, Applying WinWin to quality requirements: a case study, Proceedings of the 23rd International Conference on Software Engineering, Toronto, Canada, 2001, pp. 555–564.

[15] R. Kazman, M. Barbacci, M. Klein, S.J. Carriere, S. Woods, Experience with performing architecture tradeoff analysis, Proceedings of the 21st International Conference on Software Engineering, Los Angeles, CA, 1999, pp. 54–63. [16] R. Kazman, J. Asundi, M. Klein, Quantifying the costs and benefits of architectural decisions, Proceedings of the 23rd International Conference on Software Engineering, Toronto, Canada, 2001, pp. 297– 306. [17] M. Klein, R. Kazman, L. Bass, J. Carriere, M. Barbacci, H. Lipson, Attribute-based architectural styles, Software Architecture (Proceedings of the First Working IFIP Conference on Software Architecture), San Antonio, TX, 1999, pp. 225–243. [18] P. Kruchten, The Rational Unified Process: An Introduction, third ed., Addison-Wesley, Boston, MA, 2004. [19] N. Lassing, P. Bengtsson, H. Van Vliet, J. Bosch, Experiences with ALMA: architecture-level analysis of modifiability, Journal of Systems and Software 2001. [20] B.A. Nuseibeh, S.M. Easterbrook, A. Russo, Leveraging inconsistency in software development, IEEE Computer 33 (4) (2000) 24–29. [21] R. Solingen, E. Berghout, The Goal/Question/Metric Method, McGraw-Hill, New York, 1999.