The Journal of Systems and Software 98 (2014) 79–106
A method to optimize the scope of a software product platform based on end-user features

Hamad I. Alsawalqah, Sungwon Kang, Jihyun Lee ∗

Department of Information and Communications Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, 305-701 Daejeon, Republic of Korea
Article info

Article history: Received 14 November 2013; Received in revised form 18 August 2014; Accepted 20 August 2014; Available online 29 August 2014

Keywords: Product platform scope; Software product line engineering; Commonality decision
Abstract

Context: Due to increased competition and the advent of mass customization, many software firms are utilizing product families – groups of related products derived from a product platform – to provide product variety in a cost-effective manner. The key to designing a successful software product family is the product platform, so it is important to determine the most appropriate product platform scope, related to business objectives, for product line development. Aim: This paper proposes a novel method to find the optimized scope of a software product platform based on end-user features. Method: The proposed method, PPSMS (Product Platform Scoping Method for Software Product Lines), mathematically formulates product platform scope selection as an optimization problem. The problem formulation targets the identification of an optimized product platform scope that will maximize life cycle cost savings and the amount of commonality, while meeting the goals and needs of the envisioned customer segments. A simulated annealing based algorithm that can solve the problem heuristically is then used to help the decision maker select a scope for the product platform, by performing a tradeoff analysis of the commonality and cost savings objectives. Results: In a case study, PPSMS helped in identifying 5 non-dominated solutions considered to be of highest preference for decision making, taking into account both cost savings and commonality objectives. A quantitative and qualitative analysis indicated that human experts perceived value in adopting the method in practice, and that it was effective in identifying an appropriate product platform scope. © 2014 Elsevier Inc. All rights reserved.
∗ Corresponding author. Tel.: +82 42 350 7712. E-mail addresses: [email protected] (H.I. Alsawalqah), [email protected] (S. Kang), [email protected] (J. Lee).
http://dx.doi.org/10.1016/j.jss.2014.08.034
0164-1212/© 2014 Elsevier Inc. All rights reserved.

1. Introduction

Increasingly, many companies are adopting the Software Product Line Engineering (SPLE) approach to improve customization while shortening time to market and reducing costs. SPLE is a software development paradigm that applies the concept of product families to the development of software products and software-intensive systems (Pohl et al., 2005). A product platform is the common basis of all individual products within a product family, from which these products can be derived. The scope of the product platform determines the extent of the commonality to be used by a product family. That is, it must be decided which features should be implemented in the reusable platform common to the whole family (full commonality) or in the variable part of the SPL, and, among the variable part, which features should be reusable (partial commonality) and which features should be developed uniquely for each product. In this paper, we refer to this decision as the Commonality Decision (CD). The key to successful product family engineering is to figure out the total costs of reuse and the benefits of using the platform (Withey, 1996; Schmid, 2002), which are the consequences of the CD. Therefore, the quality of the derived CD has a tremendous impact on the economic benefits that can be accrued from the product line approach. However, determining the optimized CD is challenging and must be handled carefully when designing a family of products.1 Even in a small SPL, feature combinatorics can
1 Once a CD is made, decision makers have to simultaneously assess its impact on the whole family. It must be verified to conform to the myriad constraints and dependencies among features in a SPL. Additional constraints, such as resource constraints (e.g., total budget for the SPL, binary size of individual products), and a variety of customer needs, make it more complex and difficult.
produce an exponential number of possible platform configurations. This makes CD optimization a configuration optimization problem. Previous research has shown that configuration optimization problems within the context of SPLE, such as feature selection optimization with resource constraints, are NP-hard (White et al., 2009; Guo et al., 2011). To overcome the complexity of solving the CD problem, decision makers need methods that are useful for determining an optimized CD. During the past decade, a number of techniques for SPL scoping have been developed (Schmid, 2002; Noor et al., 2008; Inoki and Fukazawa, 2007; John et al., 2006; Bandinelli and Mendieta, 2000; Geppert and Weiss, 2003; Park and Kim, 2005; Rommes, 2003; Taborda, 2004; Kishi et al., 2002; Gillain et al., 2012; Muller, 2011; Ullah et al., 2010a, 2010b). Among these are techniques recently proposed to address the optimization of the product portfolio using computer-based optimization techniques (Gillain et al., 2012; Muller, 2011; Ullah et al., 2010a). Yet the optimization of the CD is not explicitly considered in these techniques. In most existing scoping approaches, solving the CD problem is limited to identification and assessment activities for ranking candidate solutions based on the judgments and intuition of the domain expert (John and Eisenbarth, 2009; de Moraes et al., 2009). Since computer-based optimization techniques have so far not been used to explicitly optimize the CD, it is very hard to determine the optimized CD, especially in realistic SPLs, which may comprise hundreds or thousands of features. To overcome this problem, we introduce the first systematic method for CD optimization based on end-users' features. The proposed method, PPSMS (Product Platform Scoping Method for SPL), mathematically formulates the CD as an optimization problem in abstract form, and develops an optimization algorithm based on a simulated annealing technique.
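To illustrate the kind of algorithm involved, the following is a minimal simulated-annealing sketch over a toy commonality-decision encoding. The encoding, energy function, neighbourhood move, and cooling schedule are illustrative assumptions, not the actual algorithm developed by PPSMS.

```python
import math
import random

def simulated_annealing(init, energy, neighbor, t0=1.0, cooling=0.95, steps=500):
    """Generic SA loop: always accept improvements, and accept worse
    candidates with probability exp(-delta / T), where T cools geometrically."""
    current, best = init, init
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = energy(cand) - energy(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
        if energy(current) < energy(best):
            best = current
        t *= cooling
    return best

# Toy CD encoding: one commonality level per feature
# (0 = product-unique, 1 = partially common, 2 = fully common).
def energy(cd):
    # Reward commonality only; a real model would add cost-savings and
    # constraint terms to the objective.
    return -sum(cd)

def neighbor(cd):
    # Re-draw the commonality level of one randomly chosen feature.
    cd = list(cd)
    cd[random.randrange(len(cd))] = random.choice([0, 1, 2])
    return tuple(cd)

random.seed(0)
best = simulated_annealing((0,) * 8, energy, neighbor)
print(best)  # tends toward the all-common assignment (2, ..., 2)
```

The geometric cooling makes the search behave like hill climbing after the early iterations, which is why a small number of steps often suffices on problems of this shape.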
PPSMS aims to identify the optimized CD that maximizes life cycle cost savings and the amount of commonality, while meeting the goals and needs of the envisioned customer segments. It integrates different types of information, such as customer preferences, the judgments of human experts concerning the commonality and variability analysis, and cost, to generate, validate, and evaluate alternative product platform scopes. Note that there are a huge number of organizational, financial, technical and customer-related criteria that can influence CDs. Given that the objective of SPL development differs from one organization to another, it is neither our intention nor feasible to propose the optimized solution to the CD problem for every company or product line. PPSMS is designed to operate under the assumption that the organization is sensitive to increasing cost savings and to the perceived benefits of increasing commonality. We demonstrate the practicality of the new method with a case study. By following PPSMS, and based on the 262 responses to our survey and human experts' inputs, five non-dominated platform scopes2 were identified. Our empirical results show that the proposed SA-based algorithm, with proper parameter settings and through a relatively small number of iterations, finds solutions with an average optimality gap of 3% in only 1.2% of a deterministic algorithm's running time. We further conducted quantitative and qualitative studies to validate the method and its results. For the quantitative study, we conducted a survey of academic and industry practitioners and a comparative analysis with their suggested platform scopes (based on their judgments). The results of the comparative analysis show that PPSMS improves the product platform scope
2 A solution is called non-dominated if none of the objective functions can be improved in value without degrading some of the other objective values. In this work, the non-dominated solutions are the platform scopes that are not outperformed by any other candidate platform scope in the solution space when the amount of commonality and the cost savings are considered together.
achieved by the practitioners in terms of increasing the amount of commonality and cost savings, at the 0.05 and 0.01 levels of significance, respectively. The results of PPSMS were also rated as "satisfiable" to "very satisfiable" by the practitioners. Moreover, the survey indicates that practitioners would be willing to adopt PPSMS in practice. The qualitative analysis, based on the subjective opinions of four experts, confirms the effectiveness of the method and its capabilities in complementing and expanding upon current scoping capabilities. The specific technical contributions of this paper are:

(1) A decision support method, PPSMS, for optimizing the scope of a software product platform based on end-users' features.
(2) An analytical measure, the Software Commonality Index (SCI), to assess feature commonality, i.e., the amount of feature sharing in a product family.
(3) A customization of existing software product line cost models to estimate the life cycle cost savings of each candidate product platform scope.
(4) Decision models proposed and integrated with existing commonality and variability analysis rules to generate product platform scopes.
(5) A mathematical formulation and the application of Simulated Annealing to support the optimization of the platform scope.
(6) A proof of concept of the PPSMS method using an illustrative case study, and a subsequent quantitative and qualitative analysis to assess the perceptions of practitioners about the method and to demonstrate its effectiveness.

The remainder of this paper is organized as follows: Section 2 presents the research work related to the solution approach. Section 3 presents the method, the optimization's mathematical formulation and the SA-based optimization solver.
Section 4 presents a demonstration of the method on an illustrative case study, demonstrates the validity and effectiveness of the method via quantitative and qualitative analyses, and discusses threats to validity. Section 5 concludes the paper with a summary and an outlook on future research directions.

2. Related work

There are three main research areas related to the proposed method: scoping, commonality measurements, and software product line cost models. In this section, we present a brief overview of works in these areas and their relationship to the proposed method.

2.1. SPL scoping

The main idea of SPLs is to provide high-quality products at low costs by developing similar products via extensive reuse. Therefore, finding commonalities of products and developing common assets that realize these commonalities are important for high reusability (Lee et al., 2010). The activities for scoping a product platform start with a commonality and variability analysis process. Identifying commonality and variability is a basic part of product line scoping (Coplien et al., 1998; Ardis and Weiss, 1997). Product line scoping is an important activity in SPL development: it decides not only which products to include in a product line and which features they offer (product portfolio scoping), as well as which features will be developed for reuse (asset scoping) (Schmid, 2002), but also whether or not an organization should launch the product line (Lee et al., 2010).

2.1.1. Overview of SPL scoping

During the last decade a number of methods and techniques for software product line scoping have been developed (Schmid,
2002; Noor et al., 2008; Inoki and Fukazawa, 2007; John et al., 2006; Bandinelli and Mendieta, 2000; Geppert and Weiss, 2003; Park and Kim, 2005; Rommes, 2003; Taborda, 2004; Kishi et al., 2002; Gillain et al., 2012; Muller, 2011; Ullah et al., 2010a, 2010b). In Noor et al. (2008), a collaborative approach was proposed, based on the WinWin requirements negotiation method, to converge on a product map as well as the definition of a reusable infrastructure. Another work (Inoki and Fukazawa, 2007) presented core asset scoping on the basis of two metrics, the levels of coverage and consistency of core assets, to prioritize the core assets. Schmid and Schank (2000) proposed a tool for performing PuLSE-Eco (Schmid, 2002), the technical component of the PuLSE method that captures aspects related to product line scoping. This tool is called the PuLSE Basic Eco Assistance Tool (PuLSE-BEAT). It supports quantifying the core asset reuse potential with an economic model based on characterizations and benefit functions that reflect the organization's goals. PuLSE-Eco v2.0 does not, however, explicitly use an automated optimization technique to determine the optimized product platform scope. As a customization of the scoping process, in John et al. (2006) the authors present an update of PuLSE-Eco that explicitly integrates 21 customization factors based on their project experiences. Kang et al. (2002) considered marketing and product plans and how they affect product line asset development. Another scoping method is proposed in Geppert and Weiss (2003), where the authors consider scoping as a decision-making activity in which multiple candidates for the scope are evaluated, and the proper one is selected by examining individual optimality vis-à-vis whole-family optimality. The research in Bandinelli and Mendieta (2000) and Geppert and Weiss (2003) is limited to assessing candidate domains and deciding which domains should start a SPL first.
The Quality Function Deployment Product Portfolio Planning (QFD-PPP) method suggested by Helferich et al. (2005) identifies different customer groups and their needs, to systematically derive a product portfolio (i.e., the members of a product line) and common and variable product functions. QFD-PPP elicits the required features of the products of a SPL and asks engineers about technical feasibility. It helps to identify required features and to attach priorities to them; it neither considers cost aspects nor optimizes the CD. Another work (Pohl et al., 2005) presented Kano-method-based portfolio planning, which allows designing a customer-oriented SPL. However, it does not focus on the cost of SPLs or consider platform scoping. According to Muller (2011) and Helferich et al. (2006), most of the existing SPL scoping approaches do not integrate market and technical perspectives. Among them, some are relatively informal, that is, they rely on market analysis techniques (e.g., surveys) to elicit the scope of the product line (i.e., the scoping activity in the Software Product Line Practice Framework (PLP)) (Northrop and Clements, 2007). On the other hand, there are very technical scoping techniques. PuLSE-Eco v2.0, one of the Technical Components of PuLSE™, is one such approach (Schmid, 2002). Recent methods have been introduced to address this issue. COPE+, proposed by Ullah et al. (2010a), uses the preferences of customers on product features to generate multiple product portfolios, each containing one product variant per customer segment. However, although it generates solutions by considering both the business and the technical perspectives, it targets a specific evolution scenario, in which a single evolving software system has evolved into a product line. Specifically, it aims at optimizing the trade-off between the required changes within a product and the features desired by customers, based on structural analysis of the evolving software system.
Another method, called OPTESS, was proposed in Ullah et al. (2010b) with the aim of finding the most appropriate product portfolio among a given set of candidate SPL portfolios, to maximize the business value and minimize the technical risk.
2.1.2. Optimization in SPL scoping

Several surveys and analyses of SPL scoping have been performed (John and Eisenbarth, 2009; de Moraes et al., 2009). They highlight that the majority of existing SPL scoping approaches focus on commonality and variability, except the older ones, which focus only on commonality (Bandinelli and Mendieta, 2000; Fritsch and Hahn, 2004). The commonality and variability processes in those approaches are performed based on the experience and intuition of the domain expert. In addition, scoping optimization has only been partially addressed. Current approaches include some optimization activities for product portfolios, but no optimization of the platform scope, which is limited to identification and assessment activities for ranking candidate solutions, without the explicit use of optimization techniques. Recently, Muller (2011) supported portfolio scoping approaches with a technique called value-based portfolio optimization, to help in deciding which features are most important to realize in the target SPL. He assumed that the configured products can be implemented by various sets of asset components. Gillain et al. (2012) suggested using a mathematical program to optimize the SPL scope, similar to Muller (2011). In addition, their model introduced time considerations, with the goal of setting development priorities and release planning, and integrated the optimization of the assets scope with the portfolio scope. While Gillain et al. (2012), Muller (2011) and Ullah et al. (2010b) were able to determine whether a feature should be developed as a core asset or exclusively for the product(s) (asset scoping), they did not decide which features can be made common and which should remain variable in order to enhance the quality of the derived product platform. They do not explicitly address commonality and variability analysis in their models or the optimization of the CD.
Moreover, they have dealt with only one important contributor to the overall cost of the SPL, namely development cost. Considering that evolution and maintenance costs usually exceed development costs, the restriction to development costs might not be realistic. In summary, the literature lacks a method that uses computer-based optimization and integrates business and technical aspects to explicitly optimize the CD. Our proposed PPSMS method addresses these problems by formulating product platform scoping mathematically as an optimization problem, and then suggesting an algorithmic tool to optimize the product platform scope. Our formulation combines both business aspects (e.g., customer preferences) and technical aspects (e.g., judgments of domain experts concerning the commonality and variability analysis, development cost, evolution cost) to guide the optimization process.

2.2. Commonality measurements

The use of metrics can be an effective way to support a decision making process. Metrics can help optimize the scope and determine the costs associated with different SPL designs. The authors in de Moraes et al. (2009) identified the lack of metrics as a shortcoming of the majority of existing scoping approaches. Very few existing scoping approaches define metrics (Schmid, 2002; Inoki and Fukazawa, 2007; Park and Kim, 2005). Some of these approaches perform economic benefit analysis to define the products in the SPL, such as PuLSE-Eco (Schmid, 2002), but still do not provide metrics to assess candidate scopes based on the amount of commonality. The CD occurs early in the development process and inherently affects all subsequent software development. Therefore, for some organizations, it is critical to have an analytical measure of the effect a given CD will have on commonality, in addition to the metrics used in Schmid (2002).
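To illustrate what such an analytical measure could look like, the sketch below computes a hypothetical weighted commonality score in which partially common features contribute in proportion to the number of products sharing them. The weighting scheme is an assumption for illustration only, not necessarily the SCI proposed by this paper.

```python
def commonality_index(features):
    """Hypothetical weighted commonality score for a product family.

    `features` maps a feature name to (weight, n_products_sharing, n_products).
    A fully common feature contributes its full weight; a partially common
    feature contributes its weight scaled by the share of products using it;
    unique features (shared by fewer than two products) contribute nothing.
    """
    total = sum(w for w, _, _ in features.values())
    shared = sum(w * (n / m) for w, n, m in features.values() if n >= 2)
    return shared / total if total else 0.0

family = {
    "f1": (3.0, 4, 4),  # fully common across all 4 products
    "f2": (1.0, 2, 4),  # partially common: shared by 2 of 4 products
    "f3": (2.0, 1, 4),  # unique to a single product
}
print(round(commonality_index(family), 3))  # 0.583
```

The per-feature weight is the "flexible weighting parameter" idea: an expensive feature can be given a larger weight so that making it common moves the index more than making a cheap feature common.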
In the literature, to the extent of our knowledge, only very few preliminary studies defining suitable commonality metrics have been conducted (Her et al., 2007; Berger et al., 2010; Capra and Francalanci, 2006; Peterson, 2004; Peña et al., 2007). Some of them measure commonality at the feature level (Her et al., 2007; Peña et al., 2007) while others measure commonality at the component level (Poulin, 1997). However, none of these attempts consider the differences among features or components while measuring commonality; for example, some components cost much more than others, and the impact of such differences on commonality is not considered. Furthermore, the work presented in Berger et al. (2010) attempts to measure the degree of commonality at the component level, but there only 'fully common components' (common among all the products) contribute to the amount of commonality, while 'partially common components' (common among some products) do not contribute at all. The other works proposed in software reuse (Poulin, 1997; Poulin and Caruso, 1993) are limited in evaluating the concept of commonality in SPL; their metrics are rather oriented toward traditional reuse. The commonality metric proposed in this paper measures commonality at the feature level. In our metric, not only the fully common features but also the partially common features contribute to the measured amount of commonality. Also, our metric incorporates a flexible weighting parameter to capture the differences among features while measuring commonality.

2.3. Software product line cost models

Since it is apparently attractive for companies to invest in establishing a product line, several cost models that estimate and analyze the costs associated with software product line development have been proposed and assessed in the past. These cost models have various levels of detail (Pohl et al., 2005). At one end of the spectrum, there are models for making detailed cost estimations. An example of such models is the Constructive Product Line Investment Model (COPLIMO) (Boehm et al., 2004).
COPLIMO makes detailed estimations considering the cost of initial SPL development, plus the costs and revenues from future extensions of the product line, but it is time consuming. To extend the COPLIMO model, qCOPLIMO has been proposed to evaluate the additional benefits of higher quality in product line development (Baik et al., 2006). At the other end of the spectrum, there are cost models which work at an abstract level with less accuracy, but with higher speed than the more detailed models. The Structured Intuitive Model for Product Line Economics (SIMPLE) is one such model (Böckle et al., 2004a). SIMPLE is one of the well-known cost models in the literature that determine cost at an abstract level (Pohl et al., 2005; Ullah et al., 2010b). SIMPLE provides four cost functions (Corg( ), Ccab( ), Creuse( ), and Cunique( )) that can be combined in a number of scenarios covering several possibilities of evolving and initiating a SPL (Böckle et al., 2004b). However, those cost functions return costs for the product line as a whole. For example, Ccab( ) returns how much it costs to develop a Core Asset Base (CAB) that satisfies a particular scope, not the cost at the level of individual features. In the present work, out of the four cost functions of SIMPLE, we assume that the function Corg( ), which calculates organizational and process related costs, does not depend on the product platform scope. The saved development effort model presented in Schmid (2002) is another cost model which works at an abstract level. It expresses the development effort saved by making a certain feature reusable, using four characterization metrics: req(f, p), eff(f, p), eff(f_gen), and eff_reuse(f, p). It can be extended to assess the development effort saved by introducing additional features to the products. This is done by incorporating the characteristic function post(f, p), which takes the value of 1 if feature f can be introduced in product p.
Although this model is very simple, real benefit metrics take a similar form. However, the specific interpretation of the various characterization metrics can vary considerably between organizations (Schmid, 2002). For example, there are differences in how to interpret and specify when a certain feature can be introduced into a product, i.e., when post(f, p) = 1. The cost savings model presented here maps the values of the characteristic metrics req(f, p) and post(f, p) to the classification and prioritization of customer needs. In other words, we use the priority of a feature for a certain product to decide whether the feature is required by that product or can be introduced into it. We based our cost savings model on two models: the SIMPLE model (Böckle et al., 2004a) and Schmid's effort saving model (Schmid, 2002). We specifically chose these models because the first is a proven generic model for estimating and analyzing the costs associated with several possibilities of evolving and instantiating a SPL (Böckle et al., 2004b), while the second proposes characterization metrics that allow customer preferences to be incorporated into the cost savings model. In addition, our model can be applied when the core assets and products are described in terms of features (Gillain et al., 2012; Muller, 2011; Ullah et al., 2010b). In this sense, applying SIMPLE requires effort estimates for the features. Effort estimation methods for software systems fall into three main categories: expert judgment, estimation by analogy, and algorithmic cost estimation (Angelis and Stamelos, 2000). We have presented an overview of the most popular cost estimation models for SPL development; the interested reader is referred to Ali et al. (2009) for a detailed survey of economic models for software product lines.
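To make the two ingredients concrete, the sketch below pairs a SIMPLE-style scenario comparison with a Schmid-style per-feature saved-effort computation. The function signatures and all person-month figures are illustrative assumptions, not the models' official definitions.

```python
def simple_savings(c_cab, c_reuse, c_unique, c_standalone):
    """SIMPLE-style scenario comparison: the cost of building a core asset
    base plus per-product reuse and unique-part costs, versus developing
    every product independently. C_org is omitted here, since it is assumed
    not to depend on the platform scope."""
    spl_cost = c_cab + sum(c_reuse) + sum(c_unique)
    return sum(c_standalone) - spl_cost

def saved_effort(req, eff, eff_reuse, eff_gen):
    """Schmid-style saved effort for making one feature reusable, under one
    plausible reading of the characterization metrics: pay eff_gen once for
    the generic implementation, then save (eff - eff_reuse) in every
    product p that requires the feature (req[p] == 1)."""
    return sum(r * (e - er) for r, e, er in zip(req, eff, eff_reuse)) - eff_gen

# Illustrative person-month figures for a three-product family.
print(simple_savings(c_cab=80, c_reuse=[15, 15, 15],
                     c_unique=[30, 25, 35],
                     c_standalone=[100, 90, 110]))  # 85

# One feature required by products 1-3 but not by product 4.
print(saved_effort(req=[1, 1, 1, 0], eff=[10, 12, 9, 11],
                   eff_reuse=[2, 2, 2, 2], eff_gen=14))  # 11
```

A per-feature term like `saved_effort` is what lets an optimizer score a candidate commonality decision feature by feature, which the whole-product-line SIMPLE functions alone cannot do.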
3. The PPSMS method

The PPSMS method we propose addresses the complex problem of optimizing a product platform scope. We make the following assumptions about the application context of PPSMS: (1) the organization is sensitive to increasing cost savings and to the perceived benefits of increasing commonality; (2) the organization does not have a time constraint that enforces the delivery of minimal products first as the initial products of the PL, because such a constraint restricts investment in preparing the platform and, as a consequence, may delay the time-to-market for the later products and their respective new releases; (3) the organization is at least at CMMI level 3, because PPSMS requires detailed cost estimates of features, and the necessary measurement and analysis practices are followed in an organization at CMMI level 3 (Ullah et al., 2010b; SEI, 2013). PPSMS consists of three phases, as shown in Fig. 1 using a UML activity diagram. The method requires three types of inputs. The first input is SPL portfolio related information. The second input is related to the Kano model of customer satisfaction (Kano et al., 1984). Among the many approaches that address customer need analysis, the Kano model has been widely practiced in industry as an effective tool for understanding customer preferences, due to its convenience in classifying customer needs based on survey data (Nilsson-Witell and Fundin, 2005; Xu et al., 2009). The Kano model of customer satisfaction is a useful tool to classify and prioritize customer needs based on how they affect customer satisfaction (Matzler and Hinterhuber, 1998; Berger et al., 1993). Essentially, Kano's model classifies customer preferences into five major attributes, namely Basic (B), Satisfier (S), Delighter (D), Indifferent (I), and Reverse (R). It allows selecting a set of product features that yield high customer satisfaction (Pohl et al., 2005). The last input is the cost estimates of features.
Briefly, the two steps of the first phase sequentially process Kano's survey data to classify features and then attach priorities to them, based on how they affect customer satisfaction in each market segment. These results are then used with the inputs of human experts (i.e., domain and marketing experts) to identify further indicators for commonality and variability analysis (Phase 2). Phases 1 and 2 have the role of preparing the necessary inputs (i.e.,
[Fig. 1 shows the workflow of PPSMS as a UML activity diagram: the inputs (portfolio related information, Kano survey of customer preferences, cost estimates of features, experts' inputs) feed Phase 1: Analyzing customer needs (Step 1.1: Classify customer preferences using the Kano model; Step 1.2: Prioritize features using the absolute importance values), which produces Kano categories and priorities for features for each product; Phase 2 (= Step 2): Analyzing features for potential commonality and variability, which produces indicators for controlling commonality and variability analysis; and Phase 3: Optimization (Step 3.1: Construct mathematical model; Step 3.2: Optimize with Simulated Annealing; Step 3.3: Analyze non-dominated solutions), which produces the optimized product platform scope.]
Fig. 1. The workflow of PPSMS.
parameters, constraints) for the optimization phase (Phase 3). This ensures that the generated platform scopes are technically feasible and valid, and satisfy the diversity of customer needs. It also promotes economic gain, and grounds the analysis in well-known expert practices for commonality and variability analysis rather than arbitrary choices. The optimization steps rely on these inputs for optimizing the product platform scope. The outcome of this phase is a set of non-dominated solutions. The decision maker analyzes these suggested solutions and selects a final solution based on his/her preferences.
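The non-dominated filtering that produces the outcome of Phase 3 can be sketched as follows, treating each candidate scope as a (commonality, cost savings) pair; the tuple encoding and the numbers are assumptions for illustration.

```python
def dominates(a, b):
    """True if scope `a` dominates `b`: at least as good on every objective
    (here commonality and cost savings, both maximized) and strictly
    better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(scopes):
    """Keep only the candidate scopes that no other scope dominates."""
    return [s for s in scopes
            if not any(dominates(t, s) for t in scopes if t is not s)]

# Candidate scopes as (commonality, cost savings) pairs -- illustrative values.
candidates = [(0.6, 120), (0.4, 150), (0.5, 100), (0.3, 90)]
print(non_dominated(candidates))  # [(0.6, 120), (0.4, 150)]
```

Here (0.5, 100) and (0.3, 90) are dominated by (0.6, 120), so only the two scopes that trade commonality against cost savings survive for the decision maker to choose between.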
3.1. Phase 1: analyzing customer needs

This phase aims to analyze customer needs using the Kano model. It requires several inputs: the segments of customers and their corresponding products, the list of features to be offered by the product line, and the initial list of differentiated features among the products.3 This portfolio related information can be generated by using existing portfolio scoping approaches (e.g., QFD-PPP). Another input is Kano's survey data for each market segment. The Kano model is constructed through customer surveys, where a customer questionnaire contains a set of question pairs for each and every product feature. Each question pair includes a functional form question and a dysfunctional form question. The functional form question captures the customers' response if a product has a certain feature, while the dysfunctional form question captures the customers' response if the product does not have that feature. For each customer segment, the final classification of a product feature is then made based on a statistical analysis of the survey results of all respondents in the segment. The output of this phase is the classification and priority of each and every feature, based on how it affects customer satisfaction in each market segment.
3 It is necessary to maintain the differentiation within the product portfolio to minimize the cannibalization effect (competition among a family of products).
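For reference, a single respondent's answer pair can be mapped to a Kano category with the standard evaluation table of Berger et al. (1993). The sketch below uses this paper's category labels (D, S, B, I) plus Reverse (R) and Questionable (Q); the answer wording is the conventional five-point scale and is an assumption here.

```python
# Kano evaluation table (Berger et al., 1993), relabeled with this paper's
# categories: D = delighter (attractive), S = satisfier (one-dimensional),
# B = basic (must-be), I = indifferent, R = reverse, Q = questionable.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
TABLE = {
    "like":      ["Q", "D", "D", "D", "S"],
    "must-be":   ["R", "I", "I", "I", "B"],
    "neutral":   ["R", "I", "I", "I", "B"],
    "live-with": ["R", "I", "I", "I", "B"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}

def classify(functional, dysfunctional):
    """Map one respondent's (functional, dysfunctional) answer pair to a
    Kano category via the evaluation table."""
    return TABLE[functional][ANSWERS.index(dysfunctional)]

print(classify("like", "dislike"))     # S: wanted, and missed if absent
print(classify("must-be", "dislike"))  # B: taken for granted
print(classify("like", "neutral"))     # D: delights, but not missed
```

Aggregating these per-respondent categories over a whole segment is what Step 1.1 then does with the most frequent observation approach.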
Table 1
An example of categorizing and computing the importance weight.

Feature   D     S     B     I     Total   Category   SAT    DIS    w
f1        12    23    50    15    100%    B          0.40   0.73   0.73
f2        11    22    20    47    100%    I          0.33   0.42   0.42

Note. B: percentage of responses categorizing the feature as basic; S: percentage for satisfier; D: percentage for delighter; I: percentage for indifferent; SAT: impact on customer satisfaction; DIS: impact on customer dissatisfaction; w: absolute importance.
3.1.1. Classify customer preferences using the Kano model

After deploying the questionnaire to a number of potential customers, each answer pair is mapped onto the Kano evaluation table (Berger et al., 1993), revealing an individual customer's perception of a product feature. The Kano evaluation table shows the overall distribution of the feature categories. After the answers to the functional and dysfunctional questions have been combined in the evaluation table, the most frequent observation approach is used to identify the Kano categories of features (Berger et al., 1993): the most frequently observed Kano category is assigned to each feature (as shown in Table 1). This step is applied for each market segment. If an individual product feature cannot be unambiguously assigned to one of the categories, the evaluation rule "B > S > D > I"4 is used. Detailed discussions and examples of Kano's model are presented in Matzler and Hinterhuber (1998) and Berger et al. (1993).

3.1.2. Prioritize features using the absolute importance values

Often it is not possible to satisfy all features in a single product, due to technical restrictions or financial reasons. Thus a decision has to be made on which features should be realized by the product. Prioritizing features provides the decision capability to make trade-offs between features when necessary. Kano's method provides valuable help in trade-off situations in the product platform
4 A > B means: if the collected answers are distributed between categories A and B the feature is assigned to category A (Pohl et al., 2005).
scoping stage. If two product features cannot be met simultaneously due to technical or financial reasons, the feature which has the greatest influence on customer satisfaction can be identified (Matzler and Hinterhuber, 1998). Measuring the impact of individual features on customer satisfaction is performed in this step. Once Kano categories are identified, priorities of features within each of these categories for each market segment are identified using the absolute importance values of features, which are calculated using two terms: impact on customer satisfaction (SATj) and impact on customer dissatisfaction (DISj) (Sireli et al., 2007):

SATj = (Dj + Sj) / (Dj + Sj + Bj + Ij)   (1)

DISj = (Bj + Sj) / (Dj + Sj + Bj + Ij)   (2)

where Dj, Sj, Bj, Ij represent the percentages of responses for feature j. Similar to Sireli et al. (2007), we assume that achieving customer satisfaction is equally as important as avoiding customer dissatisfaction. For this reason, we identify the absolute importance of each feature as the higher of SATj and DISj:

Wj = max(SATj, DISj)   (3)
Table 1 shows an example of categorizing and computing the importance weight based on customer satisfaction for a product. The table shows two different features and their respective SAT, DIS, and category values, evaluated according to the most frequent observation approach. Thus, f1 would be a basic feature (50%) and f2 an indifferent feature (47%). The absolute importance of each feature is selected as the higher value in (3). Thus, since the impact on customer dissatisfaction (DIS) of f1 is 0.73 and its impact on customer satisfaction (SAT) is 0.40, the importance value of f1 is taken to be 0.73. In the same way, the importance value of f2 is taken to be 0.42. To identify high priority features within a category of features for each product, we use the calculated absolute importance values of features (Wj). These values are used to order the features within each category. This applies to the satisfier, delighter, and indifferent categories. For these categories, a decision has to be made on which features among each category should be included in the product. For example, if there were several customer features whose mode was delighter, we can rank the delighter features in descending order of importance and, using that order, select the top delighter feature(s) as high priority ones. This step is applied for each market segment. For the basic category, there is no need to arrange the features based on their priorities, because these features are taken for granted by the customer: the absence of these features would lead to high customer dissatisfaction, so they are not subject to trade-off. The result of this prioritization is a good basis for analyzing which features are high priorities for all or a large group of the market segments.
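The classification and weighting procedure above is easy to mechanize. The following Python sketch (ours, not from the paper) applies Eqs. (1)-(3) and the most frequent observation approach with the tie-breaking rule B > S > D > I to hypothetical survey percentages matching the f1 row of Table 1:

```python
# Sketch (ours, not from the paper): Kano category and absolute importance
# for one feature in one market segment, per Eqs. (1)-(3).
def kano_weight(D, S, B, I):
    """D, S, B, I: percentages of responses per Kano category."""
    total = D + S + B + I
    sat = (D + S) / total              # Eq. (1): impact on satisfaction
    dis = (B + S) / total              # Eq. (2): impact on dissatisfaction
    w = max(sat, dis)                  # Eq. (3): absolute importance
    # Most frequent observation; ties broken by the rule B > S > D > I
    rank = {"B": 3, "S": 2, "D": 1, "I": 0}
    category = max([("B", B), ("S", S), ("D", D), ("I", I)],
                   key=lambda c: (c[1], rank[c[0]]))[0]
    return category, sat, dis, w

# f1 of Table 1: D = 12, S = 23, B = 50, I = 15 -> category B, W = 0.73
cat, sat, dis, w = kano_weight(D=12, S=23, B=50, I=15)
```

Running this for every feature and segment yields the category and weight columns of Table 1 directly from the raw response distribution.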
By using this information, it is possible to identify which features should be common for all market segments and which features should be developed specifically for particular market segments. For example, a feature that is a high priority for a large group of the market segments, and which the other market segments do not reject, is a good indicator for commonality (Pohl et al., 2005). On the other hand, features that are rated positively by one group of customers but are rejected by another group are candidates for variable features.

3.2. Phase 2 (=Step 2): analyzing features for potential commonality and variability

This phase aims to identify further indicators, in addition to the output of Phase 1, for categorizing a certain feature as a candidate
to be defined either as a common product line feature or a variable one. Here, the indicators for commonality and variability are identified based on well-known human experts' practices for commonality and variability analysis, presented in Pohl et al. (2005). According to these practices, further indicators for potential common and variable features can be obtained by analyzing features with regard to the following aspects:

• Other aspects of feature prioritization: Prioritizing features based on their impact on customer satisfaction is only one of several factors (e.g., competitors' products, urgency of implementation, importance of a feature for the product architecture, risk) that dictate what should be identified as high priority features. Therefore, in addition to the prioritization in Phase 1, domain experts can also suggest that a feature is a high priority for some product(s), or should be part of the platform, if, by their own judgment, they see it as an important feature for the product family.

• Change of customer needs: With the passage of time, what is now perceived as a delighter, or even as indifferent, may become basic in the near future, because it will have become commonplace (Nilsson-Witell and Fundin, 2005). Pohl et al. (2005) define such a foreseeable basic need that will appear in the product line's lifetime as a strategic commonality.

• Delta among features: Here, delta means a group of features which have mostly the same functionality with slight differences, or the same functionality with slight differences in quality levels (e.g., performance). Here, performance does not mean the product's overall performance; it simply refers to the way the feature is implemented. For instance, the same functionality can have different implementations, where each of them consumes a different amount of resources.
Domain experts can suggest that a feature is slightly different from another feature if, by their own judgment, such a slight difference does not have a significant impact on the overall product. For example, it does not require different hardware or a change in the operating environment. Moreover, it does not impact the distinctiveness of individual products within a family. If the difference is significant in distinguishing between a high and a low performance system, it is not considered "a slight difference". Depending on the available information, a delta can be identified by domain experts based on: the implementation source code level (percentage of differences); the requirements realized within the feature (number of different requirements); the context (packages of operational settings); or the structure and behavior of the feature. Accordingly, within each delta, it is possible to identify which feature is superior and which is inferior (superior vs. inferior).

• Technical conflicts and dependencies: In an SPL, not all features can be implemented independently. Hence, mutual dependencies and interactions of features are important constraints when developing an SPL. Features with high ratings, possibly from different customer groups, which cannot be realized within the same product because they conflict with each other (i.e., technical incompatibility) can lead to the introduction of variability (Pohl et al., 2005). This information is used to constrain the optimization process, to ensure the validity of the generated platform scopes.

3.3. Phase 3: optimization

The purpose of this phase is to generate, validate, and evaluate platform scopes. The scopes are generated based on a set of rules and the outputs of the previous phases. This set of rules is adopted from the commonality and variability analysis rules introduced by Pohl et al. (2005), which have been utilized successfully in real industrial SPLs.
These rules use the outputs of the previous phases. These outputs are used as input parameters and constraints
Table 2
Mathematical notations used in the model.

Indices
j    features, j = 1, ..., J
i    products, i = 1, ..., I
k    deltas, k = 1, ..., K

Input parameters
B            allowable development cost limit of the product line (budget); assumed value is $50,000
CAB          the set of features in the core asset base
Cu^j         cost of developing feature j the traditional way (stand-alone fashion)
Ccab^j       cost of developing feature j as a reusable asset (CAB)
Cr^j         cost of reusing the shared feature j from the CAB
CEu^j        cost of evolving feature j the traditional way
CEcab^j      cost of evolving feature j as a reusable asset
CEr^j        cost of reusing the evolved feature j from the updated core asset
Bj(i)        1 if feature j is categorized as a basic feature for product i, 0 otherwise
Sj(i)        1 if feature j is categorized as a satisfier feature for product i, 0 otherwise
Dj(i)        1 if feature j is categorized as a delighter feature for product i, 0 otherwise
Ij(i)        1 if feature j is categorized as an indifferent feature for product i, 0 otherwise
Rj(i)        1 if feature j is categorized as a reverse feature for product i, 0 otherwise
hj(i)        1 if feature j is a high priority feature for product i, 0 otherwise
difj(i)      1 if feature j is a differentiated feature for product i, 0 otherwise
Strj(i)      1 if feature j is a strategic feature for product i, 0 otherwise
wj(i)        absolute importance value of feature j for product i
Indiff_Highj(i)  1 if feature j is an indifferent feature with a high importance value for product i, 0 otherwise
Delta        the set of features that differ slightly from others (non-identical for all the products), indexed with Z ⊆ {1, ..., J}; a finite set with |Delta| = K elements in which each element k is a set of slightly different features grouped as one delta (we assume there is no overlap between the deltas)

Design variables
Xj(i)        1 if product i contains feature j, 0 otherwise
Xcab^j       1 if feature j is integrated into the CAB, 0 otherwise
Xr^j(i)      1 if product i reuses feature j from the CAB, 0 otherwise
Xu^j(i)      1 if feature j is uniquely developed for product i, 0 otherwise
to control and guide the optimization process. Additionally, this phase requires that cost estimates of features be available as input parameters (see Table 2). According to the business objectives for product line development, different criteria can be used for asset scoping. Return on Investment (ROI) is one such criterion, which can be estimated for a product line using a defined cost and benefit function (Lee et al., 2010). The cost estimates of features have to be available in order to estimate the ROI. Several techniques can be used for generating these estimates (e.g., an organization's own historical data, or domain experts' Delphi estimates). Hence, the proposed method utilizes information that is typically available during the scoping phase, especially when ROI is among the business objectives for product line development in the given organization. Evaluating and finding the optimized platform scope in the feasible solution space is another issue. In this research we use two functions as possible measures of platform scope optimality: (1) the amount of feature commonality; and (2) the projected life cycle cost savings. Phase 3 helps a decision maker select the most appropriate product platform scope by performing trade-off analysis on the commonality and cost savings objectives. In the following subsections, we present an overview of these techniques with the supporting mathematical representation.

3.3.1. Construct mathematical model

The mathematical model formulates the commonality decision (CD) as an optimization problem, so that tools can be built and used to minimize the effort and time of optimizing a CD. Table 2 shows the notation needed to build our mathematical model. In this paper, decisions are made based on features. We assume that the features are binary (i.e., a feature either is or is not included in the platform).

3.3.1.1. Constraints.
3.3.1.1.1. a. Product definition constraint. A product can be defined as a package of features (Pohl et al., 2005).
To define the products, we use the information produced by the Kano method. It enables the optimization of the choice of features with respect to customer satisfaction (Pohl et al., 2005). Basically, to avoid low customer satisfaction, all basic features should be implemented. By additionally including half of the satisfiers and two or three delighters, high-performance products can be developed (Pohl et al., 2005; Kano et al., 1984; Matzler and Hinterhuber, 1998). We refer to the sets of basic, high priority, and differentiated features for product i as "requirement i." To ensure customer satisfaction and deliver competitive products in the market, each product has to include these sets of features. Therefore, requirement i can be fulfilled by product i if and only if product i contains at least the set of basic features, the high priority features or valid replacements for those features, and the set of differentiated features. This can be achieved by introducing the following constraints:

Bj(i) + hj(i) ≤ Xj(i),   ∀j ∉ Delta, i   (4)

Bj(i) + hj(i) ≤ Σ_{z∈Zk} Xz(i),   ∀j ∈ Delta, i, k   (5)

difj(i) ≤ Xj(i),   ∀j, i   (6)
Inequality (4) states that a basic or high priority feature for a product which does not belong to a delta has to be included in the product. Inequality (5) states that a basic or high priority feature for a product which belongs to a delta has to be included in the product, or another feature from its delta group has to be included in the product. A feature that is differentiated for a product has to be included in the product; this is stated in inequality (6).

3.3.1.1.2. b. Variability constraints. According to Pohl et al. (2005), variability means "the places where the products can differ so that they can have as much in common as possible", for which "we can predefine what possible realizations shall be developed" (Pohl et al., 2005). A systematic treatment of variability requires a consideration of these two aspects. We address these aspects by introducing the following constraints:
• Feature dependencies: Mutual dependencies and interactions of features are important constraints when developing an SPL. They predefine and restrict the number of possible configurations. From the SPL literature, we identified the following types of feature dependencies and interactions:

- Implication ("requires"): feature fj can only be implemented in a product if feature fl is also implemented in that product. Therefore, if we select feature fj for product i, we also have to select feature fl for product i:

Xj(i) ≤ Xl(i),   ∀i   (7)

If features j and m together require feature l, then the product needs to use feature fl. This constraint is formalized as follows:

Xj(i) Xm(i) ≤ Xl(i),   ∀i   (8)

- Exclusion: there may be some features which cannot be realized within the same product because they conflict with each other. This may be due to technical incompatibility or due to a real conflict. For instance, if feature fj and feature fl cannot be used in the same product, this can be modeled by the inequality:

Xj(i) + Xl(i) ≤ 1,   ∀i   (9)

• Allocation constraint: a constraint that allows a feature fj to be placed in product i. We use this constraint to describe exactly where the products of the product line may differ in terms of the features they can provide. This can help in deciding what variability should be supported and what variability can be eliminated. Hence, it becomes possible to attain further commonality without limiting the choices for customers in each market segment. Under the defined 0-1 integer variable Xj(i), the allocation constraint can be introduced using the following constant:

aj(i) = 1 if fj can be placed in product i; 0 if fj cannot be placed in product i,   ∀j, i   (10)

Under this representation, the allocation constraints can be defined as follows:

Xj(i) ≤ aj(i),   ∀j, i   (11)

Using a matrix structure, the feature allocation constraint for f1 (for example) can be represented as follows:

[a1(i)] = [1 1 1 1 1]   (12)

where the columns represent the products. This means it is possible for feature f1 to be used in all the products. For each product, the value of aj(i) reflects whether feature j is a differentiated feature for product i, is rejected by the product, or conflicts with other high priority features for product i. This can be specified as follows:

- Rejected feature: if feature f1 is rejected by product 1 (R1(1) = 1), then it cannot be used in product 1. This is represented as:

[a1(i)] = [0 1 1 1 1]   (13)

- Differentiated feature: each product within a product line should differ from the others in ways that are meaningful to the customers in each relevant market segment. Therefore, if feature f1 is a differentiated feature for product 1 (dif1(1) = 1), then it can be used only in product 1, represented as:

[a1(i)] = [1 0 0 0 0]   (14)

- Conflict with other high priority features: if feature f1 technically conflicts with another feature specified as a high priority feature for product 1, then it cannot be used in product 1, represented as:

[a1(i)] = [0 1 1 1 1]   (15)

The variability constraints are not limited to these constraints. Other constraints can also be imposed, for example, by the specifications of the hardware environment in which the products will operate, and by the resources available for operating the products.

3.3.1.1.3. c. Product line constraints.

• Budget constraint: the development cost has to be within the allowable limit:

Σ_{j=1}^{J} Ccab^j Xcab^j + Σ_{j=1}^{J} Σ_{i=1}^{I} [Cu^j Xu^j(i) + Cr^j Xr^j(i)] ≤ B   (16)

• CAB constraints: a feature used in a product has to be either reused from the CAB or developed exclusively for that product:

Xr^j(i) + Xu^j(i) = Xj(i),   ∀j, i   (17)

Before a feature can be reused, it has to be integrated into the CAB (18). However, a feature to be integrated into the CAB must have the potential to be reused in at least one product. Therefore inequality (18) is replaced by equality (19):

Xr^j(i) ≤ Xcab^j,   ∀j, i   (18)

∏_{i=1}^{I} (1 − Xr^j(i)) = 1 − Xcab^j,   ∀j   (19)
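To make the constraint set concrete, the following Python sketch (all data hypothetical; not part of PPSMS or its case study) checks a candidate feature assignment against the implication (7), exclusion (9), allocation (10), CAB (17), and budget (16) constraints for a toy family of 2 products and 3 features:

```python
# Sketch with hypothetical data (not from the paper): feasibility check of a
# candidate assignment for J = 3 features and I = 2 products.
J, I = 3, 2
X = [[1, 1], [1, 0], [0, 1]]    # X[j][i] = 1 if product i contains feature j
a = [[1, 1], [1, 1], [0, 1]]    # allocation constant a_j(i) of Eq. (10)
requires = [(1, 0)]             # implication, Eq. (7): feature 1 needs feature 0
excludes = [(1, 2)]             # exclusion, Eq. (9): features 1 and 2 conflict

Xcab = [1, 0, 0]                # only feature 0 goes into the core asset base
Xr = [[1, 1], [0, 0], [0, 0]]   # reuse from the CAB
Xu = [[0, 0], [1, 0], [0, 1]]   # unique development per product

ok_alloc = all(X[j][i] <= a[j][i] for j in range(J) for i in range(I))
ok_req = all(X[j][i] <= X[l][i] for j, l in requires for i in range(I))
ok_excl = all(X[j][i] + X[l][i] <= 1 for j, l in excludes for i in range(I))
ok_cab = all(Xr[j][i] + Xu[j][i] == X[j][i]      # Eq. (17)
             for j in range(J) for i in range(I))

# Budget, Eq. (16): CAB development plus unique-development and reuse costs
Ccab, Cu, Cr, B = [30, 0, 0], [20, 25, 15], [5, 0, 0], 100
cost = (sum(Ccab[j] * Xcab[j] for j in range(J))
        + sum(Cu[j] * Xu[j][i] + Cr[j] * Xr[j][i]
              for j in range(J) for i in range(I)))
feasible = ok_alloc and ok_req and ok_excl and ok_cab and cost <= B
```

The same checks, applied to every candidate scope generated during the search, are what ensure that only valid platform scopes are evaluated by the optimizer.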
3.3.1.2. Rules for generating product platform scopes. Although the commonality and variability analysis process presented in Pohl et al. (2005) provides valuable information for deciding whether a feature should be variable or common for the software product line, its only focus is the customer point of view. As a consequence, the platform scope is customer-based and does not consider the implications of such a decision on the cost of the SPL. Moreover, it lacks decision models. For example, consider a group of products in which a certain feature has high priority: how large must the group be to consider it an indicator for commonality? And how should features that only differ marginally be defined as common features? Without explicitly specifying the decision criteria, it is very hard to integrate this process with optimization techniques. To overcome this problem, we propose decision models and quantifications for integration with this process, and additional rules to guide the optimization process in deciding whether a feature should be variable or common. In general, a feature j can be implemented as a common feature among all the products if:

a. It can be placed in all the products. This is defined using the allocation constraint as follows:

Σ_{i=1}^{I} aj(i) = I   (20)

b. The cost of developing and evolving it as a common feature among all the products (cost_commonj) is less than the cost of developing and evolving it the traditional way (cost_initialj):

cost_initialj − cost_commonj ≥ 0   (21)
However, the feature can still be placed in all the products, based on its priority to customers of the product portfolio, and the company. Therefore, different scenarios can exist in which a feature can be placed in all the products. Depending on the scenario, the way
we calculate cost_initialj differs from one scenario to another. This depends on the outputs of Phases 1 and 2. The possible scenarios are defined with the following rules:

Rule 1: The feature is categorized as basic by all the products in the portfolio. This is represented as Σ_{i=1}^{I} Bj(i) = I, and the feature will be implemented as a common feature if (21) is satisfied. (21) is calculated as follows:

Σ_{i=1}^{I} [Cu^j(i) + CEu^j(i)] − [Ccab^j + CEcab^j + Σ_{i=1}^{I} (Cr^j(i) + CEr^j(i))] ≥ 0   (22)

Rule 2: The feature is rated as high priority by all the products in the portfolio, Σ_{i=1}^{I} hj(i) = I, and will be implemented as a common feature if (22) is satisfied.

Rule 3: The feature is categorized as a strategic feature for all the products in the portfolio, Σ_{i=1}^{I} Strj(i) = I, and can be inserted into the products and implemented as a common feature if (22) is satisfied. This is to attain a stable set of common artifacts and to differentiate from competitors' products (Pohl et al., 2005).

Rule 4: The feature is rated as high priority by at least θ of the products in the portfolio. This is represented as θ ≤ Σ_{i=1}^{I} hj(i) < I, and the feature can be implemented as a common feature if (22) is satisfied. θ represents a threshold on the number of products in which the feature is identified as a high priority feature; its value is set by the human decision maker. If the number of products is less than θ, then the feature is considered to be a differentiated feature for the products in which it is identified as a high priority feature. Here, cost_initialj represents the cost of developing and evolving feature j in the traditional way for the products which rated this feature as high priority. Therefore, (22) is calculated as follows:

Σ_{i=1}^{I} hj(i) (Cu^j(i) + CEu^j(i)) − [Ccab^j + CEcab^j + Σ_{i=1}^{I} (Cr^j(i) + CEr^j(i))] ≥ 0   (23)

Rule 5: The feature which is neither basic nor strategic nor high priority for all the products, represented as θ ≤ Σ_{i=1}^{I} [hj(i) + Bj(i) + Strj(i)] < I, can still be implemented as common if (21) is satisfied. Here cost_initialj considers the products in which feature j is rated high priority, basic, or strategic. We assume there is no overlap between these categories. Hence (21) is calculated as follows:

Σ_{i=1}^{I} [hj(i) + Bj(i) + Strj(i)] (Cu^j(i) + CEu^j(i)) − [Ccab^j + CEcab^j + Σ_{i=1}^{I} (Cr^j(i) + CEr^j(i))] ≥ 0   (24)

The features which do not satisfy any of these five rules should be in the variable part of the software product line and are subject to Rule 6.

Rule 6: A feature which cannot be made common among all the products (i.e., it is rejected by some products or differentiated), Σ_{i=1}^{I} aj(i) < I, still has the potential to be implemented as common among the products that use it (partial commonality) if:

Σ_{i=1}^{I} Xj(i) (Cu^j(i) + CEu^j(i)) − [Xcab^j (Ccab^j + CEcab^j) + Σ_{i=1}^{I} Xr^j(i) (Cr^j(i) + CEr^j(i))] ≥ 0   (25)

This cost savings constraint considers only the products in the generated solution that use feature j. Up to now we have explained features that are identical for all the products in the portfolio. For slightly different features which are grouped together as one delta, we suggest the following model to resolve how each delta can be made common.

Rule 7: For a feature grouped with other feature(s) as one delta, upgrade the inferior feature j in product i to the superior feature (j + 1), upgrade(j → j + 1, i). However, this must satisfy the following constraints:

cost_initialk − cost_upgradek ≥ 0   (26)

satisfaction(j+1,i) ≥ satisfaction(j,i)   (27)

Constraint (26) states that the cost of resolving the delta k by upgrading features must be less than the cost of the initial state of that delta (before upgrading, where each feature is exclusively developed for its intended products). Constraint (27) states that the customers of product i must be satisfied with the upgrade, based on the rule D > S > B; for instance, a delighter (D) generates greater customer satisfaction than a basic feature (B). The concept of delta is explained with Example 1. A delta k of product i is solved by determining which of the features fk^z (feature z belongs to delta k) is used, either based on the reusable platform or on unique development. Since the relationship between the elements of each delta is mutually exclusive, each delta can have at most one feature per product, and the following constraint should be satisfied:

Σ_{z∈k} Xz(i) ≤ 1,   ∀i, k   (28)

To solve the delta we introduce the replacement feasibility constraint. Replacement feasibility can be defined as a constraint that allows a feature originally intended for a product to be replaced by a slightly different feature (i.e., a feature with superior functionality) used in another product. Hence, this constraint defines the pattern of diversion among the features grouped by delta analysis, where features with superior functionality have the potential to replace features with inferior functionality. To represent replacement feasibility mathematically, we first introduce the following constant:

rk^z(i) = 1 if fk^z can be used for delta k in product i; 0 if fk^z cannot be used for delta k in product i,   ∀i, z, k   (29)

Under this representation, the replacement feasibility constraints can be defined with the following inequalities:

Σ_{z∈k} rk^z(i) Xz(i) ≤ 1,   ∀i, k   (30)

Using a matrix structure, the feature replacement feasibility constraint for the Wi-Fi delta can be represented as follows:

[rWi-Fi^z(i)] = | 1 1 0 0 |
                | 1 1 1 1 |   (30.a)
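As a numerical illustration of the cost condition shared by Rules 1-5, the following sketch (all costs hypothetical; not the paper's case study data) evaluates the commonality condition of Eq. (22) for a single feature across three products:

```python
# Sketch with hypothetical costs (not from the paper): the commonality
# condition of Eq. (22) for one feature over I = 3 products.
Cu, CEu = [10, 12, 11], [4, 4, 5]   # per-product traditional dev./evolution cost
Ccab, CEcab = 18, 6                 # develop/evolve the feature as a reusable asset
Cr, CEr = [2, 2, 2], [1, 1, 1]      # per-product reuse / evolved-reuse cost

cost_initial = sum(u + e for u, e in zip(Cu, CEu))               # traditional way
cost_common = Ccab + CEcab + sum(r + e for r, e in zip(Cr, CEr)) # as CAB asset
make_common = cost_initial - cost_common >= 0                    # Eq. (22)
```

Here the reusable-asset route saves cost, so the feature would be a candidate for the common part; with higher CAB development cost the same check would leave it in the variable part.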
Table 3
An example to show how a delta can be defined as a common feature.

Product feature               | P1 | P2 | P3 | P4
f1 Wi-Fi (basic standards)    | S↓ | B↓ | I  | I
f2 Wi-Fi (advanced standards) | D  | S  | B  | B
f1 (f2)

Note. B: basic feature, S: satisfier feature, D: delighter feature, I: indifferent feature.
where the columns from 1 to 4 represent the products and the rows represent the elements z of the Wi-Fi delta (f1 and f2). In this matrix, we can see that f2 can replace f1 in products 1 and 2, while f1 is not allowed to replace f2 in products 3 and 4. Such a constraint considers:

- Customer satisfaction: if feature f2 is rated "Indifferent" by product 1, then it cannot replace f1, which is rated "Satisfier", in product 1. This is defined as:

[rWi-Fi^z(i)] = | 1 1 0 0 |
                | 0 1 1 1 |   (30.b)

- Differentiated features: if feature f2 is specified as a differentiated feature for products 3 and 4, it cannot be used to replace f1 for the other products. This is defined as:

[rWi-Fi^z(i)] = | 1 1 0 0 |
                | 0 0 1 1 |   (30.c)
Regarding the cost savings constraint for the delta model (26), cost_initialk is calculated as follows:

cost_initialk = Σ_{z∈k} Σ_{i=1}^{I} Xz(i) (Cu^z(i) + CEu^z(i))   (31)

The initial cost for each delta represents the reference cost for calculating the cost savings for the features belonging to the delta. Therefore, it is computed once, and its value is assigned to a constant. This is explained in the cost savings objective function (Section 3.3.1.3). After upgrading any feature which belongs to the delta, the new cost of the delta, denoted cost_upgradek, is calculated as follows:

cost_upgradek = Σ_{z∈k} [Xcab^z (Ccab^z + CEcab^z) + Σ_{i=1}^{I} (Xu^z(i) (Cu^z(i) + CEu^z(i)) + Xr^z(i) (Cr^z(i) + CEr^z(i)))]   (32)

Here, features within the delta can at some points be uniquely developed or based on a reusable platform; both possibilities are therefore considered in (32). Although we consider this an adequate list of rules according to the commonality and variability analysis process followed in this paper, it is by no means a complete list. Different treatments of commonality and variability analysis can impose different rules. For example, the correlation between evolution patterns and commonality and variability presented in Douta et al. (2009) can be another systematic basis for defining further rules. Moreover, the criteria of the optimization can enrich the list of rules defined in this paper.

Example 1: In Table 3, the feature Wi-Fi is not identical for all the products: it supports basic standards for P1 and P2 and advanced standards for P3 and P4. Therefore, it is placed on different rows in the table. A domain expert can identify and determine whether such features differ only slightly from each other. Based on that, they can be grouped as one delta. We represent this as f1 (f2).
Moreover, domain experts have to assess the impact of upgrading f1 in products 1 and 2 to f2. Specifically, upgrading features for the purpose of commonality may result in: increasing the overall cost of the feature (i.e., Wi-Fi), customer dissatisfaction, impacts on other features in P1 and P2, or the need for additional resources, such as DB size and processor capacity.

3.3.1.3. Objectives.
3.3.1.3.1. a. Maximize feature commonality. The first measure is the amount of feature commonality. It is important to have as much commonality as possible, and thereby to reduce the amount of variability to the required minimum (Ardis and Weiss, 1997). There are many advantages to increasing commonality. Basically, taking platform commonality into account can reduce the cost of development, shorten the time to market for new products and releases, increase productivity, and enhance the quality of products (Pohl et al., 2005; Schmid, 2002; van der Linden et al., 2007; Riebisch et al., 2001). High commonality in software systems is beneficial when system complexity is reduced, leading to lower effort required in design for flexibility (Pohl et al., 2005). Artifact management cost is also reduced, since fewer artifacts need to be managed, and fewer components need to be tested. While commonality can offer many advantages for a company, too much commonality within a product family can have major drawbacks. First, commonality can diminish the ability to match varying customer requirements and hide the differences between software products, thereby confusing customers among the products and creating a cannibalization effect. Moreover, for some organizations, time to market may enforce the delivery of more minimal products, especially in the first round of the PL, often leading commonality to decrease.
Finally, it is possible that common features possess excess functionality, thereby increasing resource consumption for some products and increasing the similarity between products, which is not something every product's customers appreciate. Accordingly, the issue of maximizing commonality should be considered carefully according to the context of the PL. Basically, the amount of variability should at least allow the development of individual applications that fulfill the goals and needs of the envisioned customers (Pohl et al., 2005). To do so, we control the process of maximizing commonality using the sets of constraints and rules defined in Sections 3.3.1.1 and 3.3.1.2, respectively. This defines a reasonable boundary for maximizing commonality without ignoring the essence of the product line (creating diversity efficiently and avoiding the cannibalization effect). In this paper, we define feature commonality as the amount of feature sharing among a family of software products. To measure the amount of feature commonality, we propose an analytical tool called the Software Commonality Index (SCI), which measures the degree of commonality and evaluates the scope of a product platform on a 0-1 scale, similar to indices used in the manufacturing domain (Johnson and Kirchain, 2010). This index provides the ability to compare different platform scopes using the degree of commonality in addition to the projected cost savings. It is proposed as the ratio between the number of features shared and the number of features that could be shared in a given product family. This ratio is calculated for each distinct feature in all products in the family being analyzed. These quantities are summed and then divided by the total number of distinct features. SCI is defined as:
SCI = [ Σ_{j=1}^{d} (S_j − 1)/(I − 1) ] / d    (33)
where S_j is the number of products sharing feature j (the number of repetitions), I is the number of products in the family, and d is the number of distinct features in the family. One is subtracted from the number of products sharing a feature and from the number of products,
to account for the first product that contains the feature. Consequently, the resulting ratio represents the number of products sharing a feature relative to the number of products that could share that feature. The concept of a distinct feature is explained in Example 2. With our mathematical representation we can describe SCI:
SCI = Σ_{j=1}^{J} X^j_cab [ (Σ_{i=1}^{I} X_r^{j(i)} − 1)/(I − 1) ]  /  Σ_{j=1}^{J} [ X^j_cab + Σ_{i=1}^{I} X_u^{j(i)} ]    (34)
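To make the index concrete, the following sketch computes Eq. (33) from the per-feature sharing counts (a minimal illustration; the function name and input layout are our own, not from the paper):

```python
def sci(share_counts, num_products):
    """Software Commonality Index (Eq. 33): the average, over the d
    distinct features, of (S_j - 1) / (I - 1)."""
    d = len(share_counts)   # d: number of distinct features
    I = num_products        # I: number of products in the family
    return sum((s - 1) / (I - 1) for s in share_counts) / d

# Example 2 from the paper: four stand-alone features in four products
assert sci([1, 1, 1, 1], 4) == 0.0
# a single distinct feature reused by all four products
assert sci([4], 4) == 1.0
```

Intermediate scopes fall between the extremes; for instance, three distinct features of which one is shared by two of the four products give sci([1, 1, 2], 4) = 1/9.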
To reflect the differences among features while measuring commonality, we incorporate a feature’s development cost as a weighting parameter to SCI. Thus SCI can be redefined as:
SCI_cost = [ Σ_{j=1}^{d} C^j (S_j − 1)/(I − 1) ]  /  [ Σ_{j=1}^{d} C^j ]    (35)

with

C^j = { C^j_cab + [C^j_r × NP_j]  if j ∈ CAB;  C^j_u  if j ∉ CAB },  ∀j    (36)

where NP_j is the number of products reusing feature j. With our mathematical representation we can describe SCI_cost:

SCI_cost = Σ_{j=1}^{J} [ C^j_cab X^j_cab + C^j_r Σ_{i=1}^{I} X_r^{j(i)} ] [ (Σ_{i=1}^{I} X_r^{j(i)} − 1)/(I − 1) ]  /  Σ_{j=1}^{J} [ C^j_cab X^j_cab + Σ_{i=1}^{I} (C^j_u X_u^{j(i)} + C^j_r X_r^{j(i)}) ]    (37)

Example 2: In the first example shown in Table 3, we have two features, f1 and f2. Assume that both features are developed exclusively (in a stand-alone fashion) for their intended products. Consequently we have four distinct features, each with one repetition in the family (used by one product only). Thus, the value of SCI is 0 ((0/3 + 0/3 + 0/3 + 0/3)/4). If f2 can be developed as a common feature between P3 and P4, then we will have three distinct features in the family: f1 for P1, f2 for P2, and f2 developed for reuse and reused in P3 and P4. This means f2 has two repetitions in the family. However, if for products 1 and 2 we upgrade f1 to the reusable f2, then we will have one distinct feature in the family with four repetitions. Accordingly, the value of SCI is 1 ((3/3)/1).

3.3.1.3.2. b. Maximize life cycle cost savings. The second measure of platform scope optimality is the amount of life cycle cost savings the company can achieve by using the candidate product platform scope. Based on Pohl et al. (2005), Northrop and Clements (2007) and van der Linden et al. (2007), cost saving is considered to be among the most attractive drivers for adopting the SPL technique. Hence, we consider it a valid objective of platform scope optimality. In our cost savings model, the development cost saving is specified by a metric, Cost_Dev, which gives the development cost saved by realizing a certain product platform scope. In addition to the development cost saving, by adopting the SPL approach, organizations can exploit further opportunities resulting from not having to duplicate corrective and evolutionary maintenance activities. However, they do have to incur some cost to incorporate upgraded feature(s) into their products. The evolution and maintenance cost saving is specified by a metric, Cost_Evo, which gives the evolution and maintenance cost saved by realizing a certain product platform scope.

For simplicity, we assume the evolution cost estimates cover the whole evolution and maintenance cycle (i.e., the total number of upgrades). In order to effectively determine the values of Cost_Dev and Cost_Evo, we define the following variables:

• req(j,i): is feature j required for product i? It is equal to 1 if feature j is a basic, high-priority, or differentiated feature for product i, and 0 otherwise. With this definition we can describe req(j,i):

req^{j(i)} = { 1  if B^{j(i)} = 1 or h^{j(i)} = 1 or d^{j(i)} = 1;  0  otherwise },  ∀j, i    (38)

• post(j,i): can feature j be introduced for product i? It is equal to 1 if feature j is categorized as a delighter, satisfier, or high-priority indifferent feature, or is identified as a strategic feature for product i, and it is not grouped as a delta with other features. It is 0 otherwise. With this definition we can describe post(j,i):

post^{j(i)} = { 1  if (S^{j(i)} or D^{j(i)} or Str^{j(i)} or Indiff High^{j(i)} = 1) and j ∉ Delta;  0  otherwise },  ∀j, i    (39)

req^{j(i)} + post^{j(i)} ≤ 1,  ∀j, i    (40)
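The indicator functions of Eqs. (38) and (39) can be sketched directly (a minimal illustration using the paper's Kano category codes; the function signatures and flag names are our own):

```python
def req(category, high_priority=False, differentiated=False):
    # Eq. 38: feature j is required for product i if it is basic (B),
    # a high priority feature, or a differentiated feature there
    return 1 if category == "B" or high_priority or differentiated else 0

def post(category, strategic=False, high_indiff=False, in_delta=False):
    # Eq. 39: feature j can be introduced for product i if it is a
    # satisfier (S), delighter (D), strategic, or high-priority
    # indifferent feature, and is not grouped into a delta
    can_charge = category in ("S", "D") or strategic or high_indiff
    return 1 if can_charge and not in_delta else 0

# a feature rated "reverse" (R) by a segment: neither required nor introducible
assert req("R") == 0 and post("R") == 0
# a plain basic feature is required, and Eq. 40 holds: req + post <= 1
assert req("B") + post("B") == 1
```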
Although indifferent features are characterized as ones a customer does not care much about, including an indifferent feature in a product may still be worthwhile, to add functionality, if the customer assigns a high enough importance value to it, as suggested in Berger et al. (1993) and Sireli et al. (2007). Hence, this study suggests that a company can charge the customer for providing those high-priority indifferent features (Indiff High^{j(i)} = 1). On the other hand, indifferent features with the lowest importance values (Indiff High^{j(i)} = 0) should not be included in any product (Pohl et al., 2005; Nilsson-Witell and Fundin, 2005; Sireli et al., 2007). This is either because the customer does not care about them and is not willing to pay for those extra features, or because the product developers simply feel that these features will not add any value to the product. Therefore, this study suggests that inserting those features into some products represents an extra cost to the product line. Hence, when the company considers introducing indifferent features into some products for commonality, it has to assess the impact of such a decision on cost savings and commonality. The reason for introducing the two functions is twofold: first, req(j,i) allows us to compare the cost of the initial assignment of a feature to products with the cost of making it a common product line feature. Second, post(j,i) allows us to decide which product(s) the feature can be inserted into based on the customers' willingness to pay. If the feature cannot be inserted into some products (i.e., customers of these products do not care about it and are not willing to pay for this extra feature), then this feature represents an extra cost to the product line. This can happen in two situations:

• Inserting new features: Inserting an indifferent feature with a low importance value into a product (Indiff High^{j(i)} = 0) will cause extra cost to the product line.
• Upgrading features: The initial state for features that belong to a delta is to assign those features to the products which require them (req(j,i) = 1). However, for the purpose of commonality we are interested in whether it is possible to achieve cost savings by modifying this initial assignment. The cost of the initial assignment of those features is the reference point for estimating their cost savings after making some upgrades. The reason is that the initial assignment represents what a customer is asking for and what he/she is willing to pay for; thus, we assume that he/she is not willing to pay for the upgraded functionality. Consequently, the company is responsible for the increase or decrease in costs resulting from introducing commonality for a certain delta of features. Therefore, post(j,i) is equal to 0 if features j and j + 1 belong to a delta. However, this does not mean that we cannot provide feature j + 1 for product i (as an upgraded feature replacing feature j). It simply means that we cannot charge the customer of product i for providing him/her with the upgraded functionality (j + 1). With these concepts, we can capture the impact of introducing commonality on the cost savings based on customer preferences. The total cost saving is the cost saved in the development cycle plus the cost saved in the evolution and maintenance cycle. Thus, the projected life cycle cost savings is:

Cost_Savings = Cost_Dev + Cost_Evo    (41)
with

Cost_Dev = Σ_{j=1}^{J} Σ_{i=1}^{I} C^j_u [ req^{j(i)} + post^{j(i)} X^{j(i)} ]  −  Σ_{j=1}^{J} [ C^j_cab X^j_cab + Σ_{i=1}^{I} (C^j_u X_u^{j(i)} + C^j_r X_r^{j(i)}) ]    (42)

Cost_Evo = Σ_{j=1}^{J} Σ_{i=1}^{I} CE^j_u [ req^{j(i)} + post^{j(i)} X^{j(i)} ]  −  Σ_{j=1}^{J} [ CE^j_cab X^j_cab + Σ_{i=1}^{I} (CE^j_u X_u^{j(i)} + CE^j_r X_r^{j(i)}) ]    (43)
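Eq. (42) can be sketched as follows (a minimal illustration under the assumption that the decision variables are given as plain 0/1 lists; the data layout is our own, and Eq. (43) is identical with the CE estimates substituted):

```python
def cost_dev(Cu, Ccab, Cr, req, post, X, Xcab, Xu, Xr):
    """Development cost saving (Eq. 42): the stand-alone cost of all
    required or introduced features minus the product-line cost."""
    J, I = len(Cu), len(req[0])
    standalone = sum(Cu[j] * (req[j][i] + post[j][i] * X[j][i])
                     for j in range(J) for i in range(I))
    platform = sum(Ccab[j] * Xcab[j]
                   + sum(Cu[j] * Xu[j][i] + Cr[j] * Xr[j][i] for i in range(I))
                   for j in range(J))
    return standalone - platform

# f1 (Cu = 40, Ccab = 56, Cr = 6) is basic for all five products; making it
# a fully common feature saves 5*40 - (56 + 5*6) = 114
saving = cost_dev([40], [56], [6],
                  req=[[1] * 5], post=[[0] * 5], X=[[0] * 5],
                  Xcab=[1], Xu=[[0] * 5], Xr=[[1] * 5])
assert saving == 114
```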
Domain experts may not be able to estimate the directions of evolution at scoping time. However, according to Böckle et al. (2004b), the ROI for the evolution cost is computed in the very early stages of PL development. Moreover, scoping is an iterative process, so it is important that these estimates are re-examined when actual data become available. When further details, new inputs, or even changes to the current scope arise, the scope has to be adjusted. Hence our model can be customized to consider only development cost estimates in the early scoping iterations; when evolution estimates become available, they can easily be integrated into our method. This is why the evolution cost is included in our formulation: depending on the available estimates, we can customize our model.

3.3.2. Optimize with simulated annealing

The analysis of the product platform scoping problem and the model development formulate a 0-1 integer, constrained, linear programming problem. This leads to combinatorial optimization. The size of the configuration space determines the appropriate approach to produce the optimal configuration(s). For small-sized problems, deterministic algorithms that explore all the possible combinations, also known as exact algorithms, can be computationally feasible. For large-sized problems, which occur with realistic SPLs, exact techniques that explore all the possible configurations suffer from exponential algorithmic complexity, and typically require days or more to identify the optimal platform scope; finding the exact solutions is NP-hard, so this approach becomes infeasible. For this reason, heuristic-based optimization algorithms that can find solutions with acceptable optimality are needed; such algorithms are also known as approximation algorithms.
Search-Based Software Engineering (Harman, 2007) advocates the application of heuristic techniques from the operations research and heuristic computation research communities to software engineering, and has already been applied successfully in several software engineering domains (Harman, 2007). However, to the best of our knowledge, there is no single work providing an
approximation algorithm for the product platform scope. Simulated Annealing (SA) (Van Laarhoven and Aarts, 1987) is a compact and robust technique that provides excellent solutions to single and multiple objective optimization problems with a substantial reduction in computation time. SA is widely used for large combinatorial and multi-peak optimization problems and converges to a globally optimal solution when run for a sufficiently large number of iterations. Our optimization solver is based on SA. We present an outline of the optimization solver in Fig. 2. The six steps of our solver are described as follows:

Step 1 defines and evaluates the initial state, which represents the assignment of features to the products and whether those selected features will be reusable or uniquely developed, according to rules 1, 2 and 6. The initial state is necessary for the SA technique and must belong to the feasible scoping space: it has to fulfill the requirements req(j,i) for each product and be valid against the feature dependencies.

Step 2 generates a new state by creating movements of feature assignments to the products and the platform. As mentioned earlier, product platform scopes are generated based on the set of rules presented earlier. This step defines a neighborhood of a given solution as a tentative solution. First, a feature is selected randomly to undergo a certain movement toward a new state (allocation to new products and possible assignment to the platform). If the selected feature is an identical feature (does not belong to any delta), the algorithm checks the applicability of commonality rules (3-6). Depending on the applied rule, the variables are adjusted. If the selected feature is not identical (grouped into a delta), the procedure then begins to resolve that delta by applying Substep (2.3).
To ensure the validity of the new state (it must be a feasible scope), the values of the variables impacted by the generated solution are adjusted using the dependency constraint equations. Moreover, using the product line constraints of Eqs. (17)-(19), impacted features are assigned either as platform based or as unique development. Any additional constraints, such as the budget constraint of Eq. (16), can be checked for violations at this stage. If any of the constraints is violated, the specific move is rejected and the algorithm moves back to Substep (2.1).

Step 3 evaluates the objective function of the new generation and computes the change in the objective function, ΔC.

Step 4 decides whether to accept or reject the tentative solution. The tentative solution is accepted if ΔC ≤ 0. If ΔC > 0, the solution might still be accepted, with a probability inversely proportional to the increase, in order to escape local minima.

Step 5 iterates the work from Step 2 to Step 4 for a predefined number of iterations before moving to the next step.

Step 6 updates the solution, and the algorithm decreases the temperature according to a simple geometric annealing schedule (Van Laarhoven and Aarts, 1987). The geometric function causes the algorithm to start with a rapidly decreasing temperature and to cool down slowly at the later temperature steps. The optimization process is terminated if the solution is recognized to have converged (there is no improvement in the best solution within a specified number of temperature steps) or the freezing point is reached.

3.3.3. Analyze non-dominated solutions

So far, the algorithm identifies which product platform scope(s) maximize either the amount of commonality or the cost savings among the generated ones (single objective optimization). If competing scopes exist, the optimization problem becomes a multiobjective optimization problem.
Instead of finding one global optimum as in single objective optimization, multiobjective optimization must find a set of solutions, called the non-dominated set, Pareto set, or Pareto optimal frontier, as all the non-dominated solutions are equally important and all of them are globally optimal solutions. By modifying our SA-based single objective optimization algorithm, proposed in Section 3.3.2, using the
Fig. 2. SA-based algorithm for optimizing the scope of a product platform.
simulated annealing scheme for multiobjective optimization suggested in Nam and Park (2000), the non-dominated scopes can be identified using commonality and projected cost savings. The human decision makers then analyze and select a proper solution from the set of non-dominated solutions, based on their preferences, as the scope for the product platform.
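The accept/reject rule and geometric cooling of Steps 2-6 can be sketched generically as follows (a simplified single-objective skeleton; the neighborhood move and objective below are toy stand-ins for the feasibility-checked feature-assignment moves of Fig. 2, not the paper's full solver):

```python
import math
import random

def anneal(initial, neighbor, cost, t0=100.0, alpha=0.9,
           iters_per_temp=50, t_min=1e-3):
    """Generic simulated-annealing loop with a geometric schedule."""
    state = best = initial
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):        # Step 5: iterate at this temperature
            cand = neighbor(state)             # Step 2: tentative move
            delta = cost(cand) - cost(state)   # Step 3: change in the objective
            # Step 4: accept improvements; accept a worse move with
            # probability exp(-delta / t) to escape local minima
            if delta <= 0 or random.random() < math.exp(-delta / t):
                state = cand
            if cost(state) < cost(best):       # track the best state seen
                best = state
        t *= alpha                             # Step 6: geometric cooling
    return best

# Toy usage: minimize (x - 3)^2 over the integers
random.seed(0)
result = anneal(0, lambda x: x + random.choice([-1, 1]), lambda x: (x - 3) ** 2)
```

For the scoping problem the cost function would be the negated objective (cost savings or SCI), so that minimization maximizes it.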
4. Evaluation

In this section, we first report on the implementation of the proposed method in an illustrative case study (Section 4.1) and the evaluation of the optimization solver (Section 4.2). This is followed by a description of the quantitative and qualitative evaluation procedures performed to better understand human experts' perceptions of our approach (Sections 4.3 and 4.4). We summarize the results of the evaluation in Section 4.5. Finally, we present a discussion of assumptions and threats to validity in Section 4.6. The case study enables us to demonstrate the practicality of the proposed method, while the evaluation of the optimization solver helps us perform a statistical analysis of the time consumption and optimality of the solver. The quantitative and qualitative evaluations help us assess the effectiveness of the method in providing decision support for product platform scoping, and whether experts see a benefit in adopting our approach in a real industrial context.
4.1. Demonstration of the proposed method on an illustrative case study

In order to show how PPSMS can provide decision support to determine the appropriate product platform scope, we designed this illustrative case study. Initially, PPSMS was evaluated on a Home Automation System (HAS) platform scoping problem. HAS is a popular software system in the software product line literature with a number of customizable features. The actual selection of features collectively constitutes the HAS, influences the customer's perceptions, and ultimately determines his/her satisfaction. Therefore, a HAS is defined by the selection of features with respect to the customer's satisfaction. Moreover, the home automation domain is well understood by the human experts we could access for our study. Because of these characteristics, HAS was selected as a good candidate to demonstrate the abilities of PPSMS. The application problem was: select the most appropriate product platform scope for a family of HASs based on end users' features. The study team first prepared an initial features list for HAS design by conducting extensive market and literature studies. This list was refined to a final list of 36 features based on the recommendations of a focus group comprising two domain experts, a member of a marketing team, and two potential end users. After the features list was completed, the project team prepared
Table 4
Customer groups in our Kano survey.

Market segment    Income   Expertise      No. of respondents
Segment 1 (P1)    Low      Novice         51
Segment 2 (P2)    Mid      Intermediate   46
Segment 3 (P3)    Mid      Expert         62
Segment 4 (P4)    High     Intermediate   58
Segment 5 (P5)    High     Expert         45
a Kano questionnaire specifically for HAS and developed a web-based survey using an online tool.5 A total of 262 householders constituted the Kano survey respondent set. The respondents were grouped into five market segments based on their income levels and their level of expertise in IT, as shown in Table 4. For each market segment, this SPL aims at developing one product according to the rule "one member of the product line per customer segment" (Helferich et al., 2005). Therefore, the portfolio consists of five products (P1-P5). The list of candidate differentiated features among the products was also identified by the focus group. This list was further validated against actual customer preferences after analyzing customer needs. In particular, it was re-examined and finalized with respect to the spread in customer satisfaction among the customer segments. For each segment, surveys were conducted to acquire customers' assessments of features according to the functional and dysfunctional forms of questions. Each respondent was required to answer the Kano questions with respect to each and every feature. A link to the survey was distributed to the Kano survey respondent set using social websites (i.e., Facebook) and emails. For cost estimation of features, we asked each member of the study team individually to roughly generate an estimate of C_u^j and CE_u^j for each feature. Basically, the estimators used feature complexity (based on their judgment) relative to the other features, so that complex features received higher estimates than others. When estimates varied widely for a certain feature, the team resolved the issues and revised the estimates until they agreed that the range was acceptable. This can be seen as an instance of the Wideband Delphi estimation method. After that, we took the average for each cost estimate.
In the next step, we asked them individually to roughly estimate the Relative Cost of Reuse (RCR) and the Relative Cost of Writing for Reuse (RCWR) of each feature. The study team assumed those factors for each feature considering their possible ranges as stated in Poulin and Caruso (1993). Resembling the estimation of C_u^j and CE_u^j, the team resolved issues and revised estimates until they agreed that the range was acceptable. After that, we took the average for each RCR and RCWR. We calculated C_cab^j and C_r^j of each feature by applying RCWR and RCR, respectively, to its C_u^j estimate. The same procedure was used to calculate CE_cab^j and CE_r^j using the traditional evolution cost (CE_u^j); however, instead of the RCWR, the study team estimated a similar factor representing the relative cost of evolving a feature as a reusable feature. The features list and the cost estimates are reported in Table 5.

4.1.1. Phase 1: analyzing customer needs

Once the answers to the functional and dysfunctional questions were combined in the evaluation table for each market segment, customer expectations within each market segment were categorized as B, S, D, I, or R. Next, the impact on satisfaction and the impact on dissatisfaction of each feature were calculated using (7) and (2), and the highest one was selected as the absolute importance
5 www.kanosurvey.com; a description for each feature was included in the Kano questionnaire.
value using (3). Again, this procedure was repeated for each product (market segment). The Kano categories and importance weights of the features with respect to the five market segments are summarized in Table 6.

4.1.2. Phase 2: analyzing features for potential commonality and variability

Next, we analyzed features for further indicators of potential commonality and variability based on the expert opinions of two domain engineers and one marketing employee, who were knowledgeable about the HAS domain. For each product, highly rated satisfier and delighter features (according to their W_j values) were identified as high priority features for that product. For instance, f12 was identified as a high priority feature for P1 and P2, while f14 was for P4 and P5. Some indifferent features with the highest importance values were identified as high priority indifferent features based on expert opinions (e.g., f8 for P1) and denoted as "I" in Table 6. Additionally, the strategic features for each product and the feature dependencies were identified. These dependencies are summarized as follows:

- f16 excludes f25 and f26.
- f23 requires f27 or f28 or f29.
- One of (f10, f11) must be used in every product.
- f10 excludes f11.
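These dependencies can be checked mechanically for any candidate feature selection; a minimal sketch (the function name and set-based encoding are our own):

```python
def valid(selection):
    """Check the Phase 2 dependency rules for one product's feature
    selection, given as a set of feature ids."""
    if "f16" in selection and ({"f25", "f26"} & selection):
        return False                  # f16 excludes f25 and f26
    if "f23" in selection and not ({"f27", "f28", "f29"} & selection):
        return False                  # f23 requires f27 or f28 or f29
    if not ({"f10", "f11"} & selection):
        return False                  # one of (f10, f11) in every product
    if {"f10", "f11"} <= selection:
        return False                  # f10 excludes f11
    return True

assert valid({"f10", "f23", "f27"})
assert not valid({"f10", "f11"})          # mutually exclusive storage variants
assert not valid({"f23", "f10"})          # f23 without any encryption variant
```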
Regarding deltas, we analyzed and grouped slightly different features into four deltas: "authentication-password", "recording's video quality", "WLAN standard", and "remote connection encryption". For instance, authentication with low password quality accepts only alphabetic characters with a fixed length and can be changed only when necessary, while high password quality accepts special characters too, has a variable length, and has to be changed frequently. Therefore, UI support for the high quality password can handle different types of inputs and different sizes. Additionally, the authentication algorithm required by high quality passwords can handle the low quality password, and the same hardware can be used. The output of Phase 2 is also tabulated in Table 6.

4.1.3. Phase 3: optimization

Next, we identified the optimized platform scope(s). First, we customized the mathematical model introduced in Section 3.3.1 according to our case study specifications. For instance, the allocation constraint for f2 is represented as follows:

[a_2^i] = [0 1 1 1 1]    (44)
while the replacement feasibility constraint for remote connection encryption (delta 4) is represented as follows:

[r_remote^{z(i)}] = | 1 1 1 0 0 |
                    | 0 0 1 0 1 |
                    | 1 1 1 1 0 |    (45)
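The allocation constraint of Eq. (44) can likewise be checked mechanically; a small sketch (names and encoding are our own):

```python
# Allocation constraint for f2 (Eq. 44): segment P1 rated f2 as
# "reverse" in the Kano survey, so f2 may not be allocated to P1
a2 = [0, 1, 1, 1, 1]

def allocation_ok(assignment, allowed):
    """A 0/1 product assignment is feasible only where allowed is 1."""
    return all(x <= a for x, a in zip(assignment, allowed))

assert allocation_ok([0, 1, 1, 0, 1], a2)
assert not allocation_ok([1, 0, 0, 0, 0], a2)   # would put f2 in P1
```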
After that, we solved the optimization problem as a single objective problem using a deterministic solver that scans all the possible platform scopes (brute force). Specifically, we performed two experiments: the first to find the product platform scope with the maximum cost savings, and the second to find the one with the maximum commonality. Next, we solved the optimization problem using the SA-based algorithm proposed in Section 3.3.2. Likewise, we performed two experiments using this solver: one to find the product platform scope with the maximum cost savings (Experiment 3), and another to find the one with the maximum commonality (Experiment 4). In Experiment 3, after a relatively small number of runs, we ascertained that the solution found by the SA-based algorithm is the same unique
Table 5
Cost estimates of features.

                                                       Traditional (stand-alone)   Product line
Feature                                                Cu      CEu                 Ccab      Cr       CEcab     CEr
f1   Door Locks Control-Manual                         40      95                  56        6        132.25    2.4
f2   Door Locks Control-Electronic                     125     150                 187.5     31.25    207.4     15
f3   Authentication-Short Password Keypad              180     130                 270       36       180       27
f4   Authentication-Long Password Keypad               280     190                 420       56       262.5     28
f5   Authentication-Magnetic Card                      310     200                 496       62       286       31
f6   Authentication-Fingerprint                        400     310                 680       160      495       36
f7   Authentication Iris Scan                          485     450                 824.5     194      705       43.65
f8   Door-Auto Close                                   115     240                 172.5     23       325       11.5
f9   Door-Open/Close Sensors                           185     285                 296       64.75    427       37
f10  Data Storage-MySQL                                465     370                 697.5     116.25   507       65.1
f11  Data Storage-Oracle                               545     410                 872       136.25   602       76.3
f12  Out Door Camera Surveillance-Infrared             195     190                 292.5     58.5     273       39
f13  Out Door Camera Surveillance-B/W                  240     250                 360       72       351       48
f14  Out Door Camera Surveillance-Color                355     330                 568       106.5    490       71
f15  Out Door Motion Detection-Optical Based           320     390                 512       64       574       32
f16  Out Door Motion Detection-Radar Based             275     190                 412.5     55       273       13.75
f17  Cullet Detection                                  340     170                 442       34       228       27.2
f18  Glass Break Detection                             433     230                 649.5     86.6     325       43.3
f19  Recording's Video Quality T ≥ 1 s                 335     200                 469       50.25    286       36.85
f20  Recording's Video Quality 1 > T ≥ 0.5 s           405     260                 567       60.75    406       40.5
f21  Recording's Video Quality T < 0.5 s               510     315                 867       153      502.5     76.5
f22  Communications-Telephone                          185     130                 277       27.75    187.5     9.25
f23  Communications-Internet                           265     250                 397.5     45.05    337.5     21.2
f24  Cable LAN                                         185     870                 259       33.3     1023.5    18.5
f25  WLAN IEEE 802.11a,b                               270     230                 378       54       287.5     27
f26  WLAN IEEE 802.11a,b,g                             465     320                 697       116.25   408       46.5
f27  Remote Connection Encryption-128 Bits             290     215                 435       58       282       23.2
f28  Remote Connection Encryption-256 Bits             410     330                 697       143.5    420       61.5
f29  Remote Connection Encryption-512 Bits             665     500                 1163.75   199.5    728       133
f30  Message-Voice                                     165     180                 231       24.75    220       16.5
f31  Message-Data                                      285     270                 427.5     57       348       28.5
f32  GUI-Monitoring Devices                            135     270                 175.5     27       348       13.5
f33  GUI-Website                                       405     390                 631.8     101.25   533       60.75
f34  GUI-Touchscreen                                   331     270                 496.5     66.2     377       33.1
f35  GUI-Mobile                                        386     400                 617.6     115.8    546       38.6
f36  Remote Health Monitoring                          595     720                 1130.5    178.5    1110      89.25
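The product-line columns of Table 5 are derived from the stand-alone estimates via RCWR and RCR. For example, the f1 row is consistent with RCWR = 1.4 and RCR = 0.15 for that feature (these factor values are inferred from the table, not stated in the text):

```python
def product_line_costs(cu, rcwr, rcr):
    """Derive product-line estimates from a stand-alone estimate:
    Ccab (develop for reuse) = RCWR * Cu, Cr (reuse) = RCR * Cu."""
    return cu * rcwr, cu * rcr

# f1: Cu = 40; assumed RCWR = 1.4 and RCR = 0.15 reproduce the table row
ccab, cr = product_line_costs(40, 1.4, 0.15)
assert abs(ccab - 56) < 1e-9 and abs(cr - 6) < 1e-9
```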
optimal solution with the highest cost savings found by the deterministic algorithm (Experiment 1). Similarly, the solution found in Experiment 4 is the same unique optimal solution with the highest commonality found in Experiment 2. The outcomes of Experiments 3 and 4 reported in Table 7 represent the optimized platform scopes based on the cost savings and commonality objectives, respectively. These outcomes also represent the optimal solutions found in Experiments 1 and 2. In the table, the letter 'r' indicates that the respective feature is reused from the CAB by the product, while the letter 'u' indicates that the respective feature is uniquely developed for the product. The projected cost savings and commonality values for those optimized scopes are reported in Table 8. By examining the generated scopes we observed that increasing commonality increased cost savings in the majority of these scopes. However, at some decision points, increasing commonality caused extra costs, which could reduce the amount of cost savings. Thus, a scope with the highest possible commonality does not guarantee the highest cost savings, and vice versa. Therefore, we obtained two optimized scopes, one based on commonality and one on cost savings. However, it is possible that the highest commonality gives the highest cost savings, depending on the input parameters (i.e., cost estimates, feature priorities). PPSMS helps in identifying and analyzing such scenarios. Each scope consists of the set of features that should be implemented with either full commonality or partial commonality, in addition to the set of features that should be uniquely developed for each product. Moreover, it identifies the exact number of feature variants to be used for each feature, and the assignment of those variants to the individual products, which will maximize the value of the respective objective and best satisfy customer needs in the portfolio. For instance, f1 is to be
implemented as a fully common feature in the two scopes. This feature is perceived as a "Basic" feature by all the products in the portfolio and generates cost savings when implemented as a commonality, as compared to unique development. Therefore, this feature is to be implemented as a fully common feature in the two scopes by applying the first rule (Rule 1), while f17 is to be implemented with partial commonality (common among P2-P4) in the first scope, and as fully common in the second scope. This is explained as follows: customers of P1 and P5 perceive this feature as a "low priority indifferent" feature, so they are not willing to pay for this extra feature. Consequently, and based on our cost savings model, including this feature in P1 and P5 will add extra costs for the company. Therefore, from a cost savings perspective, this feature will not be included in P1 and P5. However, f17 satisfies Rule 4 and therefore can be implemented as full commonality (as in the second scope) when the respective objective function is the commonality index. Moreover, this decision has been assessed globally in the solution space to make sure it satisfies the stated constraints and outperforms other candidate scopes in the solution space. Regarding feature variants, consider the example of the Recording's Video Quality feature, which had three variants before optimization, according to the customer preferences reported in Table 6. In the initial state, each variant was uniquely developed for the intended product which will use it: T ≥ 1 s (f19) included in P1 and P3, 1 > T ≥ 0.5 s (f20) included in P2 and P4, and T < 0.5 s (f21) included in P5. As a result of this initial scope, the total lifecycle cost of these variants is 2 × (335 + 200) + 2 × (405 + 260) + (510 + 315) = $3225. Applying the delta model introduced in Section 3.3.1.2 during the optimization reduces the
Table 6 Portfolio related information and the outputs of Phases 1 and 2. Feature
Low-end segment
Mid-end segment
P1
P2
Category
W
Category
High-end segment P3 W
P4
Category
W
Category
P5 W
f1 B 0.86 B 0.87 B 0.77 B 0.79 R 0.00 D 0.59 B 0.76 S 0.72 f2 0.55 B 0.67 I B 0.33 I 0.37 f3 (f4 ) f4 D 0.57 S 0.56 S 0.74 B 0.65 √ f5 I 0.37 D 0.55 D 0.63 S 0.55 √ I 0.26 I 0.30 I 0.29 D 0.62 f6 R 0.00 I 0.24 R 0.00 I 0.37 f7 8 I 0.47 D 0.56 S 0.70 B 0.66 f I 0.33 D 0.59 S 0.58 S 0.60 f9 f10 I 0.37 I 0.38 S 0.69 I 0.39 f11 I 0.39 I 0.42 I 0.37 I 0.37 f12 S 0.71 I 0.45 S 0.68 B 0.70 I 0.28 S 0.68 B 0.73 I 0.33 f13 R 0.00 I 0.35 D 0.53 D 0.64 f14 15 I 0.35 I 0.55 D 0.64 I 0.63 f 16 S 0.68 S 0.70 I 0.30 D 0.55 f I 0.33 D 0.63 S 0.70 S 0.68 f17 I 0.29 I 0.36 I 0.40 D 0.56 f18 B 0.53 I 0.41 B 0.55 I 0.35 f19 (f20 ) 0.55 B S 0.56 S 0.56 B 0.62 f20 (f21 ) f21 D 0.57 D 0.58 D 0.58 S 0.65 22 B 0.84 B 0.84 B 0.84 B 0.83 f 23 D 0.53 S 0.64 S 0.68 B 0.68 f B 0.73 B 0.82 B 0.73 B 0.75 f24 0.37 I 0.35 B I 0.58 S 0.72 f25 (f26 ) f26 I 0.45 I 0.38 S 0.60 D 0.59 f27 (f28 ) I 0.63 S 0.59 B 0.64 I 0.32 I 0.69 D 0.55 S 0.58 S 0.67 f28 (f29 ) 29 R 0.00 R 0.37 D 0.55 I 0.54 f √ 30 D 0.65 D 0.54 S 0.56 I 0.32 f I 0.41 D 0.58 S 0.62 S 0.59 f31 B 0.73 B 0.75 B 0.73 B 0.68 f32 f33 D 0.62 S 0.70 B 0.56 S 0.63 √ √ f34 I 0.33 D 0.61 I 0.32 D 0.62 f35 I 0.30 I 0.31 D 0.62 I 0.46 36 I 0.26 R 0.27 I 0.45 D 0.61 f √ Note. : differentiated feature, B: basic, S: satisfier, D: delighter, I: indifferent, R: reverse (rejected), I: high priority indifferent feature. High priority features are bolded, strategic features are underlined, f3 (f4 ): f3 is slightly different from f4 (f3 and f4 are grouped as a delta).
number of variants that have to be included in the portfolio, based on the objective used in the optimization, as follows. In the scope optimized for cost savings, the number of variants was reduced from three to two: f20 is included in P1–P4 and f21 in P5. The effectiveness of this decision is shown by the reduction in the total cost of the Recording's Video Quality feature from $3225 down to $2143, an increase of $1082 in cost savings compared to the initial scope (Sinitial). Its effectiveness is also shown by the 2.7% increase in the commonality index value; this percentage represents the amount that this decision contributes to the increase in the commonality index when the initial scope (Sinitial, with 0 commonality) is compared with the optimized scope (with 0.5 commonality). In the scope optimized for the commonality index, only one variant, f21, is included in all the products; thus, Recording's Video Quality becomes a common feature. This decision reduces the total cost of the feature from $3225 down to $2437, increasing cost savings by $788 compared to the initial scope but reducing them by $294 compared to the scope optimized for cost savings. However, this decision contributes 7.3% to the increase in the commonality index when the initial scope (Sinitial, with 0 commonality) is compared with the scope optimized for commonality (with 0.6 commonality). Regarding f10 and f11, the majority of customers do not care about the type of data storage, as indicated by the Kano categories. However, f10 is a high priority satisfier for P3, so it has to be included in that product.
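The initial-scope figure above can be recomputed directly from the per-variant development and evolution costs. The following sketch is ours, for illustration only: the helper name and data layout are not part of PPSMS, and the optimized figures of $2143 and $2437 come from the delta model of Section 3.3.1.2, which is not reproduced here.

```python
def lifecycle_cost(assignments):
    """Total lifecycle cost of a set of (development, evolution) cost
    pairs, one pair per (variant, product) assignment in the scope."""
    return sum(dev + evo for dev, evo in assignments)

# Initial scope: each variant uniquely developed per product.
# f19 in P1 and P3, f20 in P2 and P4, f21 in P5.
initial = [(335, 200), (335, 200),   # f19 for P1, P3
           (405, 260), (405, 260),   # f20 for P2, P4
           (510, 315)]               # f21 for P5
print(lifecycle_cost(initial))       # 3225
```

This reproduces the $3225 total quoted in the text for the initial scope.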
The optimization results indicate that f11 will be eliminated from the portfolio, according to customer preferences and its exclude dependency with f10. With these considerations, including f10 as part of the platform is the optimized decision for the Data Storage feature in terms of commonality and cost savings. The results of Experiments 3 and 4 clearly indicate that competing scopes exist; in other words, no single scope has both the highest cost savings and the highest commonality among all the valid scopes in the solution space. Thus, by considering the two objectives (cost savings and commonality) together, the problem becomes a multi-objective optimization problem. Multi-objective optimization problems typically present a set of compromise solutions, called non-dominated solutions, for which no other solutions in the solution space are superior when the two objectives are considered together. For the HAS example, we identified the non-dominated solutions by resolving the problem as a multi-objective optimization problem (Experiment 5). To do that, we modified our SA-based algorithm according to the multi-objective simulated annealing technique presented in Nam and Park (2000). The result of this experiment is a set of non-dominated product platform scopes with their commonality values and projected cost savings. It is worth noting that the identified set also includes both of the single-objective optimal solutions: PPS (5) is the optimal solution for maximizing the cost savings objective, while PPS (1) is the optimal solution for maximizing the commonality
Table 7
Optimized product platform scopes.

[For each feature f1–f36 and each product P1–P5, the table marks the feature as reused (r) or unique (u), once under the scope optimized on the cost savings objective and once under the scope optimized on the commonality objective. The cell-to-feature alignment of the original table could not be recovered from the source.]

Note. r: reused, u: unique, f: eliminated feature from the portfolio.
index, as determined in Experiments 3 and 4, respectively. This shows that our algorithm can find the non-dominated solutions that constitute the global optimum solutions. The identified non-dominated solutions are shown in Fig. 3, which makes it much easier to visualize the tradeoff between cost savings and commonality level. As we can see, PPS (5) is the scope optimized for cost savings, with the highest cost savings among all the candidate product platform scopes, while PPS (1) is optimized for commonality, with the highest possible commonality. The remaining platform scopes have neither the highest commonality nor the highest cost savings among all the solutions; however, when both objectives are considered together, they perform better than the other candidate platform scopes. Moreover, a closer look reveals that some commonality decisions (CDs) decreased one type of cost but increased the other. This is the case between PPS (3) and PPS (1): increasing the commonality from 0.57 (PPS (3)) to 0.60 (PPS (1)) reduced the development cost savings by $352.7 but increased the evolution cost savings by $274. Hence, it is necessary to assess both types of cost when identifying the optimized decision. When the total life cycle cost is considered, introducing commonality at some decision points could decrease the total cost, while decreasing commonality at other decision points could also decrease the total cost, due to differences in customers' preferences and in the cost of making features common. With this information at hand, the decision maker analyzes the non-dominated solutions and selects one of them based on his or her perception of the benefit of increasing commonality. This perception typically differs widely among decision makers. One decision maker may be extremely sensitive to shortening the time to market for new products and releases, and therefore willing to accept some extra costs to achieve a higher amount of commonality. Another may be extremely sensitive to reducing costs, and thus forgo extra commonality that would introduce extra costs on the product line. While these first two decision makers consider one objective at a time, a third may analyze the non-dominated solutions looking for a candidate scope that performs better than the other candidate platform scopes when both objectives are considered together. In summary, we cannot conclude that higher commonality (i.e., as measured by SCIcost) produces the best cost savings. However, that is a possible scenario (as demonstrated in our case study). It
Table 8
Cost savings and commonality indices values of the optimized scopes.

Product platform scope                         Cost savings ($)                   Commonality
                                               CostDev   CostEvo     Total       SCI     SCICost
Optimized on cost savings (Experiment 3)       7251.60   13,912.25   21,163.85   0.45    0.50
Optimized on commonality index (Experiment 4)  6329.40   13,835.00   20,164.40   0.531   0.60
Fig. 3. Non-dominated product platform scopes maximizing the cost savings objective and the commonality objective.
is possible that the highest commonality gives the highest cost savings, depending on the input parameters (i.e., cost estimates and feature priorities). PPSMS helps in identifying and analyzing such situations.
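The non-dominated filtering described above can be sketched as follows. This is a minimal illustration, not the PPSMS implementation: the function name is ours, and the third scope is invented to demonstrate domination; only the two extreme scopes carry the values reported in Table 8.

```python
def non_dominated(scopes):
    """Return the names of scopes not dominated by any other scope.
    Each scope is (name, cost_savings, commonality); higher is better
    for both objectives. A scope is dominated if another scope is at
    least as good in both objectives and strictly better in one."""
    result = []
    for name, cs, ci in scopes:
        dominated = any(
            (cs2 >= cs and ci2 >= ci) and (cs2 > cs or ci2 > ci)
            for _, cs2, ci2 in scopes
        )
        if not dominated:
            result.append(name)
    return result

scopes = [("PPS(5)", 21163.85, 0.50),   # best cost savings (Table 8)
          ("PPS(1)", 20164.40, 0.60),   # best commonality (Table 8)
          ("X",      19000.00, 0.45)]   # hypothetical dominated scope
print(non_dominated(scopes))            # ['PPS(5)', 'PPS(1)']
```

Neither extreme dominates the other, which is exactly why the decision maker must trade the two objectives off against each other.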
4.2. Optimization solver evaluation

Although an approximation algorithm such as SA can produce a valid solution, it cannot guarantee an optimal one. The optimization behavior of the SA algorithm is determined by several parameters, whose settings determine how new solutions are generated. The parameters that constitute the annealing schedule, and that have to be properly chosen, are: the initial value of the temperature (T), the cooling schedule, the number of iterations to be performed at each temperature, and the stopping criterion that terminates the algorithm. To evaluate the effectiveness of SA and the appropriateness of the parameter settings, we used the optimal solution found by the deterministic solver to assess the optimality of SA, as shown in Table 9. To choose the appropriate parameter settings for SA, an experiment with 16 different configurations was carried out. Each run represents a unique configuration of values of the initial temperature, the cooling factor, the maximum number of iterations to be performed at each temperature (max tries), and the stopping criterion (convergence), represented by the maximum number of tries in which the solution is not updated (max success). To reduce fluctuations in the results caused by the random nature of the algorithm, we performed 20 repetitions of each run and took the average value of those results for analysis. The results of the 16 runs are summarized in Table 9. The best results were obtained in run 16, with an average optimality of 97%. In addition to the objective function value, three other parameters (the algorithm's cost measures) were considered in comparing these 16 runs: the number of fitness evaluations needed to find the optimal solution (NFE), the time consumption, and the number of function calls. Ideally, we would like the highest value of the objective function (in the case of maximization) while keeping the time consumption and the number of function calls as low as possible.
The NFE can be used when the optimal solution value is known; the lower the NFE, the higher the algorithm's effectiveness. For instance, only runs 10, 14, 15, and 16 converged to the optimal solution, with total cost savings of $21,163.85 and a reduction in total cost (development and evolution costs) of more than 56% compared to the initial value ($47,997). Moreover, the NFE indicates that run 16 needed only 2 fitness evaluations to reach the optimal solution. Furthermore, the results show that while runs 1 and 5 converged fastest, they failed to converge to the optimal value: they converged faster on a solution but produced lower optimality solutions than those generated in runs 10, 14, 15, and 16. Comparing the time consumption of the SA runs with that of the deterministic algorithm, which was 3853 s, shows that the SA-based approach is considerably more time efficient for optimizing the platform scope. Furthermore, an analysis of variance was run to assess the impact of the SA algorithm parameters on the objective function value. The results of the 16 full-factorial experiments are shown in Fig. 4. The main factor influencing the objective function value is the initial temperature: the higher the initial temperature, the higher the objective function value. While the other factors have less influence on the objective function, the graphs also suggest choosing the high level of 'Max Tries' (200), the high level of 'Max Success' (60), and the high value of the cooling factor α (0.95). Regarding time consumption, the main influencing factor is the cooling factor α: the higher the value of α, the higher the time consumption. That configuration was used in run 16, which confirms that these are the best settings of the SA parameters for our experiment. The same experiment was conducted using the commonality index as the objective function and revealed similar trends; due to space limitations, those details are not presented here.
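The annealing schedule discussed in this section (an initial temperature, geometric cooling, a bounded number of tries per temperature, and a stall-based stopping criterion) can be sketched generically as follows. This is a minimal maximizer of our own, not the PPSMS implementation; all names are ours, and for simplicity the stall counter is tracked per temperature level rather than per try.

```python
import math
import random

def simulated_annealing(objective, neighbor, initial,
                        t_init=5000.0, cooling=0.95,
                        max_tries=200, max_success=60):
    """Generic SA maximizer mirroring the schedule in the text:
    start at t_init, perform max_tries proposals per temperature,
    cool geometrically (T <- cooling * T), and stop after max_success
    consecutive temperature levels without improvement."""
    current = best = initial
    best_val = objective(best)
    t = t_init
    stale_levels = 0
    while t > 1e-6 and stale_levels < max_success:
        improved = False
        for _ in range(max_tries):
            cand = neighbor(current)
            delta = objective(cand) - objective(current)
            # Accept improvements always; accept worse moves with
            # Boltzmann probability exp(delta / T).
            if delta >= 0 or random.random() < math.exp(delta / t):
                current = cand
                if objective(current) > best_val:
                    best, best_val = current, objective(current)
                    improved = True
        stale_levels = 0 if improved else stale_levels + 1
        t *= cooling  # geometric cooling schedule
    return best

# Demo on a toy objective (not the scoping model): maximize -(x-3)^2.
random.seed(1)
best = simulated_annealing(lambda x: -(x - 3.0) ** 2,
                           lambda x: x + random.uniform(-1, 1),
                           0.0)
```

A higher initial temperature lets the search accept more uphill-and-downhill moves early, and a cooling factor closer to 1 stretches the schedule, which is consistent with the time-consumption trend reported above.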
4.3. Quantitative evaluation

There are several methods to evaluate new software engineering methods and tools (Zelkowitz and Wallace, 1998). Our technology is a new approach that improves and expands upon common scoping approaches to help optimize the scope of a product
Table 9
Details of experimental runs of SA.

Run  Cooling    Max    Max      Initial      Cost savings ($)  Optimality  Cost savings ($)  Optimality  NFE  Time consumption  No. function
no   factor α   tries  success  temperature  (best)            (best)      (mean)            (mean)           (mean) ms         calls (mean)
1    0.80       100    30       500          17,430            82%         16,084            76%         N/A  3883              5102
2    0.95       100    30       500          18,220            86%         17,566            83%         N/A  22,406            12,672
3    0.80       200    30       500          18,004            85%         17,142            81%         N/A  8761              7681
4    0.95       200    30       500          19,840            94%         18,624            88%         N/A  28,892            18,963
5    0.80       100    60       500          17,851            84%         16,296            77%         N/A  6080              6489
6    0.95       100    60       500          19,658            93%         18,412            87%         N/A  24,180            14,655
7    0.80       200    60       500          18,004            85%         16,719            79%         N/A  11,281            7992
8    0.95       200    60       500          19,658            93%         18,624            88%         N/A  31,211            22,048
9    0.80       100    30       5000         20,886            99%         20,105            95%         N/A  14,869            9288
10   0.95       100    30       5000         21,163            100%        20,105            95%         13   36,766            26,223
11   0.80       200    30       5000         20,175            95%         19,470            92%         N/A  16,602            11,744
12   0.95       200    30       5000         20,964            99%         20,105            95%         N/A  43,841            31,332
13   0.80       100    60       5000         20,886            99%         19,894            94%         N/A  15,232            10,781
14   0.95       100    60       5000         21,163            100%        20,317            96%         5    34,544            23,801
15   0.80       200    60       5000         21,163            100%        20,105            95%         6    19,510            11,792
16   0.95       200    60       5000         21,163            100%        20,528            97%         2    47,112            34,641

Note. NFE: number of fitness evaluations needed to find the optimal solution; N/A: fails to converge to the optimal value.
platform. Hence, we believe it should be compared with existing approaches. The comparison should answer the question of whether the proposed method is better, at least in some way, than the baseline. Such a baseline must be representative of the best known solution so far (an existing approach). However, to the best of our knowledge, the literature lacks methods similar to ours. This in particular meant that we could not set up a comparative study to answer the question of whether the proposed method is better than a baseline. Therefore, we had to rely on the results of the expert judgment method as a baseline for comparison, and on the experts' perceptions.

4.3.1. Research questions
The quantitative evaluation was targeted at answering the following research questions, RQ1 and RQ2:
RQ1: Is the method effective? More specifically, this question is concerned with whether the product platform scope(s) is appropriately identified by PPSMS. It evaluates two key aspects: (1) whether PPSMS is effective compared to expert judgment; and (2) whether the decision-making process followed in PPSMS is adequate as seen by the experts. In the context of this research, effective means a higher amount of commonality and projected cost savings. Our hypothesis is that PPSMS will generate platform scopes that are based on industrial practices reported in the literature; will verify and evaluate (using an optimization technique) complicated decisions whose consequences domain experts would be unable to foresee; and will thus recommend satisfiable platform scope(s) with higher commonality and cost savings. Accordingly, RQ1 is divided into three hypotheses.
H1: PPSMS would improve the product platform scope achieved by human experts in terms of increasing the amount of commonality.
H2: PPSMS would improve the product platform scope achieved by human experts in terms of increasing the amount of cost savings.
H3: PPSMS provides satisfiable outcomes (products and product platform scopes) as determined by human experts.
RQ2: Is there potential for adopting PPSMS in a real industrial context? To answer this question, we had to consider what factors would be of interest to the experts (practitioners) in assessing whether to adopt the technology, as well as what hypotheses we needed. According to Rogers' theory of innovation diffusion (Rogers, 2010; Schmid, 2002), it is important to assess whether
Fig. 4. Analysis of variance for the 16 SA runs.
the experts perceived PPSMS to be difficult to understand and use (complexity), whether it provided guidance to the experts, and whether the experts would see value in adopting our approach. Therefore, RQ2 is divided into three hypotheses to address these key factors: H4: PPSMS provides guidance to the interviewed experts. H5: PPSMS is understandable and easy to apply. H6: The value of PPSMS is well-perceived by the interviewed experts.
4.3.2. Participants Our sampling frame was SPL industry and academic practitioners involved with developing and/or marketing software, and who were knowledgeable in the Home Automation domain. The sample group was hand-picked from the target sample frame to include those who would likely produce the most valuable data for the purpose of the research. Thus we used a snowball sampling technique combined with purposive sampling (Oates, 2005). The sample size was 17 participants representing two domain analysts, three developers, two marketing personnel, two project managers, a product manager, and seven academicians.6 All the participants had more than 2 years of experience in the software field (e.g., development and marketing) and we had no control over them. On a scale of 1–5 (1: No experience, 5: Very experienced), the participants rated their experience in scoping-related activities. As we explained to the participants, scoping-related activities cover the following: attending workshops, courses and seminars on product management and scoping; self-reading of scoping and commonality/variability analysis; participating in scoping and commonality/variability analysis. 17% of the participants rated as 5, while 58% rated as 4, and a further 17% rated as 3; no one rated as 1.
4.3.3. Methodology
To answer RQ1, a comparative study was carried out, as shown in Fig. 5, in which the same problem specifications (to be optimized through our method) were distributed to the sample group of human experts. Each expert was asked to provide a new scope of the platform which in his/her judgment was valid and most appropriate. An external person who was knowledgeable in statistical analysis collected and evaluated each and every platform scope produced by the experts and by PPSMS, to validate H1 and H2. Next, H3–H6 were validated via a validation survey using closed-ended questions. The first version of the survey was tested by a postdoctoral researcher and two PhD students, to obtain feedback from an academic point of view, and by two persons from software solution companies, to make sure the questionnaire was understandable for all the respondents. The questionnaire was structured in four parts: six questions for validating H3 and two questions to validate each of H4–H6; the last three parts were to answer RQ2. To carry out the survey, we first conducted an interactive workshop lasting 90 min. During this workshop, we provided a reasonably thorough overview of our solution (PPSMS) to the participants, explained all the steps of PPSMS with examples drawn from the case study domain, and then presented and explained the results achieved by PPSMS in the case study. There was time for a question-and-answer session to clarify any concerns. After that, we circulated the questionnaire for the participants to answer. Then, the external person performed statistical analysis on the answers to the survey questions.
6 Graduate students, practitioners and lecturers of software engineering at KAIST and The University of Jordan, who are familiar with SPLE approach and possess more than 2 years of experience in software development.
Table 10
Validation of H1 and H2.

Approach          Cost savings (mean; SD)   Commonality (mean; SD)
PPSMS             (20,600.65; 438.4)        (0.551; 0.04)
Expert judgments  (19,094.45; 896.078)      (0.529; 0.05)
4.3.4. Measurements
To evaluate our approach in terms of the research questions RQ1 and RQ2, different measures and statistical procedures have been used to validate each of H1–H6. For H1 and H2, the external person measured the amount of commonality and cost savings for each and every platform scope produced by the experts and by PPSMS, using the commonality index and the cost savings model introduced in this paper (Section 3.3.1.3). As the next step, he used the results of the experts' group as a baseline for comparison and analyzed the information from this comparative experiment statistically. With respect to H3–H6, he examined the questionnaire responses according to the scale used. For H3, H5 and H6, the mean and the lower and upper bounds of the 95% confidence interval (CI) were computed, while for H4 the statistical procedure in the Appendix was followed.

4.3.5. Results
4.3.5.1. RQ1 (effectiveness). We will now discuss the validation of H1–H3 and compare it with the results we achieved from our validation. The aim of H1 and H2 is to investigate the likelihood of our approach reaching at least the same results as experienced experts. A statistical analysis was performed using the following hypotheses:
• H1₀: C_Commonality,Experts > C_Commonality,PPSMS (null hypothesis for H1).
  H1a: C_Commonality,Experts ≤ C_Commonality,PPSMS (alternative hypothesis for H1).
• H2₀: CS_CostSavings,Experts > CS_CostSavings,PPSMS (null hypothesis for H2).
  H2a: CS_CostSavings,Experts ≤ CS_CostSavings,PPSMS (alternative hypothesis for H2).
Starting with H1, the results are compared in Table 10. We compared the average "Commonality" obtained by the experts (i.e., 0.529) against each of the five "Commonality" values obtained by PPSMS for the non-dominated solutions. Out of these five comparisons, two (0.60 and 0.570) produced a value of T less than −2.583; hence, for these two values of "Commonality", H1₀ is rejected at the 0.01 level of significance.
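These comparisons are one-sample, one-tailed t-tests against the expert group's statistics from Table 10 (n = 17, hence df = 16 and a one-tailed 0.01 critical value of −2.583). A minimal sketch, with the helper name ours:

```python
import math

def one_sample_t(sample_mean, sample_sd, n, mu0):
    """t statistic for testing the expert group's mean against a fixed
    PPSMS value mu0: t = (mean - mu0) / (sd / sqrt(n))."""
    return (sample_mean - mu0) / (sample_sd / math.sqrt(n))

# Expert group (Table 10): mean commonality 0.529, SD 0.05, n = 17.
# Compare against the highest PPSMS commonality value, 0.60.
t = one_sample_t(0.529, 0.05, 17, 0.60)
print(round(t, 2))   # -5.85, well below the critical value -2.583
                     # (df = 16, alpha = 0.01), so the null hypothesis
                     # is rejected for this solution.
```

The same computation with the other PPSMS values reproduces the pattern of rejections reported in the text, up to rounding of the published summary statistics.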
On the other hand, one of the "Commonality" values obtained by PPSMS (0.55) produced a value of T less than −1.746; hence, for this value, H1₀ is rejected at the 0.05 level of significance. The other two values of "Commonality" (0.50 and 0.52) cannot reject H1₀ in favor of H1a. However, we also compared the average "Commonality" obtained by the experts (i.e., 0.529) against the average of the five "Commonality" values obtained by PPSMS (i.e., 0.55). We found that this comparison produced a value of T less than −1.746; hence, H1₀ was rejected in favor of H1a at the 0.05 level of significance. Thus, H1 is successfully validated. Similarly for H2, we compared the average "Cost Savings" obtained by the experts (i.e., 19,094.45) against each of the five "Cost Savings" values obtained by PPSMS. All five comparisons (20,164.40; 21,163.85; 20,243.00; 20,947.00; and 20,485.00) produced a value of T less than −2.583; hence, for all of them, H2₀ is rejected at the 0.01 level of significance. Moreover, we compared the average "Cost Savings" obtained by the experts (i.e., 19,094.45) against the average of the five "Cost Savings" values obtained by PPSMS (i.e., 20,600.65). We
Fig. 5. Workflow of the evaluation of RQ1 in a UML Activity diagram.
found that this comparison produced a value of T less than −2.583; hence, H2₀ was rejected in favor of H2a at the 0.01 level of significance. Thus, H2 is successfully validated. Accordingly, from the validation of H1 and H2, we can claim that PPSMS is effective compared to expert judgment. Nevertheless, it is not sufficient to validate the optimization results only in terms of the objective function values (H1 and H2); it is also necessary to validate them in terms of the achieved solutions (H3). H3 aims at investigating the adequacy of the platform scope(s) achieved by PPSMS, as perceived by practitioners via the validation survey. However, scoping a platform also impacts the design of individual products (inserting new features, replacing features, and/or deleting features). Hence, it is also necessary to validate the design of the individual products (the set of features each is made of). The product and platform designs were validated via the validation survey. The first part of the survey asked the participants to rate how satisfied they were with each product design (Q1–Q5) and with the product platform scope (Q6) on a scale of 1–5 (1: Very Dissatisfied, 2: Dissatisfied, 3: Neutral, 4: Satisfied, 5: Very Satisfied). The survey questions addressing H3 are reported in Table 11. We performed a statistical analysis of the inputs for the validation survey provided by the participants using the following hypotheses:
• H3₀: PPSMS does not provide satisfiable outcomes. This hypothesis is split into two hypotheses:
– H3.1₀: PPSMS does not provide satisfiable product design(s) (null hypothesis for H3.1).
– H3.2₀: PPSMS does not provide satisfiable product platform scope(s) (null hypothesis for H3.2).
Based on the obtained inputs for Q1–Q6, the mean values of these 17 inputs with a 95% CI (t-distribution) are summarized in Table 11. From the values in Table 11, we can observe not only that the respondents' answers were consistent for Q1–Q5, with a mean higher than 3 (the neutral point) in all cases, but also that the lower limit of the confidence interval is higher than 3. In other words, for all questions the respondents gave a positive evaluation of PPSMS. Therefore, we can reject H3.1₀ with 95% confidence. Moreover, we can observe that PPSMS provides very satisfiable product platform scopes, with a sample mean of 4.05 and a 95% CI for the actual mean of 3.674–4.443 (H3.2₀ has been rejected). Thus, from the evaluation of all questions, we can claim with 95% confidence that H3 has been successfully validated. As we would like our approach to be adopted in a real industrial context, we cannot rely on the effectiveness of the method alone;
Table 11
Validation of product designs (H3.1) and platform scope (H3.2).

Question                                                           Average  CI (95%)  Upper limit  Lower limit
Q1: How satisfied are you with the design of P1?                   3.882    ±0.477    4.359        3.405
Q2: How satisfied are you with the design of P2?                   3.941    ±0.339    4.280        3.603
Q3: How satisfied are you with the design of P3?                   4.000    ±0.406    4.406        3.594
Q4: How satisfied are you with the design of P4?                   4.000    ±0.315    4.315        3.685
Q5: How satisfied are you with the design of P5?                   4.353    ±0.361    4.714        3.992
Q6: How satisfied are you with the scope of the product platform?  4.353    ±0.312    4.665        4.041
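The confidence intervals in Tables 11–13 follow the standard t-based formula, mean ± t(0.025, df = 16) × SD/√n, with t = 2.120 for n = 17. A minimal sketch with illustrative ratings (our own invented sample, not the study data):

```python
import math
import statistics

def t_ci_95(ratings):
    """Mean and 95% t-based confidence interval half-width for a
    sample of 17 ratings (uses t(0.025, df=16) = 2.120, so this
    constant is valid for n = 17 only)."""
    n = len(ratings)
    mean = statistics.mean(ratings)
    half = 2.120 * statistics.stdev(ratings) / math.sqrt(n)
    return mean, half

# Illustrative ratings on the 1-5 satisfaction scale (not the study data).
ratings = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5]
mean, half = t_ci_95(ratings)
# A lower bound above 3 ("Neutral") is the criterion the paper uses
# to reject the corresponding null hypothesis with 95% confidence.
print(mean - half > 3)   # True
```

Applying the same formula to the published means and SDs reproduces the ± half-widths shown in the tables.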
we need to also consider the potential for adopting our method from the experts' perspectives, and that is the subject of RQ2, discussed below.

4.3.5.2. RQ2 (potential for adopting PPSMS). This research question aims at analyzing the potential of adopting PPSMS in a real industrial context. To address RQ2, we will now discuss the validation of H4–H6 against the results we achieved from the validation survey. Our survey had questions to directly assess the perceptions of the participants about each of the factors addressed by the hypotheses. H4 addresses the guidance offered by PPSMS, to check whether it provides helpful guidelines to the experts. The survey contained two questions, derived from Schmid (2002), that are related to the aspect of guidance: Q7: Have you felt sufficiently guided during the performance of PPSMS? Q8: Would you have expected more guidance from PPSMS? For each one, the respondents were asked to give an exclusive answer among four options: "Strongly Disagree", "Disagree", "Agree", "Strongly Agree". We performed a statistical analysis of the inputs provided by the participants for these questions, similar to the statistical analysis presented in Schmid (2002); the result is shown in the Appendix. The following hypothesis was used:
• H4₀: PPSMS does not provide sufficient guidance (null hypothesis for H4).
Based on the obtained responses, the probability (p) of obtaining a table similar to our result table, with five cells of deviation from the optimal result (the expected table), is less than 0.01 (p = 0.000079). Thus, we can reject H4₀ at the 0.01 level and accept our initial hypothesis that PPSMS provides sufficient guidance. H5 addresses the complexity of PPSMS, to check whether it presents any difficulty in understanding and application. Our survey contained two questions related to this aspect, as shown in Table 12.
For each question, we asked the participants to give an exclusive answer on a scale of 1–5 (1: Very Difficult, 2: Difficult, 3: Average, 4: Easy, 5: Very Easy). We performed a statistical analysis of the inputs provided by the participants for the validation survey using the following hypothesis:
• H5₀: PPSMS is hard for the interviewed experts to understand and apply (null hypothesis for H5).
Based on the obtained inputs for Q9 and Q10, the mean values of these 17 inputs with a 95% CI (t-distribution) are summarized in Table 12. We can observe that the respondents' answers were almost consistent for both questions, with a mean higher than 3 (the neutral point) in both cases. In the case of Q9, the lower limit was higher than 3, so we can claim with 95% confidence that the steps in PPSMS were easy for the interviewed experts to follow. Q10, however, presents a lower limit slightly lower than 3, but we consider it still acceptable. Therefore, we can claim with 95% confidence that H5 has been successfully validated. The final hypothesis to be validated addresses the perceived value of adopting PPSMS, to check whether the improvement of benefits is trivial or not. The last part of the survey has two questions addressing this aspect (Table 13). For each question, we asked the participants to give an exclusive rating on a scale of 1–5 (1: Very Probably Not, 2: Probably Not, 3: Possibly, 4: Probably, 5: Very Probably). Based on the obtained responses for Q11 and Q12, the mean values from these 17 respondents with a 95% CI (t-distribution) are summarized in Table 13. The results show that the respondents' answers were consistent for both questions, with a mean higher than 3 (the neutral point) in both cases, and with the lower limit of the confidence interval higher than 3 as well. Thus, from the evaluation of both questions, we can claim with 95% confidence that H6 has been successfully validated.

4.4. Qualitative evaluation

The qualitative analysis aims at obtaining and analyzing human experts' feedback on the proposed method and on its effectiveness in providing decision support for product platform scoping. Semi-structured interviews with qualitative questions were used for data collection. We interviewed four human experts, selected by criterion sampling from among those who participated in the validation survey of Section 4.3.3. The selected interviewees, three experts from industry (a product manager, a project manager, and a domain analyst) and one from academia, were the most knowledgeable and experienced experts among all the participants, and for that reason their inputs were considered most valuable for the qualitative validation of the proposed method and its results. The qualitative questions were open ended and asked each expert to comment on the answers he gave to the survey questions.
The collected data were summarized and conclusions were drawn from them by an external person. To ensure that our findings are valid, we fed the findings of the analysis (summarized in Table 14) back to the participants to assess how closely they considered them to reflect the issues from their viewpoints. The participants agreed that the findings reflected their perspectives. The identified themes were as follows. The first theme was the quality of results. All the interviewees considered the suggested platform scopes very reasonable, judging by the preferences of the customers and by their own experience. However, the main issue was the design of Product 1: three interviewees argued that this product possessed excess functionality in terms of included features. An illustrative quote for this issue was, "My main concern is the design of P1 where the included features are quite much compared to the remaining products in the portfolio. Thus, it is possible that P1 competes with P2 creating cannibalization effect". This can explain the low validation results for this product, as reported in Table 11, compared to the remaining products. However, this would be the case with vertically differentiated products, where products are created by adding features to the smallest product (in terms of number of features); hence, the smallest product is contained within the next
H.I. Alsawalqah et al. / The Journal of Systems and Software 98 (2014) 79–106
Table 12
Validation of complexity aspect (H5).

| Question | Average | CI (95%) | Upper limit | Lower limit |
| Q9: How easy is it to follow the steps in PPSMS? | 3.826 | ±0.361 | 4.187 | 3.465 |
| Q10: How easy is it to use PPSMS for communication with a scoping body? | 3.261 | ±0.298 | 3.559 | 2.963 |
Table 13
Validation of perceived value aspect (H6).

| Question | Average | CI (95%) | Upper limit | Lower limit |
| Q11: Does PPSMS provide useful decision support for product platform scoping? | 4.217 | ±0.259 | 4.477 | 3.958 |
| Q12: How likely are you to adopt and/or recommend PPSMS to use for product platform scoping? | 4.174 | ±0.310 | 4.484 | 3.864 |
higher level product. In PPSMS, we assume products are horizontally differentiated: products share some features with each other, but every product also has some unique features. We specified constraints to maintain the necessary differentiation among products according to the inputs of the marketing team. Hence, it is up to the marketing team to identify the required differentiation carefully, based on the preferences of customers in the different segments.

The second theme was the example techniques used in the method. Participants felt the techniques were helpful in supporting the decision maker, although one participant stated that further decision support was necessary in order to select among the non-dominated solutions provided by PPSMS. The main argument was about the cost savings model. Two interviewees found it a good idea to map cost to customer preferences, which made it possible to assess the impact of the decisions made during commonality and variability analysis on the total cost from the customer's point of view. Another interviewee was more concerned about the cost estimates used in the cost savings model. According to him, “I still have difficulties understanding if it makes sense to do such detailed costs estimates; in our company we do not do such estimates”. We believe that software engineering practices differ from one organization to another, and this includes cost estimation practices. Some organizations may apply very detailed cost estimation models (e.g., COPLIMO; Boehm et al., 2004) while others may work at a more abstract level, depending on the available historical data and experts' judgments. Therefore, we believe organizations at least at CMMI level 3, where measurement and analysis practices are followed, are capable of making such estimates (SEI, 2013). Moreover, the goal in this article is not to focus on specific techniques, but rather to provide a decision support framework. When it is impossible to make such detailed estimates, a relative scale can be used, as mentioned by one of the interviewees, without losing the decision support capabilities of PPSMS.

The third theme was the involvement of different stakeholders with different roles and backgrounds in the application of the method. All participants appreciated the consideration of different roles and information during the workflow of PPSMS. One participant suggested that legacy systems should be considered in the process, while another stated that the mapping between features and technical components was an important issue that needed to be addressed while deciding on commonality. Moreover, according to one interviewee, “There are a huge number of organizational, financial, technical and customer related criteria that can influence scoping decisions. The involvement of different roles can produce better decisions. However, the communication among stakeholders from different backgrounds (e.g., marketing, management, development) can be an issue to think about.” During the interactive workshop, we found that some participants were not familiar with marketing or customer satisfaction terms such as differentiated features, Kano's model, and the classifications of customer needs.

Table 14
Aspects in PPSMS positively and negatively judged by the interviewees.

| Comments | Positive (+) | Negative (−) |
| Creating cannibalization effect | | I1, I3, I4 |
| Mapping cost to customer preferences | I2, I3 | |
| Detailed cost estimates | | I4 |
| More automation capabilities | | I1, I2, I3, I4 |
| Mathematical formulation and optimization support | I1, I3, I4 | |
| Consideration of different roles and point of views | I1, I2, I3, I4 | |
| More decision support to analyze the non-dominated solutions | | I2 |
| Tabular presentation of information | I2, I4 | |
| Legacy systems | | I1 |
| Mapping between features and components | | I3 |

Note. I1: interviewee one.
Therefore, this gives a possible interpretation of the low validation result for Q10 in Table 12. Additionally, one interviewee commented that: “There is a range of uncertainty factors in making scoping decisions in general; such as cost estimates, technology change, competitors, evolution of customer needs. Although such range of uncertainty factors exists in the proposed method, the CD performed by domain experts during scoping phase also suffers from it. However, in the presence of such mathematical formulation and optimization support, it is easier to adjust those factors in the course of more accurate information. Thus, the decision maker can initiate further iterations of PPSMS by changing model settings to generate more solutions and perform a sensitivity analysis to identify the degree of influence of the adjusted factors on the results generated by PPSMS.” This suggests that PPSMS is likely to be superior to ad hoc decisions made by experts in terms of accuracy in assessing and evaluating the impact of such uncertainty on the CD.
4.5. Discussion on evaluation

Recall that our focus in demonstrating PPSMS on the illustrative case study was to show how PPSMS can support tradeoff analysis in selecting a candidate platform scope. To that end, we have shown that the solution space was reduced to five non-dominated solutions. The human decision maker can focus on those five solutions and select one of them as the platform scope, instead of spending substantial effort exploring the entire solution space. Moreover, the mathematical representation and the consequent optimization approach showed their effectiveness in verifying and evaluating complicated decisions. Domain experts could not be expected to foresee the consequences of such decisions in a large solution space based solely on their expertise. This is a significant outcome of our mathematical formulation of the product platform scoping problem and the consequent optimization technique. Additionally, through the results shown in Table 9 and Fig. 4, we ascertained that a unique optimal solution can be found by our SA-based algorithm through a relatively small number of iterations, with only 0.012 of the deterministic algorithm's time consumption.

Regarding RQ1, the results for H1–H3 show there is sufficient evidence to claim that the method is effective in identifying an appropriate product platform scope. PPSMS is designed to integrate different factors into a single mathematical program, which makes it possible to verify and evaluate a larger solution space than human experts can handle manually. For RQ2, the results suggest that overall PPSMS was viewed as quite easy to understand, and the participants thought that it would be useful within a real industrial context. This response indicates that there is potential for adopting PPSMS.
Moreover, according to the interviewed experts, the capabilities provided by PPSMS complement and expand upon current scoping capabilities, especially those that help in deciding on product platform scope. Although a huge number of factors can influence the scope of the product platform, the results of PPSMS are still very satisfactory and should be considered when deciding on platform scope.

4.6. Threats to validity

In this section, we discuss threats to the validity of the work presented in this paper and describe the procedures we took to alleviate their risks. Based on our experience and the proposed set of threats that may affect search-based software engineering (SBSE) experimental studies (Arros and Neto, 2011; Johnson, 2002), we identified the following threats.

4.6.1. Construct validity
• Lack of assessment of the validity of effectiveness measures (optimization criteria): one possible measure of product platform scope optimality is the organizational benefit(s) of the SPL development. In Section 3.3.1.3, we discussed the rationale behind the selection of cost savings and commonality as possible criteria for optimizing the platform scope. On that basis, we consider these measures valid, at least from a theoretical point of view, within the assumptions we made about the context of application of PPSMS. Another potential threat is that we formulated our own measures, Costsaving and SCIcost, to evaluate the effectiveness of PPSMS. To reduce the impact of this threat, we performed a subjective validation of the effectiveness of PPSMS based on the validation survey (Q1–Q6). Still, it is possible that other measures may lead to different conclusions. Additionally, it might be difficult in practice to obtain the precise cost estimates used in these measures. However, as cost estimation techniques in software engineering improve and as more raw data become available, those estimates can be adjusted and thus we can get
more precise results. Moreover, even though we started with this level of precision in the estimations, when it is impossible to make such detailed estimates using detailed cost estimation models (e.g., COPLIMO; Boehm et al., 2004), relative estimates can be used, since our result does not depend on the level of precision of the estimations. Such an approach has been taken in architecture evaluation techniques (e.g., Kazman et al., 2002; Lee et al., 2009) and in decision making techniques for selecting an optimal set of software requirements for implementation (e.g., Jung, 1998; Karlsson and Ryan, 1997). Relative values are suitable for such decision making techniques because their purpose is not to calculate exact values of the evaluation criteria of the alternatives but to select the alternative that best suits the given criteria. Further research should be conducted on fitting the appropriate information from practice to the proposed measures and on further validating the effectiveness of these measures in practice.
• Lack of discussion of the underlying model subjected to optimization: to build a model describing a software-related problem, simplifications of the real problem have to be made. We have made simplifications regarding feature interdependencies. In reality, the interaction between features is complex and may require additional constraints. For instance, we did not model cost dependencies among features. A cost dependency means that having a certain feature is by itself insufficient to determine costs; for example, if one feature is chosen for implementation, then the cost of implementing another feature increases or decreases. Moreover, constraint (21) can be generalized by taking into account that the benefit of an added feature should be higher than its costs.
Furthermore, we assumed that features can be categorized into two categories, reusable or unique, as a simplification of the reuse scenarios possible in practice. However, there is a third category widely applied in practice, called “reuse and adapt”. In this category, a base implementation of a feature, representing a 50–80% solution of the feature, is created in the product platform, and explicit variation points are then built in. To realize and finalize the feature, these variation points are resolved and adapted product-specifically. This assumption can limit the practical applicability of PPSMS. However, PPSMS can handle this category as follows. First, an explicit design variable should be introduced to encode this scenario as a design option, distinguishing it from the first two categories. The delta rule should then be modified to provide a treatment for this scenario in addition to its original resolution, which we introduced in Section 3.3.1.2. In this treatment, a base implementation of the features grouped as one delta is created in the product platform, and explicit variation points are then built in to support the variation among these features. To realize and finalize a certain feature of this delta, the base implementation is reused and the variation points are resolved and adapted product-specifically. For example, a base implementation for the feature Wi-Fi in Table 3 can be created in the product platform with an explicit variation point. This variation point is then resolved and adapted to provide f1 for P1 and P2 and f2 for P3 and P4. Experts have to specify for which deltas “reuse and adapt” is technically feasible. For each feasible delta, the cost of creating the base implementation, the cost of reusing this base implementation, and the adaptation costs for realizing and finalizing the feature in the corresponding products should be provided by experts as inputs.
After that, the new design variable has to be encoded into the objectives. For instance, SCIcost should also distinguish and capture the percentage of the feature's solution that the platform provides, together with the number of products reusing it, in addition to the quantities used in Eqs. (34) and (37). Thus, during the optimization, the reuse and adapt scenario will be considered and compared with
other scenarios (e.g., having either a reusable or a unique feature) while resolving the delta issue. We believe that the model presented in this paper can easily be enriched with more realistic and practical constraints. We argue that, with the detailed steps for constructing the optimization model, a reasonable approximation of the real-world problem can be provided. A good solution to such an approximation is a step forward in determining a practical solution and is far better than ad hoc solutions.
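As an illustration of the encoding just described, the three design options per delta could be enumerated as below. This is a toy sketch, not the paper's actual model: all function names, cost figures, and the brute-force search are hypothetical stand-ins for the expert-supplied inputs and the SA-based optimization.

```python
from itertools import product

# Design options per delta: the two original categories plus "reuse and adapt"
OPTIONS = ("unique", "reusable", "reuse_and_adapt")

def delta_cost(option, k, c_unique, c_common, c_reuse, c_base, c_adapt):
    """Total cost of one delta reused by k products, under a given design option."""
    if option == "unique":           # implement separately in each product
        return k * c_unique
    if option == "reusable":         # one full common implementation, reused as-is
        return c_common + k * c_reuse
    # reuse and adapt: partial base implementation plus per-product adaptation
    return c_base + k * (c_reuse + c_adapt)

def cheapest(deltas):
    """Enumerate all option assignments and pick the cheapest (toy search)."""
    return min(product(OPTIONS, repeat=len(deltas)),
               key=lambda assign: sum(delta_cost(opt, **d)
                                      for opt, d in zip(assign, deltas)))

# Hypothetical expert inputs for two deltas
deltas = [dict(k=4, c_unique=10, c_common=18, c_reuse=2, c_base=12, c_adapt=3),
          dict(k=2, c_unique=5, c_common=14, c_reuse=2, c_base=9, c_adapt=4)]
best = cheapest(deltas)
```

In the full method, the same three-way choice would be carried by the new design variable inside the SA search rather than by exhaustive enumeration.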
4.6.2. Conclusion validity
• Not accounting for random variation: to deal with the randomness of the results obtained from simulated annealing, several actions were taken:
- The simulated annealing parameters were carefully calibrated. The chosen values were determined based on the simulated annealing literature and on analysis-of-variance studies assessing the impact of the parameters on the objective function value.
- Different initial states of SA were used for a given instance, to avoid results that carry the benefit of a favorable initial state or the prejudice of a badly selected one.
- To reduce the fluctuations in the results caused by the random nature of the algorithm, we performed 20 repetitions of each run and presented the average of their results for analysis.
With these actions, we could verify that the SA-based algorithm converges to the optimal solution. In practice, however, this is very difficult to verify, since the optimal solution is usually unknown. The user therefore has to repeat the algorithm several times within the available time frame and take the best solution.
• Lack of a meaningful comparison baseline: to the best of our knowledge, the optimization model and the SA-based approach presented in this paper are the first attempts to mathematically formalize and optimize the scope of the product platform. Therefore, direct comparison with other works is not possible. Thus, to compare results, we have taken the results of a deterministic (non-heuristic) search as the baseline.
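The restart-and-repeat discipline described above can be sketched with a generic simulated annealing loop over binary selection vectors. This is an illustrative toy, not the paper's actual algorithm, parameter values, or objective functions; the cooling schedule and the separable toy objective are assumptions made for the sketch.

```python
import math
import random

def sa_maximize(score, n_bits, t0=10.0, cooling=0.95, steps=500, seed=None):
    """Generic simulated annealing over a binary vector, maximizing `score`."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]   # random initial state
    cur_val = score(state)
    best, best_val, t = state[:], cur_val, t0
    for _ in range(steps):
        i = rng.randrange(n_bits)
        state[i] ^= 1                                    # flip one bit (neighbor)
        new_val = score(state)
        if new_val >= cur_val or rng.random() < math.exp((new_val - cur_val) / t):
            cur_val = new_val                            # accept the move
            if cur_val > best_val:
                best, best_val = state[:], cur_val
        else:
            state[i] ^= 1                                # reject: undo the flip
        t *= cooling                                     # cool down
    return best, best_val

def best_of_runs(score, n_bits, repetitions=20):
    """Restart from different initial states and keep the best result."""
    return max((sa_maximize(score, n_bits, seed=r) for r in range(repetitions)),
               key=lambda res: res[1])

# Toy objective: reward selecting the first half of the features
score = lambda s: sum(s[:4]) - sum(s[4:])
sol, val = best_of_runs(score, n_bits=8)
```

Averaging the 20 repetitions, as done in the paper's analysis, would replace the `max` with a mean over the per-run `best_val` values.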
4.6.3. Internal validity
• Poor parameter settings: a complete description of the parameter values for the proposed technique and of the case study elements is explicitly presented in the design of the experiment, making the experiment reproducible.
• Accuracy of subject assumptions: our study involved two groups of subjects. The first group is the study team, who designed the illustrative case study and made cost estimates for the features. In practice, cost estimates for features are difficult to obtain; this is an inherent problem in software engineering. To minimize this threat, we selected a widely used case study from the software product line literature that the subjects were familiar with, allowing a better understanding of the case study specifications and thereby enhancing estimation accuracy. Moreover, when the estimates for a certain feature varied widely, the team resolved the issues and revised the estimates until the members agreed that the range was acceptable; the average was then taken for each feature. The second group is the human experts. The accuracy of the participants' responses was enhanced through the member selection criteria: the sample group was hand-picked from the target sample frame to include those likely to produce the most valuable data for the purpose of the research.
• Fatigue effects: in our experience, people in academia and industry usually attend 2–3 h meetings. Nevertheless, we ensured that all the sessions held with the participants lasted less than 90 min, to avoid fatigue effects.
4.6.4. External validity
• Lack of evaluations on instances of growing size and complexity: as the analysis was carried out on a problem of limited size and complexity, problems of varying size and complexity might lead to different results. However, this is not a serious threat, because the analysis was intended to assist people in choosing appropriate parameter settings for the algorithm rather than to generalize its results. Moreover, through this analysis we ascertained that the algorithm, given enough time and appropriate parameter settings, can find the unique optimal solution. In addition, as with any randomized algorithm, we expect that in practice users will perform as many repetitions as possible and take the best solution achieved. Yet, although the SA-based algorithm is shown to be a very efficient means of product platform scope optimization, there is still no exact bound on the size of the problems that can be solved in a reasonable amount of time with truly applicable solutions. More research should be conducted to further validate the effectiveness of the algorithm in terms of optimality and time consumption, and the appropriate parameter settings in practice. Since scope optimization results on industrial product lines are scarce in the literature, the analysis reported in this paper still provides an important contribution despite this potential threat.
• Representativeness of the case study: the selected product line might not be representative of all product line practices. However, we chose one of the most widely used real-world problems in the software product line literature, which allowed an understanding of the presented approach and of its adaptability to different product line domains, thereby increasing the generalizability of our approach.
Nevertheless, this does not automatically imply external validity of the results; additional replications are necessary to determine whether our results generalize to other domains.
• Uncertainty in modeling the problem: there is a range of uncertainty factors in modeling the problem, such as the cost estimates of the features. The solutions in this paper were identified under the assumption of deterministic cost estimates. Uncertainty in these estimates may lead to a situation where two different platform scopes cannot be safely differentiated. However, such uncertainty also influences the decisions made by software experts during scoping activities. To minimize this threat, the deviation in the estimates should be handled in our method by determining the sensitivity of the achieved solutions to the cost estimates of the features. To do so, the cost estimates are replaced with normal distributions centered on the given estimates (the original values) with a given relative standard deviation (i.e., 5%, 10%, 15%, 20%, 25%, 30%, and 35%). A trial simulation (e.g., a 10-trial simulation) is then conducted after computing the distribution of each estimate. The consistency of the simulation results with the original solutions is indicated by the average number of changes to the original solution; thus, the effect of the deviation in the estimates can be analyzed. The presence of the mathematical formulation and optimization support makes it easier to perform such a sensitivity analysis. This suggests that the proposed method is likely to be superior to ad hoc decisions made by experts in terms of the ease and accuracy of assessing and evaluating the impact of such uncertainty on product platform scoping.
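The sensitivity procedure described above can be sketched as follows. The cost figures and the simple selection rule are hypothetical stand-ins for the full PPSMS optimization; the sketch only demonstrates perturbing estimates with normal noise at a given relative standard deviation and counting how often the chosen scope changes.

```python
import random

def select_scope(costs):
    # Stand-in decision rule: pick the candidate scope with the largest
    # total cost saving (placeholder for the full PPSMS optimization)
    candidates = [("f1", "f2"), ("f1", "f3"), ("f2", "f3")]
    return max(candidates, key=lambda sc: sum(costs[f] for f in sc))

def sensitivity(costs, rel_sd, trials=10, seed=1):
    """Fraction of trials in which perturbed estimates change the chosen scope."""
    rng = random.Random(seed)
    baseline = select_scope(costs)
    changes = 0
    for _ in range(trials):
        # Replace each estimate with a draw from N(original, rel_sd * original)
        perturbed = {f: rng.gauss(c, rel_sd * c) for f, c in costs.items()}
        if select_scope(perturbed) != baseline:
            changes += 1
    return changes / trials

costs = {"f1": 100.0, "f2": 60.0, "f3": 30.0}
# Re-run the 10-trial simulation at each deviation level, as described above
profile = {sd: sensitivity(costs, sd) for sd in (0.05, 0.10, 0.20, 0.35)}
```

A low change fraction at a given deviation level indicates the selected scope is robust to that amount of estimation error.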
5. Concluding remarks

Selecting the optimized product platform scope is a challenging issue that must be handled carefully when designing a family of products. This paper proposed a step-by-step method for product platform scope optimization (PPSMS) that integrates different
types of information and techniques. It also proposed a mathematical formulation and a simulated annealing based algorithm to support the decision maker in selecting the optimized product platform scope (Section 3.3). The mathematical formulation and the consequent optimization approach can verify and evaluate complicated decisions and identify the optimized scope or, when no single scope in the solution space simultaneously optimizes every objective, a set of non-dominated scopes. PPSMS should be considered a decision framework showing how tradeoff analysis can be performed for selecting a candidate platform scope using techniques such as the cost saving estimation model, the commonality index, and Kano's model for analyzing customer needs.

PPSMS in its current form can easily be generalized by encoding different product line optimization criteria and constraints. First, PPSMS can address customer satisfaction as a criterion for product line optimization: the categorization of features and their priorities for each customer group can be used to measure and maximize the customer satisfaction of each product in the corresponding customer group. Second, the number of products can be made a variable rather than an input by encoding pricing and revenue generation models (Moorthy, 1984; Sundararajan, 2004); the product platform scope can then be defined and optimized considering both the revenue and the cost associated with it. Third, the characterization and benefit metrics introduced in Schmid (2002) and DeBaud and Schmid (1999) can also be encoded. In particular, experts can provide values for the characterization functions as inputs; these values are then used to evaluate the benefit functions. For instance, the characterization functions “S2: Competitor products” and “S3: Market-share gainer” can be used to evaluate the “Competitive Advantage” benefit metric.
The different benefit functions can then be used as optimization criteria for the product platform, instead of or in addition to those used in our approach. Fourth, when a time constraint enforces the delivery of minimal products, PPSMS can be adapted to restrict the amount of commonality accordingly. Lastly, PPSMS can be adapted to use more abstract or relative cost estimates for organizations that are not capable of making detailed cost estimates of features.

The PPSMS we proposed has limitations that need to be addressed in future work. (1) PPSMS addresses only a part of the business factors that influence the success of SPLs; other factors have to be addressed to provide a more comprehensive decision support framework. (2) We prioritized features only according to their impact on customer satisfaction. Other factors should also be considered in identifying high-priority features, including a product manager's personal judgment and the features included in competitors' products. (3) Analysis of customer needs and customization of the mathematical model may require significant effort for large SPLs. The overall effort to apply PPSMS can be reduced by providing tool support able to automatically analyze customer needs and customize the mathematical model. (4) Although the study team provided rough estimates of the features' costs, these are still artificial data. In our opinion, this does not limit the applicability of the method: the estimates can be adjusted in the course of more accurate data becoming available.

There are several key business factors that influence the success of SPLs (Ahmed and Capretz, 2007). Strategic planning is among these factors. Under the umbrella of this factor lie several components, such as the identification of plans to target market segments, which differentiation strategy to follow (horizontal vs. vertical), the competitors in each segment, and the order of entry
into the market. PPSMS addresses a part of this strategic planning. Future work should focus on adding further techniques for trade-off analysis, such as risk estimation and sensitivity analysis. Moreover, the objective of SPL development differs from one organization to another; the criteria for optimizing the platform scope thus have to be defined in the context of the business objectives of a specific product line development in a given organization. Hence, another promising area for future research is to consider additional criteria, such as quality and time to market, for optimizing candidate platform scopes. PPSMS also lacks consideration and analysis of the impact of legacy assets while generating product platform scopes; we are planning to model legacy assets as part of the context of our approach. In addition, we are planning to relax some of the assumptions currently applied in PPSMS. For example, the number of products is presently known as part of the inputs required by our method; we would like to weaken this assumption by encoding pricing and revenue generation models, so that the number of products becomes a decision variable rather than a known parameter. Also, features are presently prioritized only according to customer preferences; we would like to integrate different factors to prioritize features. Moreover, for simplicity, our model distinguished only between a reusable feature and a unique feature, which might not always be the case in practice, where the reuse-and-adapt scenario is heavily applied. We would like to explicitly integrate this third category into our method and optimize the scope accordingly; this can be a significant extension in widening the practical applicability of PPSMS. Our method provides the beginning of a process to build a tool-based approach for optimizing the platform scope; additional work must be done to automate it.
Finally, further application in industrial case studies with real cost estimates would further confirm the usefulness of this method and expedite its acceptance.

Acknowledgments

This research was supported by the MSIP (Ministry of Science, ICT & Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2014H0301-14-1020) supervised by the NIPA (National IT Industry Promotion Agency). This research was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF2013R1A1A3005162). The authors would also like to gratefully acknowledge Prof. Danhyung Lee for his invaluable discussions and support during this research.

Appendix. Statistical analysis for H4

• Here we use a 4-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree), where 1 and 2 represent responses against (−) the guidance of our methodology, while 3 and 4 represent responses in favor (+).
• We build a table with the responses received, in terms of positive or negative responses (Table A.1).
• Then, we build a table with the responses expected in an ideal scenario. In this case we expect positive responses for Question
Table A.1
An example for encoding the obtained answers for Q7 and Q8.

| Results table | Respondent 1 | Respondent 2 | ... | Respondent n |
| Question 7 | + | + | ... | + |
| Question 8 | − | + | ... | − |
Table A.2
An example for building the expected table.

| Expected table | Respondent 1 | Respondent 2 | ... | Respondent n |
| Question 7 | + | + | ... | + |
| Question 8 | − | − | ... | − |
7 and negative ones for Question 8. Then for all respondents we expect these results (Table A.2).
• Then, we calculate the number of possible tables we could get if the responses were random. Taking into account that there are 2 options per cell (+ or −) and that we have n respondents, this number is 2^(2n).
• Then, from the 2^(2n) possible tables, we calculate the number of tables that are similar to our “Results Table” in terms of deviation from the “Expected Table”. Let us call this number m.
- If the “Results Table” is exactly equal to the “Expected Table”, there is no deviation, so only one of the 2^(2n) possible tables fits the criterion (m = 1).
- If the “Results Table” differs from the “Expected Table” in just one cell, the deviation is one cell, so 2n of the 2^(2n) possible tables fit the criterion (m = 2n).
- For more than one cell, we should follow this process . . .
• Then, we calculate the probability p of obtaining a table similar to our “Results Table” from the 2^(2n) possible tables, that is, p = m/2^(2n) (according to the obtained data, p = 0.000079).
• If the value of p is less than 0.01/0.05/0.1, we can reject the null hypothesis (H0) in favor of the alternative hypothesis (Ha) at the 0.01/0.05/0.1 significance level.

References

Ahmed, F., Capretz, L.F., 2007. Managing the business of software product line: an empirical investigation of key business factors. Inf. Softw. Technol. 49 (2), 194–208.
Ali, M.S., Babar, M.A., Schmid, K., 2009. A comparative survey of economic models for software product lines. In: SEAA'09. 35th Euromicro Conference on Software Engineering and Advanced Applications, 2009. IEEE.
Angelis, L., Stamelos, I., 2000. A simulation tool for efficient analogy based cost estimation. Empir. Softw. Eng. 5 (1), 35–68.
Ardis, M.A., Weiss, D.M., 1997. Defining families: the commonality analysis (tutorial).
In: Proceedings of the 19th International Conference on Software Engineering. ACM.
Arros, M., Neto, A.D., 2011. Threats to Validity in Search-based Software Engineering Empirical Studies. UNIRIO.
In, H.P., Baik, J., Kim, S., Yang, Y., Boehm, B., 2006. A quality-based cost estimation model for the product line life cycle. CACM 49 (12), 85–88.
Bandinelli, S., Mendieta, G.S., 2000. Domain potential analysis: calling the attention on business issues of product-lines. In: Software Architectures for Product Families. Springer, pp. 76–81.
Berger, C., Blauth, R., Boger, D., Bolster, C., Burchill, G., DuMouchel, W., Walden, D., 1993. Kano's methods for understanding customer-defined quality. Center Qual. Manage. J. 2 (4), 3–36.
Berger, C., Rendel, H., Rumpe, B., 2010. Measuring the ability to form a product line for a set of existing products. In: 4th Int. VAMOS.
Böckle, G., Clement, P., McGregor, J.D., Muthing, D., Schmid, K., 2004a. A cost model for software product lines. In: Software Product-Family Engineering. Springer, pp. 310–316.
Böckle, G., Clement, P., McGregor, J.D., Muthing, D., Schmid, K., 2004b. Calculating ROI for software product lines. Softw. IEEE 21 (3), 23–31.
Boehm, B., Brown, A.W., Mandachy, R., Yang, Y., 2004. A software product line life cycle cost estimation model. In: Proceedings, 2004 International Symposium on Empirical Software Engineering, ISESE'04. IEEE.
Capra, E., Francalanci, C., 2006. Cost implications of software commonality and reuse. In: Third International Conference on Information Technology: New Generations, ITNG, 2006. IEEE.
Coplien, J., Hoffman, D., Weiss, D., 1998. Commonality and variability in software engineering. Softw. IEEE 15 (6), 37–45.
de Moraes, M.B.S., de Almeida, E.S., Romero, S., 2009. A systematic review on software product lines scoping. In: Proceedings of the 6th Experimental Software Engineering Latin American Workshop (ESELAW 2009).
DeBaud, J.-M., Schmid, K., 1999. A systematic approach to derive the scope of software product lines. In: Proceedings of the 21st International Conference on Software Engineering (ICSE'99). ACM Press, pp. 34–43.
Douta, G., Talib, H., Nierstrasz, O., Langlotz, F., Comp, A.S., 2009. A new approach to commonality and variability analysis with applications in computer assisted orthopaedic surgery. Inf. Softw. Technol. 51 (2), 448–459.
Fritsch, C., Hahn, R., 2004. Product line potential analysis. In: Software Product Lines. Springer, pp. 228–237.
Geppert, B., Weiss, D.M., 2003. Goal-oriented assessment of product-line domains. In: Proceedings of the Ninth International Software Metrics Symposium, 2003. IEEE.
Gillain, J., Faulkner, S., Heymans, P., Jureta, I., Snoeck, M., 2012. Product portfolio scope optimization based on features and goals. In: Proceedings of the 16th International Software Product Line Conference, vol. 1. ACM.
Guo, J., White, J., Wang, G., Li, J., Wang, Y., 2011. A genetic algorithm for optimized feature selection with resource constraints in software product lines. J. Syst. Softw. 84 (12), 2208–2221.
Harman, M., 2007. The current state and future of search based software engineering. In: 2007 Future of Software Engineering. IEEE Computer Society.
Helferich, A., Herzwurm, G., Schockert, S., 2005. QFD-PPP: product line portfolio planning using quality function deployment. In: Software Product Lines. Springer, pp. 162–173.
Helferich, A., Schmid, K., Herzwurm, G., 2006. Product management for software product lines: an unsolved problem? CACM 49 (12), 66–67.
Her, J.S., Kim, J.H., Oh, S.H., Rhew, S.Y., Kim, S.D., 2007. A framework for evaluating reusability of core asset in product line engineering. Inf. Softw. Technol. 49 (7), 740–760.
http://www.sei.cmu.edu/cmmi/tools/dev/index.cfm (accessed 12.08.13).
Inoki, M., Fukazawa, Y., 2007. Core asset scoping method: product line positioning based on levels of coverage and consistency.
In: First International Workshop on Management and Economics of Software Product Lines (MESPUL 07).
John, I., Eisenbarth, M., 2009. A decade of scoping: a survey. In: Proceedings of the 13th International Software Product Line Conference, Carnegie Mellon University.
John, I., Knodel, J., Lehner, T., Muthig, D., 2006. A practical guide to product line scoping. In: 10th International Software Product Line Conference, 2006. IEEE.
Johnson, D.S., 2002. A theoretician's guide to the experimental analysis of algorithms. In: Data Structures, Near Neighbor Searches, and Methodology: Proceedings of the 5th & 6th DIMACS Implementation Challenges. American Mathematical Society, Providence, pp. 215–250.
Johnson, M., Kirchain, R., 2010. Developing and assessing commonality metrics for product families: a process-based cost-modeling approach. IEEE Trans. Eng. Manage. 57 (4), 634–648.
Jung, H., 1998. Optimizing value and cost in requirements analysis. IEEE Softw. 15 (July/August (4)), 74–78.
Kang, K.C., Donohoe, P., Koh, E., Lee, J., Lee, K., 2002. Using a marketing and product plan as a key driver for product line asset development. In: Software Product Lines. Springer, pp. 366–382.
Kano, N., Seraku, N., Takahashi, F., Tsuji, S., 1984. Attractive quality and must-be quality. J. Jpn. Soc. Qual. Control 14 (2), 39–48.
Karlsson, J., Ryan, K., 1997. A cost-value approach for prioritizing requirements. IEEE Softw. (September/October), 67–74.
Kazman, R., Asundi, J., Klein, M., 2002. Making architecture design decisions: an economic approach. Technical Report CMU/SEI-2002-TR-035. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
Kishi, T., Noda, N., Katayama, T., 2002. A method for product line scoping based on a decision-making framework. In: Software Product Lines. Springer, pp. 348–365.
Lee, J., Kang, S., Kim, C., 2009. Software architecture evaluation methods based on cost benefit analysis and quantitative decision making. Empir. Softw. Eng. 14 (4), 453–475.
Lee, J., Kang, S., Lee, D., 2010. A comparison of software product line scoping approaches. Int. J. Softw. Eng. Knowl. Eng. 20 (5), 637–663.
Matzler, K., Hinterhuber, H.H., 1998. How to make product development projects more successful by integrating Kano's model of customer satisfaction into quality function deployment. Technovation 18 (1), 25–38.
Moorthy, K.S., 1984. Market segmentation, self-selection, and product line design. Market. Sci. 3 (4), 288–307.
Müller, J., 2011. Value-based portfolio optimization for software product lines. In: 15th International Software Product Line Conference (SPLC), 2011. IEEE.
Nam, D., Park, C.H., 2000. Multiobjective simulated annealing: a comparative study to evolutionary algorithms. Int. J. Fuzzy Syst. 2 (2), 87–97.
Nilsson-Witell, L., Fundin, A., 2005. Dynamics of service attributes: a test of Kano's theory of attractive quality. Int. J. Serv. Ind. Manage. 16 (2), 152–168.
Noor, M.A., Rabiser, R., Grünbacher, P., 2008. Agile product line planning: a collaborative approach and a case study. J. Syst. Softw. 81 (6), 868–882.
Northrop, L., Clements, P.C., 2007. A Framework for Software Product Line Practice, Version 5.0. SEI, 2007. http://www.sei.cmu.edu/productlines/index.html
Oates, B.J., 2005. Researching Information Systems and Computing. Sage.
Park, S.Y., Kim, S.D., 2005. A systematic method for scoping core assets in product line engineering. In: 12th Asia-Pacific Software Engineering Conference, APSEC'05, 2005. IEEE.
Peña, J., Hinchey, M.G., Ruiz-Cortés, A., Trinidad, P., 2007. Building the core architecture of a NASA multiagent system product line. In: Agent-Oriented Software Engineering VII. Springer, pp. 208–224.
Peterson, D.R., 2004. Economics of software product lines. In: Software Product-Family Engineering. Springer, pp. 381–402.
Pohl, K., Böckle, G., Van Der Linden, F., 2005. Software Product Line Engineering: Foundations, Principles and Techniques. Springer.
Poulin, J.S., 1997. Measuring Software Reuse: Principles, Practices, and Economic Models.
Poulin, J.S., Caruso, J.M., 1993. A reuse metrics and return on investment model. In: Proceedings of Advances in Software Reuse: Selected Papers from the Second International Workshop on Software Reusability, 1993. IEEE.
Riebisch, M., Streitferdt, D., Philippow, I., 2001. Feature scoping for product lines. Proc. PLEES 3.
Rogers, E.M., 2010. Diffusion of Innovations. Free Press, New York.
Rommes, E., 2003. A people oriented approach to product line scoping. PLEES 3, 23–27.
Schmid, K., 2002. A comprehensive product line scoping approach and its validation. In: Proceedings of the 24th International Conference on Software Engineering. ACM.
Schmid, K., Schank, M., 2000. PuLSE-BEAT – a decision support tool for scoping product lines. In: Software Architectures for Product Families. Springer, pp. 65–75.
Sireli, Y., Kauffmann, P., Ozan, E., 2007. Integration of Kano's model into QFD for multiple product design. IEEE Trans. Eng. Manage. 54 (2), 380–390.
Sundararajan, A., 2004. Nonlinear pricing of information goods. Manage. Sci. 50 (12), 1660–1673.
Taborda, L.J., 2004. Generalized release planning for product line architectures. In: Software Product Lines. Springer, pp. 238–254.
Ullah, M.I., Ruhe, G., Garousi, V., 2010a. Decision support for moving from a single product to a product portfolio in evolving software systems. J. Syst. Softw. 83 (12), 2496–2512.
Ullah, M.I., Wei, X., Nault, B.R., Ruhe, G., 2010b. Balancing business and technical objectives for supporting software product evolution. Int. J. Softw. Eng. Comput. 2, 75–93.
van der Linden, F.J., Schmid, K., Rommes, E., 2007. Software Product Lines in Action. Springer.
Van Laarhoven, P.J., Aarts, E.H., 1987. Simulated Annealing. Springer.
White, J., Dougherty, B., Schmidt, D.C., 2009. Selecting highly optimal architectural feature sets with Filtered Cartesian Flattening. J. Syst. Softw. 82 (8), 1268–1284.
Withey, J., 1996. Investment Analysis of Software Assets for Product Lines. DTIC Document.
Xu, Q., Jiao, R.J., Yang, X., Helander, M., Khalid, H.M., Opperud, A., 2009. An analytical Kano model for customer need analysis. Design Stud. 30 (1), 87–110.
Zelkowitz, M.V., Wallace, D.R., 1998. Experimental models for validating technology. Computer 31 (5), 23–31.
Hamad I. Alsawalqah is an assistant professor at the University of Jordan, Jordan. He received his BS in Computer Information Systems from the University of Jordan in spring 2004 and his MA in Management Information Systems from Amman Arab University for Graduate Studies in spring 2006. In summer 2008, he received his second master's degree, in Software Engineering, from the Korea Advanced Institute of Science and Technology (KAIST). In 2014, he received his PhD in Information and Communications Engineering from KAIST. His research interests are in software engineering and product management.

Sungwon Kang received his BA from Seoul National University, Korea, in 1982, and his MS and PhD in computer science from the University of Iowa, USA, in 1989 and 1992, respectively. From 1993 until October 2001, he was a principal researcher at the Korea Telecom R&D Group; he then joined KAIST, where he is currently an associate professor. Since 2003, he has been an adjunct faculty member of Carnegie Mellon University, USA, for the Master of Software Engineering Program. He is the editor of the Korean Journal of Software Engineering Society. His current research areas include software architecture, software modeling and analysis, software testing, and formal methods.

Jihyun Lee is an assistant professor at Daejeon University, Korea. She received her BS in Information and Communications Engineering and her MS and PhD in Computer Science from Chonbuk National University. She worked as a research assistant professor at KAIST from 2005 to 2011. Her research interests include software product lines, software testing, software architecture, and business process maturity.