Expert Systems With Applications 123 (2019) 345–356
System portfolio selection with decision-making preference baseline value for system of systems construction
Yajie Dou (a,*), Zhexuan Zhou (a), Xiangqian Xu (a), Yanjing Lu (b)
(a) College of System Engineering, National University of Defense Technology, Changsha, Hunan 410073, PR China
(b) The Institute of Logistic Science and Technology, 2 Feng Ti South Road, Beijing 100071, PR China
Article history: Received 14 September 2018; Revised 4 December 2018; Accepted 25 December 2018; Available online 6 January 2019.
Keywords: Weapon system portfolio selection (WSPS); MCDM; Preference baseline value; Multi-objective programming; Interval number.
Abstract: For multi-objective problems, most methods terminate once the non-dominated (Pareto) set is acquired, although redundant systems may still exist in it. The same situation arises in weapon system portfolio selection, which is typically a multi-objective problem. This paper employs a baseline-based method to decide whether to select a redundant system. First, the problem is analyzed in depth to demonstrate the connotation of the baseline, and the related concepts are defined. Then, based on the definitions of the redundant system, baseline system, and baseline value, a weapon system portfolio selection approach with regard to a baseline value is proposed. Starting from the two core parts of the approach, selection strategy analysis and weapon system ranking, the strategy analysis of weapon system selection under a single objective and under multiple objectives is performed. Subsequently, interval number theory is employed to extend the VIKOR method to the E-VIKOR method, with a linear programming model as the weighting method under uncertainty, to rank the candidate weapon systems. Finally, a case with three different ranking results of the candidate weapon systems under different weighting schemes is studied. Based on the ranking results, the baseline value, and the selection strategy, the weapon system portfolio refinement is performed.
1. Introduction
Joint operations and system confrontation have become important development trends of modern warfare. "Joining with each other" is the ultimate target for the development of future weapons. The traditional weapon development mode is department centered, implying that weapons are planned independently to satisfy the individual requirements of different departments. These scattered planning processes result in incompatibility in the joining of weapons. Therefore, a portfolio selection mode is proposed in weapon planning to realize military demands through the combination of capabilities from multiple weapons, instead of through a single complex and expensive weapon. The portfolio selection theory, which aims to realize the maximum benefit through the combination of multiple projects, items, systems, or other entities, was proposed by Markowitz (1952). The theory initiated a new era in the study of finance with mathematical tools, and has subsequently been applied to selection problems in various fields, such as research and development (R&D)
project selection (Chien, Huynh, & Huynh, 2018; Ismail & Pham, 2017; Tassak, Kamdem, Fono, & Andjiga, 2017), capital budgeting in healthcare (Angelis, Kanavos, & Montibeller, 2016), and defense acquisition (Dou, Zhang, Ge, Jiang, & Chen, 2015). It facilitates decision-making when a subset of alternatives is expected to be chosen to achieve overall benefits under certain constraints, such as limited resources and multiple, incommensurate, or even conflicting objectives (Ge, Hipel, Fang, Yang, & Chen, 2014). Similarly, decision makers in military organizations face selection problems as well, for example in resource allocation, R&D, or system acquisition (Mohagheghi, Mousavi, Vahdani, & Shahriari, 2017; Zhou, Xu, Dou, Tan, & Jiang, 2018). In regard to military system portfolio selection, the most typically investigated analysis techniques include multiple objective analysis (Ma et al., 2016; Bramerdorfer & Zăvoianu, 2017), multiple criteria analysis (Sabbaghian et al., 2016; Degolia & Loubeau, 2017), value analysis (Poklepović, Marasović, Aljinović, & Zdravka, 2012), expert judgments (Zhang & Yang, 2017), etc. Other existing methods, such as the Monte Carlo technique (Nalpas, Simar, & Vanhems, 2017), risk analysis (Wang, Li, & Watada, 2017), Pareto analysis (Doerner, Gutjahr, Hartl, Strauss, & Stummer, 2004), and resource-allocation techniques (Ko & Lin, 2008), are less frequently utilized because they overemphasize mathematical or quanti-
tative features, and rely heavily on the balance of constraints and value optimization.
It can be summarized from the literature that the current portfolio selection methods rely primarily on the multi-criterion decision-making (MCDM) model (Li & Liu, 2015), which is regarded as the most effective strategy to describe the portfolio value. In the MCDM methods, a decision-maker must evaluate a number of systems on multiple dimensions, and subsequently select a non-dominated portfolio set from the candidate system portfolios. Most studies terminate after acquiring the non-dominated portfolios, in which certain systems may be selected repeatedly in different portfolios. Therefore, some redundant systems (systems in portfolio A2 but not in portfolio A1, where A1 and A2 are two portfolios in the non-dominated set and A1 ⊂ A2) may exist in the non-dominated set. To further optimize the initial portfolio set, a decision-maker should decide whether to keep or remove those redundant systems to refine the non-dominated set. Hence, a baseline-value-based method is introduced to the system portfolio selection problem. A baseline value represents the threshold used to determine whether to maintain redundant systems in some portfolio schemes. By comparing the baseline value with the value of a redundant system, a decision-maker can reasonably choose to maintain or remove that redundant system. In regard to baseline-based decisions (Clemen & Smith, 2009a; Liesiö & Punkka, 2014), researchers have realized the importance of the baseline value in linear addition models, and preliminary studies regarding its importance have been performed. The existing research on baseline values is based on the assumption that the baseline is a known point value. However, owing to the incomplete information of the systems, baseline systems and baseline values are difficult to determine in practice, because a baseline value is linked to the values of the systems. Therefore, we herein propose a technique to handle the incomplete attribute information of systems to assist in determining baseline values.
The primary contribution of this paper is that the baseline idea is applied to the further optimization of the non-dominated portfolio set. The selection strategies for a single objective and for multiple objectives are discussed to verify the mechanism of judging whether a redundant system should be selected. Subsequently, the "VlseKriterijumska Optimizacija I Kompromisno Resenje" (VIKOR) method is extended to rank systems with incomplete information represented by intervals. Based on the system ranking results, a baseline value can be determined to re-optimize the initial portfolio set.
The paper is structured as follows. In Section 2, the weapon system portfolio selection problem is described and analyzed. Subsequently, Section 3 proposes the portfolio selection model based on the baseline system. In Section 4, an illustrative case of system portfolio selection is studied to examine the feasibility of the proposed model. Finally, Section 5 concludes the work of this study with a discussion.
2. Problem description
In recent years, Clemen and Smith have reported that the results of a linear addition model depend on a certain baseline value, that is, a model parameter related to the value of a weapon system option (Clemen & Smith, 2009b). The model proved that policymakers, from a strategic perspective, tend to add a weapon system with a higher risk and a poor cash flow rather than simply abandoning it.
Even if such a candidate weapon system does not exist, the baseline value helps the decision-maker to determine which weapon systems are valued higher than the baseline and are therefore worthy of choice. This section focuses on why baseline values are required, how the baseline values are defined, and how the baseline values are used for portfolio selection.
2.1. Analysis of vested portfolios
While selecting non-dominated solutions for the combined selection of weapon systems, an interesting phenomenon is found in the set of all non-dominated weapon system portfolios. 1) Although the non-dominated weapon system portfolios all eventually reach the decision goal, their internal structures are vastly different. 2) A weapon system portfolio can become a non-dominated solution without the contribution of some of its weapon systems to the value of the combined scheme. From the perspective of decision-making, some "optional or undifferentiated" weapon systems exist. For these two obvious phenomena, we control the portfolio maturity index determined by the objective factors. Thus, the following three questions arise:
Question 1: Are there weapon systems that are always selected, or never selected, under certain decision-making preferences? Undoubtedly, the selection of the optimal weapon system portfolio is closely related to the Pareto optimal analysis strategy designed by the decision makers. Nevertheless, "optional or undifferentiated" weapon systems are ubiquitous in the general selection results, and, from the perspective of practical application, they cause problems in the weapon system portfolios.
Question 2: How does a decision-maker set the critical point for selecting or not selecting a weapon system? From the perspective of value judgment or decision preference, is there a baseline value, or critical point, that separates the weapon systems into two types: those above the baseline, which will definitely be selected, and those below the baseline, which will not be selected? What should the relationship be between the baseline value and the values of the selected or unselected systems?
Question 3: Does a relationship exist between the critical point for selecting a weapon system and the composition of the final optimal portfolio? If such a critical point exists and its value is determined, then the possible size of the optimal portfolio is determined. A certain relationship exists between the critical point value and the structure of the optimal portfolio, which will affect the component structure, especially the number of components.
Hitherto, various studies have been performed on optimization problems under limited resource constraints, such as weapon system portfolio selection and resource allocation. The linear, combinatorial value model is perhaps the one most typically used for multi-objective scenarios. In this model, the value of the portfolio is the sum over all weapon systems included in the portfolio, and the combined value function is used to evaluate the overall performance of the integrated weapon systems. The combined value can be maximized by standard integer linear programming (ILP) under the resource constraints. The existing literature (Liesiö & Punkka, 2014) has recognized the importance of baseline values in linear addition model applications. However, the methods to normalize and analyze baseline values are highly limited. First, the existing methods defining the baseline values assume that not adding a weapon system is at least better than adding a weapon system that exhibits the worst performance on every attribute. Next, these methods require policymakers to accurately define a hypothetical weapon system for which, from the decision makers' perspective, "selecting it" and "not selecting it" are the same.
By studying the relationship between the baseline value and the portfolio scale, the decision-maker can adopt the baseline value to obtain combinatorial selection results that satisfy the decision objectives. Nevertheless, (1) the existing baseline definition does not consider the component-level value or how the "selected or not selected" results are affected, and in particular lacks an analysis of decision-making under different objective situations; (2) in the existing research, the baseline value is considered as a point value
whose information is actually difficult to access in real decision-making. Therefore, it is crucial to extend the baseline value research into the field of uncertain information: define, evaluate, and set the baseline values, and subsequently establish the baseline system. Once the baseline value and the baseline system are determined, they can assist in deciding which weapon systems are selected. Meanwhile, the problem of ranking weapon systems under uncertain information is given full consideration. In this case, the weapon system portfolio can be selected based on an interval baseline value. Finally, the model of combined selection with baseline values under uncertain information is verified by an example.
2.2. Basic concepts and related definitions
Assume that there are n attributes, denoted by $X_i$, for each weapon system candidate $j$, $j = 1, \ldots, m$. Further, the most and least preferred attribute performances are $x_i^*$ and $x_i^0$, respectively. The performance of the $j$th alternative weapon system is represented by the vector $x_j = (x_{1j}, \ldots, x_{nj}) \in X_1 \times \ldots \times X_n$, based on the multi-criteria decision model. The additive value model of a weapon system is defined as follows:
$v(x_j) = \sum_{i=1}^{n} w_i v_i(x_{ij})$   (1)

where $w_i > 0$ is the weight of the attribute and $v_i(\cdot)$ is the value function of the specified attribute. According to the standardized guidelines, $v_i(x_i^*) = 1$, $v_i(x_i^0) = 0$, $\sum_{i=1}^{n} w_i = 1$, and the value function of the selection model satisfies $v(x_j) \in [0, 1]$. Let $z \in \{0, 1\}^m$ denote the selection result; then $z_j = 1$ if and only if the $j$th weapon system is selected.
$V(z) = \sum_{j=1}^{m} \left[ z_j v(x_j) + (1 - z_j) v_B \right] = \sum_{j=1}^{m} z_j \left[ v(x_j) - v_B \right] + m v_B$   (2)
where $v_B$ is the baseline value. Under the budget $B$, the maximization of Eq. (2) can be solved by ILP:
$m v_B + \max_{z \in \{0,1\}^m} \left\{ \sum_{j=1}^{m} z_j \left[ v(x_j) - v_B \right] \;\Big|\; \sum_{j=1}^{m} c_j z_j \le B \right\}$   (3)
where $c_j > 0$ is the cost of the $j$th weapon system. In general, many additional linear constraints appear in Eq. (3); the problem can nevertheless still be modeled in this form. To ensure the generality of the results, $Z_F \subseteq \{0, 1\}^m$ is used to represent the set of feasible weapon system selections satisfying the particular constraints, regardless of their specific form. Golabi, Kirkwood, and Sicherman (1981) were aware of this problem and defined an additional hypothetical system with attribute performance $x_B \in X_1 \times \ldots \times X_n$. Subsequently, the baseline value can be obtained from the value function as follows:
$v_B = v(x_B) \in [0, 1]$   (4)
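To make Eqs. (1)–(3) concrete, the following minimal Python sketch evaluates the additive system values, the baseline-augmented portfolio value, and the budget-constrained selection. All weights, attribute values, costs, and the baseline value are hypothetical, and brute-force enumeration over the (here very small) selection space stands in for an ILP solver.

```python
from itertools import product

# Illustrative data (hypothetical): attribute weights w_i, per-system attribute
# values v_i(x_ij) already normalized to [0, 1], costs c_j, budget B, baseline v_B.
w = [0.5, 0.3, 0.2]                       # attribute weights, summing to 1 (Eq. 1)
v_attr = [                                # v_i(x_ij) for m = 4 candidate systems
    [0.9, 0.4, 0.7],
    [0.3, 0.8, 0.6],
    [0.6, 0.6, 0.2],
    [0.2, 0.3, 0.9],
]
cost = [40, 25, 30, 20]
budget = 70
v_B = 0.45                                # baseline value

def system_value(x):
    """Additive value of one system, Eq. (1): v(x_j) = sum_i w_i * v_i(x_ij)."""
    return sum(wi * vi for wi, vi in zip(w, x))

v = [system_value(x) for x in v_attr]
m = len(v)

def portfolio_value(z):
    """Baseline-augmented portfolio value, Eq. (2)."""
    return sum(zj * (vj - v_B) for zj, vj in zip(z, v)) + m * v_B

# Eq. (3): maximize over feasible selections; enumeration replaces the ILP
# solver because m is tiny in this toy example.
best = max(
    (z for z in product((0, 1), repeat=m)
     if sum(zj * cj for zj, cj in zip(z, cost)) <= budget),
    key=portfolio_value,
)
print(best, round(portfolio_value(best), 3))
```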
As shown, the baseline value is the value of a certain attribute performance of a weapon system that the decision-maker has identified. In this study, the baseline value is determined by the value of the weapon system. The necessary definitions are as follows:
Definition 1. Redundant system. In the obtained non-dominated portfolio set, one portfolio scheme A1 may be included in another portfolio scheme A2. The difference systems A2 − A1 between the two portfolio schemes are regarded as redundant systems.
Definition 2. Baseline system. The baseline system refers to the weapon system representing a decision-maker's threshold preference: including (or not including)
a baseline system in a portfolio will not influence the decision makers' evaluation of the portfolio.
Definition 3. Baseline value. The baseline value is the value of a system that renders no difference regardless of whether it is selected. That is, the baseline value is the value of the baseline system.
Therefore, from the definitions above, the redundant systems can be a single system or a collection of systems. The baseline system is a redundant system, or a system from a redundant system set, chosen by a decision-maker. In this study, the baseline system is considered as a single system. The baseline value is related closely to the preferences of the decision makers. It determines whether a redundant system should be selected. The baseline value resembles a "sieve" for the non-dominated solutions. In other words, the decision makers prefer to choose a weapon system that ranks above the baseline value; those below the baseline value are discarded. That is to say, the non-dominated portfolio set is further optimized to achieve a refinement with respect to the redundant systems.
2.3. Process of weapon system portfolio selection with baseline value
According to the definitions above, the baseline values are set by the decision makers based on their preferences. Once the baseline value is obtained, all redundant systems in the non-inferior (non-dominated) portfolios can be classified based on this value. The redundant systems whose values are higher than the baseline are recommended in the decision-making, and those whose values are equal to the baseline value are considered optional. The redundant systems that are less valuable than the baseline system are discarded. Therefore, obtaining the baseline value is the key to the "refined" selection among the acquired non-dominated weapon system portfolios. As shown in Fig. 1, the process of weapon system portfolio selection with the baseline value can be described in the following four steps:
Step 1: Rank the weapon systems. For the multi-criteria model, the computational complexity of the multi-criteria evaluation can be summarized into several cases: (1) the criterion weights are certain and the criterion values are point values; (2) the criterion weights are uncertain and the criterion values are point values; (3) the criterion weights are certain and the criterion values are interval numbers; (4) the criterion weights are uncertain and the criterion values are interval numbers. The first three cases can be regarded as specializations of the last case. Therefore, the last case is adopted as the generalized form in this study.
Step 2: Select the baseline system from the redundant systems. After the non-dominated portfolios are obtained, the decision makers can identify numerous redundant systems through the analysis. However, not all of the redundant systems are baseline systems that can reflect the preference of the decision makers. Therefore, the decision makers are required to identify a baseline system from the redundant systems.
Step 3: Analyze the redundant system selection strategies for different scenarios. By generalizing the multi-criteria additive value model, we can derive the value relationship between the "redundant part" of a weapon system portfolio and the smaller portfolio with fewer component elements under the single-objective situation, which is then extended to the single-objective weapon system ranking problem. For the multi-objective situation, the same value analysis is performed on both.
The redundant system selection strategies for the single-objective and multi-objective scenarios are designed separately.
Step 4: "Refine" the portfolio selection. Given the weapon system ranking, the redundant systems in the non-dominated solution set, and the baseline value obtained under a certain situation, the selection strategy from Step 3 is applied to refine and screen the non-dominated (Pareto) portfolios again.
Fig. 1. Schematic diagram of weapon systems portfolio selection with a preferred baseline value.
Through the selection process, it finally becomes clear which portfolios containing redundant systems need to be retained, and which need to be eliminated or tagged as optional. The portfolio selection method with the preferred baseline value is proposed in this study to handle the existence of "optional" portfolios in the non-dominated solutions and to refine the weapon system portfolios. In fact, this method can also be used to transform the existing dominated solutions: it can inform the decision-maker which new construction solutions "can be attempted" and which "must not be", thereby reducing the scope of reselection.
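The identification of redundant systems in Definition 1 can be sketched in a few lines; the non-dominated portfolios used below are hypothetical and stand in for the output of a preceding portfolio optimization.

```python
from itertools import permutations

# Hypothetical non-dominated portfolios, each a set of system labels.
non_dominated = [
    {"S20"},
    {"S15", "S20"},
    {"S14", "S15", "S20"},
    {"S8", "S14", "S20"},
]

# Definition 1: whenever A1 is strictly contained in A2, the systems in
# A2 - A1 are redundant with respect to that pair.
redundant = set()
for a1, a2 in permutations(non_dominated, 2):
    if a1 < a2:                     # strict subset test
        redundant |= (a2 - a1)

print(sorted(redundant))            # -> ['S14', 'S15', 'S8'] for this toy set
```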
3. A portfolio selection model for the baseline system
According to the introduction above, the baseline value determines whether a redundant system should remain in the Pareto solutions. Therefore, the selection strategies for single and multiple objectives are analyzed. In addition, the weapon system ranking is key to the baseline-based selection of weapon system portfolios. The E-VIKOR method is proposed to solve the ranking problem under uncertain criterion preferences.
3.1. Analysis of the selection strategy for a single objective
Assume that two weapon system portfolio schemes A1 and A2 appear in the non-dominated weapon system portfolios. If all systems in scheme A1 are included in scheme A2, it is difficult to determine which of A1 and A2 is better. In general, two portfolio schemes are considered: $A1 = \{(1), (2), \ldots, (k)\}$ and $A2 = \{(1), (2), \ldots, (k), \ldots, (l)\}$, with $(i) \in \{1, 2, \ldots, m\}$, $i = 1, 2, \ldots, l$. The values of portfolios A1 and A2 are as follows:

$v(A1) = \sum_{i=1}^{k} \frac{w^{(i)}}{\sum_{j=1}^{k} w^{(j)}} \, v(x^{(i)}) = \frac{1}{\sum_{j=1}^{k} w^{(j)}} \sum_{i=1}^{k} w^{(i)} v(x^{(i)})$   (5)

$v(A2) = \sum_{i=1}^{l} \frac{w^{(i)}}{\sum_{j=1}^{l} w^{(j)}} \, v(x^{(i)}) = \frac{1}{\sum_{j=1}^{l} w^{(j)}} \sum_{i=1}^{l} w^{(i)} v(x^{(i)})$   (6)

In particular, when $l = k + 1$, scheme A2 has one more weapon system, $(k + 1)$, than A1. If scheme A1 is better than scheme A2, then

$v(A1) - v(A2) = \frac{1}{\sum_{j=1}^{k} w^{(j)}} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \frac{1}{\sum_{j=1}^{k+1} w^{(j)}} \sum_{i=1}^{k+1} w^{(i)} v(x^{(i)}) = \frac{\sum_{j=1}^{k+1} w^{(j)} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \sum_{j=1}^{k} w^{(j)} \sum_{i=1}^{k+1} w^{(i)} v(x^{(i)})}{\sum_{j=1}^{k} w^{(j)} \cdot \sum_{j=1}^{k+1} w^{(j)}} > 0$   (7)

Further,

$\frac{\left(\sum_{j=1}^{k} w^{(j)} + w^{(k+1)}\right) \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \sum_{j=1}^{k} w^{(j)} \left( \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) + w^{(k+1)} v(x^{(k+1)}) \right)}{\sum_{j=1}^{k} w^{(j)} \cdot \sum_{j=1}^{k+1} w^{(j)}} = \frac{w^{(k+1)} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \sum_{j=1}^{k} w^{(j)} \, w^{(k+1)} v(x^{(k+1)})}{\sum_{j=1}^{k} w^{(j)} \cdot \sum_{j=1}^{k+1} w^{(j)}} > 0 \;\Leftrightarrow\; w^{(k+1)} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \sum_{j=1}^{k} w^{(j)} \, w^{(k+1)} v(x^{(k+1)}) > 0 \;\Leftrightarrow\; \sum_{i=1}^{k} \frac{w^{(i)}}{\sum_{j=1}^{k} w^{(j)}} v(x^{(i)}) > v(x^{(k+1)}) \;\Leftrightarrow\; v(A1) > v(x^{(k+1)})$   (8)
Therefore, when the value of system $(k + 1)$ is lower than that of A1, A1 should maintain the original portfolio. When the value of system $(k + 1)$ is greater than that of A1, system $(k + 1)$ should be added to the original portfolio A1. When the value of system $(k + 1)$ equals that of A1, the portfolio schemes A1 and A2 are considered equivalent. When $l > k + 1$, to facilitate the presentation, the virtual portfolio $\bar{A}1$ is introduced, where $A2 = A1 \cup \bar{A}1$ and $\bar{A}1 = \{(k + 1), \ldots, (l)\}$. If portfolio scheme A1 is better than A2, the following inequality holds:
$v(A1) - v(A2) = \frac{1}{\sum_{j=1}^{k} w^{(j)}} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \frac{1}{\sum_{j=1}^{l} w^{(j)}} \sum_{i=1}^{l} w^{(i)} v(x^{(i)}) = \frac{\sum_{j=k+1}^{l} w^{(j)} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \sum_{j=1}^{k} w^{(j)} \sum_{i=k+1}^{l} w^{(i)} v(x^{(i)})}{\sum_{j=1}^{k} w^{(j)} \cdot \sum_{j=1}^{l} w^{(j)}} > 0$   (9)

Further, the following conclusion can be derived:

$\sum_{j=k+1}^{l} w^{(j)} \sum_{i=1}^{k} w^{(i)} v(x^{(i)}) - \sum_{j=1}^{k} w^{(j)} \sum_{i=k+1}^{l} w^{(i)} v(x^{(i)}) > 0 \;\Leftrightarrow\; \sum_{i=1}^{k} \frac{w^{(i)}}{\sum_{j=1}^{k} w^{(j)}} v(x^{(i)}) > \sum_{i=k+1}^{l} \frac{w^{(i)}}{\sum_{j=k+1}^{l} w^{(j)}} v(x^{(i)}) \;\Leftrightarrow\; v(A1) > v(\bar{A}1)$   (10)
Therefore, when the value of the virtual portfolio $\bar{A}1$ is lower than that of A1, the original portfolio scheme A1 should remain unchanged. When the value of $\bar{A}1$ is equal to that of A1, the portfolio schemes A1 and A2 are equivalent. When the value of $\bar{A}1$ is greater than that of A1, the portfolio scheme A2 is better than A1.
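The single-objective rule derived in Eqs. (5)–(10) can be sketched in a few lines of Python; the per-system weights and values below are hypothetical, and the weighted-average portfolio value follows Eqs. (5)–(6).

```python
# Weighted-average portfolio value as in Eqs. (5)-(6):
# v(A) = sum_{i in A} w_i * v(x_i) / sum_{i in A} w_i
def portfolio_value(systems, w, v):
    """systems: indices in the portfolio; w, v: per-system weight and value."""
    total_w = sum(w[i] for i in systems)
    return sum(w[i] * v[i] for i in systems) / total_w

# Hypothetical weights and values for five candidate systems.
w = [0.25, 0.20, 0.15, 0.25, 0.15]
v = [0.80, 0.55, 0.60, 0.40, 0.70]

A1 = [0, 1, 2]          # scheme A1
A1_bar = [3, 4]         # the extra systems, so A2 = A1 + A1_bar

# Selection rule from Eqs. (7)-(10): keep A1 unchanged iff v(A1) > v(A1_bar);
# prefer A2 iff v(A1_bar) > v(A1); the schemes are equivalent when equal.
vA1, vA1_bar = portfolio_value(A1, w, v), portfolio_value(A1_bar, w, v)
decision = "keep A1" if vA1 > vA1_bar else ("prefer A2" if vA1_bar > vA1 else "equivalent")
print(round(vA1, 3), round(vA1_bar, 3), decision)
```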
Fig. 2. VIKOR method of compromise.
3.2. Analysis of the selection strategy for multiple objectives
For multiple objectives, more than one evaluation criterion is concerned. Unlike the single-objective problem, the multi-objective selection strategy should consider a tradeoff among the different criteria to reach a balance. Regarding the m alternative systems, the values of system j on the two objectives are $v_1(x_j)$ and $v_2(x_j)$. Let $w^{(j)}$, $j = 1, 2, \ldots, m$ be the weights of the systems and A a system portfolio scheme. The values of A on objectives 1 and 2 are expressed by $v_1(A)$ and $v_2(A)$, respectively. Assume that two portfolio schemes A1 and A2 are in the non-dominated solutions and A1 is included in A2, i.e., any system in A1 can be found in A2, but not vice versa (the number of systems in A2 is larger than that in A1). It is more complicated to compare A1 and A2 in the multi-objective case because more situations must be considered. First, the selection strategy is analyzed for each objective. As before, let $\bar{A}1$ be the virtual portfolio such that $A2 = A1 \cup \bar{A}1$. Based on the idea in Section 3.1, the following conclusions are obtained:
For objective 1:
(1) If $v_1(A1) > v_1(\bar{A}1)$, then $v_1(A1) > v_1(\bar{A}1) \Leftrightarrow v_1(A1) > v_1(A2)$. This means that when the value of the virtual portfolio $\bar{A}1$ is lower than that of A1, A1 is better than A2.
(2) If $v_1(A1) = v_1(\bar{A}1)$, then $v_1(A1) = v_1(\bar{A}1) \Leftrightarrow v_1(A1) = v_1(A2)$. This means that when the value of $\bar{A}1$ is equal to that of A1, A1 is equivalent to A2.
(3) If $v_1(A1) < v_1(\bar{A}1)$, then $v_1(A1) < v_1(\bar{A}1) \Leftrightarrow v_1(A1) < v_1(A2)$. This means that when the value of the virtual portfolio $\bar{A}1$ is greater than that of A1, A2 is better than A1.
For objective 2:
(1) If $v_2(A1) > v_2(\bar{A}1)$, then $v_2(A1) > v_2(\bar{A}1) \Leftrightarrow v_2(A1) > v_2(A2)$. This means that when the value of the virtual portfolio $\bar{A}1$ is lower than that of A1, A1 is better than A2.
(2) If $v_2(A1) = v_2(\bar{A}1)$, then $v_2(A1) = v_2(\bar{A}1) \Leftrightarrow v_2(A1) = v_2(A2)$. This means that when the value of the virtual portfolio $\bar{A}1$ is equal to that of A1, A1 is equivalent to A2.
(3) If $v_2(A1) < v_2(\bar{A}1)$, then $v_2(A1) < v_2(\bar{A}1) \Leftrightarrow v_2(A1) < v_2(A2)$. This means that when the value of the virtual portfolio $\bar{A}1$ is greater than that of A1, A2 is better than A1.
To summarize, we consider the two objectives together.
(1) If $v_1(A1) > v_1(\bar{A}1)$ and $v_2(A1) \ge v_2(\bar{A}1)$, then portfolio A1 is superior to A2.
(2) If $v_1(A1) < v_1(\bar{A}1)$ and $v_2(A1) \le v_2(\bar{A}1)$, then portfolio A1 is dominated by A2.
(3) If $v_1(A1) > v_1(\bar{A}1)$ and $v_2(A1) \le v_2(\bar{A}1)$, or $v_1(A1) < v_1(\bar{A}1)$ and $v_2(A1) \ge v_2(\bar{A}1)$, then A1 is superior to A2 on objective 1 but dominated by A2 on objective 2, or A1 is dominated by A2 on objective 1 but superior to A2 on objective 2. In this case, it is difficult to decide which portfolio is superior.
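The case analysis above can be condensed into a small helper that compares A1 with the virtual portfolio Ā1 objective by objective; the two-objective values used in the example call are hypothetical.

```python
# Pairwise check of A1 versus its complement A1_bar on several objectives,
# following the case analysis in Section 3.2: A1 is preferred only if it is at
# least as good on every objective and strictly better on one; otherwise the
# comparison is either reversed or inconclusive.
def compare_on_objectives(vals_A1, vals_A1_bar):
    """vals_*: list of portfolio values, one entry per objective."""
    better = sum(a > b for a, b in zip(vals_A1, vals_A1_bar))
    worse = sum(a < b for a, b in zip(vals_A1, vals_A1_bar))
    if worse == 0 and better > 0:
        return "A1 preferred (keep the smaller portfolio)"
    if better == 0 and worse > 0:
        return "A2 preferred (add the extra systems)"
    if better == 0 and worse == 0:
        return "A1 and A2 equivalent"
    return "inconclusive: A1 wins on some objectives, loses on others"

# Hypothetical two-objective values of A1 and of the virtual portfolio A1_bar.
print(compare_on_objectives([0.67, 0.58], [0.51, 0.62]))
```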
3.3. Weapon system ranking based on interval number comparison
In prior studies, the TOPSIS method has been widely adopted for multi-criterion decision problems. As a classical MCDM method, TOPSIS uses a number of experts' opinions to evaluate the criteria scores given by an AHP matrix (Opricovic & Tzeng, 2007). The primary idea of TOPSIS is that a scheme is better if it is closer to the positive ideal solution (PIS) and further from the negative ideal solution (NIS) (Opricovic & Tzeng, 2004). However, one of the salient drawbacks of TOPSIS is that it focuses only on the distances from the scheme point to the PIS and the NIS without considering the relative importance of these distances. Hence, we employ another method, VIKOR, which was proposed for multi-criteria optimization and has been used widely.
3.3.1. Basic ideas of VIKOR
VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) is a compromise decision method proposed by Opricovic and Tzeng. It is derived from the Lp-metric of compromise programming and is characterized by maximizing the "group benefit" and minimizing the "individual regret", such that the compromise can ultimately be accepted by the decision-maker. The basic idea is as follows: first, define the ideal solution and the negative ideal solution; subsequently, the evaluation objects are ordered by comparing their distances to the ideal solution. The VIKOR method yields the compromise solution closest to the ideal solution, which is obtained through mutual concessions among the attributes. In 2007, the VIKOR method was further extended and compared with outranking methods. Consider a two-criterion problem as an example to illustrate the compromise solution of the VIKOR method (Opricovic & Tzeng, 2007), as shown in Fig. 2. In Fig. 2, $f_1^*$ and $f_2^*$ denote the ideal values of the two criteria, so that $F^*$ is the ideal solution. The compromise solution $F^c$ is the closest to the ideal solution $F^*$ among the feasible solutions; it results from the mutual compromise between the two criteria, and the corresponding concessions are $f_1^* - f_1^c$ and $f_2^* - f_2^c$.
Step 1: Calculate the ideal solution $f_i^*$ and the negative ideal solution $f_i^-$ for each criterion:
$f_i^* = \left[ \left( \max_j f_{ij} \mid i \in I_1 \right), \left( \min_j f_{ij} \mid i \in I_2 \right) \right]$   (11)
$f_i^- = \left[ \left( \min_j f_{ij} \mid i \in I_1 \right), \left( \max_j f_{ij} \mid i \in I_2 \right) \right]$   (12)
where $f_{ij}$ represents the evaluation value of the $j$th scheme on the $i$th criterion, $I_1$ represents the set of benefit criteria, and $I_2$ represents the set of cost criteria.
Step 2: Calculate the "group benefit" value $S_j$ and the "individual regret" value $R_j$ for each option:
$S_j = \sum_{i=1}^{n} \omega_i \, (f_i^* - f_{ij}) / (f_i^* - f_i^-)$   (13)
$R_j = \max_i \left[ \omega_i \, (f_i^* - f_{ij}) / (f_i^* - f_i^-) \right]$   (14)

where $\omega_i$ indicates the weight of the $i$th criterion; the smaller the $S_j$, the larger is the group benefit, and the smaller the $R_j$, the smaller is the individual regret.
Step 3: Calculate the benefit ratio value $Q_j$ generated by each scheme:

$Q_j = v (S_j - S^-)/(S^* - S^-) + (1 - v)(R_j - R^-)/(R^* - R^-)$   (15)

where $S^- = \min_j S_j$, $R^- = \min_j R_j$, $S^* = \max_j S_j$, and $R^* = \max_j R_j$. Here $v$ is the weight of the largest group utility, indicating the attitude of the decision makers: $v > 0.5$ shows that most decision makers prefer the scheme, $v = 0.5$ means the decision makers exhibit a consistent attitude, and $v < 0.5$ means the majority of decision makers exhibit a negative attitude.
Step 4: According to $Q_j$, $R_j$, and $S_j$, the alternatives can be sorted (the smaller the better).
Step 5: When the following two conditions are satisfied, the scheme with the smaller Q is better.
Condition 1: $Q(a'') - Q(a') \ge 1/(J - 1)$, where $a''$ is the second scheme in the ranking by Q, $a'$ is the optimal scheme in the ranking, and $J$ is the number of alternatives.
Condition 2: Acceptable decision confidence. The S value of the first scheme must be better than that of the second; further, the R value of the first scheme must be better than that of the second. When several schemes exist, the first, second, third, fourth, and subsequent schemes must be compared to satisfy Condition 2.
Criteria: If the first and second schemes in the ranking satisfy both Condition 1 and Condition 2, the first scheme is accepted as the optimal scheme; if they satisfy only Condition 2, the first and second schemes are accepted as the optimal schemes simultaneously.
3.3.2. Design of the E-VIKOR ranking algorithm
In regard to weapon systems, owing to incomplete and fuzzy information, uncertainties should be considered in the ranking process. Therefore, the interval probability method is embedded into the VIKOR method to form an extended VIKOR (E-VIKOR). First, assume that a rating matrix exists for the multiple criteria, shown as follows:

        C_1                    C_2                    ...   C_n
S_1   [f_{11}^L, f_{11}^U]   [f_{12}^L, f_{12}^U]   ...   [f_{1n}^L, f_{1n}^U]
S_2   [f_{21}^L, f_{21}^U]   [f_{22}^L, f_{22}^U]   ...   [f_{2n}^L, f_{2n}^U]
...   ...                    ...                    ...   ...
S_m   [f_{m1}^L, f_{m1}^U]   [f_{m2}^L, f_{m2}^U]   ...   [f_{mn}^L, f_{mn}^U]   (17)

$W = [w_1, w_2, \ldots, w_n]$   (18)

where $S_1, S_2, \ldots, S_m$ are the available systems for the decision makers and $C_1, C_2, \ldots, C_n$ are the criteria that measure the performance of all the systems. $f_{ij}$ is the rating of system $S_i$ on criterion $C_j$, with lower and upper limits $f_{ij}^L$ and $f_{ij}^U$, and $w_j$ is the weight of criterion $C_j$. In this study, the proposed E-VIKOR algorithm includes the following steps (Zhang & Wei, 2013):
Step 1: Determine the PIS and NIS.

$S^* = \{f_1^*, \ldots, f_n^*\} = \{(\max_i f_{ij}^U \mid j \in I) \text{ or } (\min_i f_{ij}^L \mid j \in J)\}, \quad j = 1, 2, \ldots, n$   (19)

$S^- = \{f_1^-, \ldots, f_n^-\} = \{(\min_i f_{ij}^L \mid j \in I) \text{ or } (\max_i f_{ij}^U \mid j \in J)\}, \quad j = 1, 2, \ldots, n$   (20)

The criteria can be divided into two categories, benefit type and cost type, represented by I and J, respectively. Obviously, $S^*$ is the PIS and $S^-$ is the NIS.
Step 2: Calculate the intervals $[R_i^L, R_i^U]$ and $[x_i^L, x_i^U]$, $i = 1, 2, \ldots, m$:
$R_i^L = \max_j \left\{ w_j \frac{f_j^* - f_{ij}^U}{f_j^* - f_j^-} \;\Big|\; j \in I, \;\; w_j \frac{f_{ij}^L - f_j^*}{f_j^- - f_j^*} \;\Big|\; j \in J \right\}, \quad i = 1, \ldots, m$   (21)

$R_i^U = \max_j \left\{ w_j \frac{f_j^* - f_{ij}^L}{f_j^* - f_j^-} \;\Big|\; j \in I, \;\; w_j \frac{f_{ij}^U - f_j^*}{f_j^- - f_j^*} \;\Big|\; j \in J \right\}, \quad i = 1, \ldots, m$   (22)

$x_i^L = \sum_{j \in I} w_j \frac{f_j^* - f_{ij}^U}{f_j^* - f_j^-} + \sum_{j \in J} w_j \frac{f_{ij}^L - f_j^*}{f_j^- - f_j^*}, \quad i = 1, \ldots, m$   (23)

$x_i^U = \sum_{j \in I} w_j \frac{f_j^* - f_{ij}^L}{f_j^* - f_j^-} + \sum_{j \in J} w_j \frac{f_{ij}^U - f_j^*}{f_j^- - f_j^*}, \quad i = 1, \ldots, m$   (24)

Step 3: Calculate the interval $Q_i = [Q_i^L, Q_i^U]$:

$Q_i^L = \lambda \frac{x_i^L - x^*}{x^- - x^*} + (1 - \lambda) \frac{R_i^L - R^*}{R^- - R^*}$   (25)

$Q_i^U = \lambda \frac{x_i^U - x^*}{x^- - x^*} + (1 - \lambda) \frac{R_i^U - R^*}{R^- - R^*}$   (26)

where $x^* = \min_i x_i^L$, $x^- = \max_i x_i^U$, $R^* = \min_i R_i^L$, and $R^- = \max_i R_i^U$.
Typically, it is recommended to use $\lambda = 0.5$, which represents the compromise strategy of "the majority of criteria".
Step 4: Select the optimal candidate with the smallest Q value. A new method for comparing interval numbers is applied in this study (Sayadi, Heydari, & Shahanaghi, 2009). Let $Q_i = [Q_i^L, Q_i^U]$ and $Q_t = [Q_t^L, Q_t^U]$ be two interval numbers, the smaller of which must be selected. When $Q_i^L = Q_i^U$ and $Q_t^L = Q_t^U$, the interval numbers $Q_i$ and $Q_t$ are definite real numbers. Subsequently, we have
$p(Q_i \ge Q_t) = \begin{cases} 1 & \text{if } Q_i > Q_t \\ 1/2 & \text{if } Q_i = Q_t \\ 0 & \text{if } Q_i < Q_t \end{cases}$   (27)

When $Q_i^U = Q_i^L = Q_i$ and $Q_t^U \ne Q_t^L$,

$p(Q_i \ge Q_t) = \begin{cases} 1 & \text{if } Q_i > Q_t^U \\ \dfrac{Q_i - Q_t^L}{Q_t^U - Q_t^L} & \text{if } Q_t^L \le Q_i \le Q_t^U \\ 0 & \text{if } Q_i < Q_t^L \end{cases}$   (28)

When $Q_i^U \ne Q_i^L$ and $Q_t^U = Q_t^L = Q_t$, $p(Q_i \ge Q_t)$ can be represented as:

$p(Q_i \ge Q_t) = \begin{cases} 1 & \text{if } Q_i^L > Q_t \\ \dfrac{Q_i^U - Q_t}{Q_i^U - Q_i^L} & \text{if } Q_i^L \le Q_t \le Q_i^U \\ 0 & \text{if } Q_i^U < Q_t \end{cases}$   (29)

In general, the most typical case is that $Q_i^U \ne Q_i^L$ and $Q_t^U \ne Q_t^L$. The two intervals $Q_i$ and $Q_t$ are shown in Fig. 3 (Liu, 2009). The shaded rectangle $s$ is divided into two parts, denoted by $s'$ and $s''$, by the line $y = x$. Let $Q_i = [Q_i^L, Q_i^U]$ and $Q_t = [Q_t^L, Q_t^U]$ with $Q_i^U \ne Q_i^L$ and $Q_t^U \ne Q_t^L$; subsequently, the probability of $Q_i \ge Q_t$ can be given as follows:

$p(Q_i \ge Q_t) = \dfrac{s'}{s}$   (30)

where $s = (Q_i^U - Q_i^L)(Q_t^U - Q_t^L)$. The probability of $Q_t \ge Q_i$ may be given by the following formula:

$p(Q_t \ge Q_i) = \dfrac{s''}{s}$   (31)

Fig. 3. VIKOR method of compromise.

Step 5: Based on all the pairwise probability degrees of the $Q_i$, the candidates are ranked: a candidate is ranked ahead of another when the corresponding degree of possibility is greater than 0.5.
3.3.3. Criteria weights evaluation analysis
Whether in a weapon system evaluation process or in a weapon system portfolio evaluation process, the criterion weights are critical for deciding the value of a system or a portfolio. However, when asking decision makers for criterion weights, it is difficult to determine the accurate values of the weights owing to limited knowledge or insufficient information. Therefore, a weighting model handling incomplete information is necessary. To extract the specific rules or knowledge from a decision-maker's preferences, a preference planning method is introduced to obtain the weight information. The method maps the initial preferences of the experts into interval values or linear constraints, and analyzes them to obtain the feasible weight parameter space according to the decision makers' preferences. For example, it is difficult for a decision-maker to provide accurate weight information such as $w_1 = 0.5$ and $w_2 = 0.2$; however, it is possible to express his preferences in the form of linear inequality constraints, e.g., $w_1 > 2 w_2$. Thus, in this section, the incomplete criterion weight information is considered as a feasible criterion weight set $S_w \subseteq S_w^0$. In general, the criterion weights can be characterized using the vector $w = (w_1, w_2, \ldots, w_n)^T$:

$S_w^0 = \left\{ w \in \mathbb{R}^n_+ \;\Big|\; w_i \ge 0, \; \sum_{i=1}^{n} w_i = 1 \right\}$   (32)

Park, Kim, and Wan (1996) proposed the basic forms of several incomplete weight information expressions, which can be used to portray the decision makers' preferences in this study: (1) weak preference relationship: $w_i \ge w_j$; (2) strict preference relationship: $w_i - w_j \ge \alpha_{ij}$; (3) multiple preference: $w_i \ge \alpha_{ij} w_j$; (4) interval preference relationship: $\alpha_i \le w_i \le \alpha_i + \varepsilon_i$; (5) difference of preferences: for $j \ne k \ne l$, $w_i - w_j \ge w_k - w_l$. For all $i, j$, the constants $\alpha_{ij}, \alpha_i, \varepsilon_i > 0$. The inequalities above comprise a convex set space that can collectively include all the preference information of the decision-maker (Salo & Hämäläinen, 2010), as shown in Fig. 4. Regardless of the number of linear inequalities in the weight information, the aggregated weight information space is always bounded by a convex polyhedron. Only when these weight information expressions are consistent with each other and no conflicting case exists can one obtain a non-empty convex feasible weight set (Vilkkumaa, Salo, & Liesiö, 2014). In this study, two extreme cases are considered. When $S_w = S_w^0$, the weight information is insufficient and the feasible weight space is maximal; in contrast, in the case of complete information, the $S_w^0$ space collapses to a point estimate of the criterion weight values. Therefore, the vertices of the convex weight information space can be described using a matrix:

$(w^1, w^2, \ldots, w^t) = \mathrm{ext}(\mathrm{conv}(S_w))$   (33)

$W_{\mathrm{ext}} = [w^1, w^2, \ldots, w^t] \in \mathbb{R}^{n \times t}_+$   (34)

Fig. 4. Criteria weight set space.
As described in Fig. 4, when the criteria weight information is completely unknown, the three vertices of the convex information space $S_w^0$ are (0,1,0), (1,0,0), and (0,0,1); when the criteria weights are constrained by $w_2 \le w_3 \le 3w_2$ and $2w_1 \le w_3 \le 4w_1$, the four vertices of the convex information space $S_w$ are (1/5, 2/5, 2/5), (1/9, 4/9, 4/9), (3/19, 4/19, 12/19), and (3/11, 2/11, 6/11).
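The extreme points used above can be recovered computationally. The sketch below enumerates the vertices of the feasible weight set of the Fig. 4 example (w2 ≤ w3 ≤ 3w2, 2w1 ≤ w3 ≤ 4w1, with the weights summing to one) by activating pairs of constraints and keeping the feasible intersections; it is a brute-force illustration rather than the vertex-enumeration routine used by the authors.

```python
import numpy as np
from itertools import combinations

# Inequality constraints A @ w <= b for the Fig. 4 example:
# w2 <= w3 <= 3*w2 and 2*w1 <= w3 <= 4*w1, together with w_i >= 0.
A = np.array([
    [0.0,  1.0, -1.0],   #  w2 - w3   <= 0
    [0.0, -3.0,  1.0],   #  w3 - 3*w2 <= 0
    [2.0,  0.0, -1.0],   # 2*w1 - w3  <= 0
    [-4.0, 0.0,  1.0],   #  w3 - 4*w1 <= 0
    [-1.0, 0.0,  0.0],   # -w1 <= 0
    [0.0, -1.0,  0.0],   # -w2 <= 0
    [0.0,  0.0, -1.0],   # -w3 <= 0
])
b = np.zeros(len(A))

# Vertices of the feasible weight set: pick two constraints to be active,
# combine them with the normalization sum(w) = 1, solve, and keep the
# solutions that satisfy all remaining inequalities.
vertices = []
for i, j in combinations(range(len(A)), 2):
    M = np.vstack([A[i], A[j], np.ones(3)])
    rhs = np.array([b[i], b[j], 1.0])
    try:
        w = np.linalg.solve(M, rhs)
    except np.linalg.LinAlgError:
        continue
    if np.all(A @ w <= b + 1e-9) and not any(np.allclose(w, v) for v in vertices):
        vertices.append(w)

for w in vertices:
    print(np.round(w, 4))
# The feasible vertices found this way include (1/5, 2/5, 2/5), (1/9, 4/9, 4/9),
# (3/19, 4/19, 12/19), and (3/11, 2/11, 6/11), as listed in the text.
```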
Table 1. Multi-criteria index values of the candidate systems (m = 20). Each row lists the interval values of one system in the following criterion order: detecting perceived ability (%), target positioning ability (%), command and decision ability (%), combat execution ability (%), target precision technology (ratio), electric drive technology (ratio), integrated information technology (ratio), automatic command technology (ratio), fire control system technology (ratio), comprehensive protection technology (ratio), technology to raise costs (million yuan), system integration cost (million yuan), system maintenance cost (million yuan).
System1:  [62,69], [80,90], [89,94], [53,59], [6,7], [5,9], [6,8], [1,7], [0,3], [4,5], [19,50], [10,18], [2,3]
System2:  [68,71], [79,96], [93,94], [62,71], [9,9], [4,6], [5,5], [5,9], [1,7], [3,4], [36,100], [2,9], [3,4]
System3:  [62,95], [84,99], [81,85], [55,70], [5,7], [5,9], [6,7], [4,9], [6,8], [5,5], [64,77], [3,13], [6,6]
System4:  [97,99], [79,88], [83,91], [85,100], [8,8], [6,8], [7,8], [5,7], [3,4], [4,7], [21,72], [5,11], [7,10]
System5:  [62,94], [86,89], [85,87], [55,90], [7,9], [7,9], [5,9], [4,5], [1,8], [4,5], [26,59], [9,20], [3,6]
System6:  [62,80], [77,79], [78,88], [78,82], [6,9], [5,9], [5,6], [0,6], [3,8], [2,4], [42,52], [16,18], [5,6]
System7:  [77,83], [91,94], [77,78], [75,76], [6,9], [5,8], [7,7], [1,1], [4,5], [1,6], [19,46], [11,16], [7,9]
System8:  [58,95], [89,93], [80,85], [58,82], [6,8], [4,5], [5,7], [6,9], [2,4], [2,5], [63,86], [5,9], [2,7]
System9:  [60,62], [75,81], [76,97], [66,80], [5,7], [6,8], [8,8], [0,8], [3,4], [2,4], [52,60], [2,4], [7,9]
System10: [56,79], [78,79], [86,94], [57,95], [5,6], [7,8], [4,8], [2,9], [8,9], [4,4], [24,94], [9,18], [1,2]
System11: [57,65], [77,90], [81,98], [93,97], [7,8], [6,9], [6,6], [0,5], [3,5], [0,3], [87,88], [6,9], [2,9]
System12: [69,88], [74,90], [82,83], [50,51], [7,9], [7,9], [7,8], [5,7], [2,7], [3,9], [62,75], [10,17], [9,10]
System13: [91,99], [72,80], [87,91], [84,98], [5,7], [5,9], [4,9], [1,2], [2,7], [2,4], [19,58], [7,20], [4,7]
System14: [70,92], [72,79], [83,84], [63,86], [6,9], [4,6], [5,9], [1,5], [2,7], [7,8], [60,66], [2,9], [4,7]
System15: [60,82], [92,95], [84,93], [73,98], [4,5], [6,8], [4,6], [3,6], [3,7], [4,5], [12,49], [1,14], [7,10]
System16: [50,88], [72,98], [87,88], [89,98], [8,9], [7,7], [6,8], [8,8], [2,8], [1,4], [75,78], [16,19], [8,9]
System17: [81,97], [80,83], [87,96], [70,99], [4,6], [5,7], [8,8], [5,5], [3,9], [4,8], [33,81], [14,16], [6,9]
System18: [78,89], [79,92], [78,89], [76,100], [6,8], [7,8], [6,6], [2,7], [1,2], [2,6], [19,60], [1,14], [5,9]
System19: [72,80], [94,95], [96,99], [79,83], [5,7], [7,7], [4,8], [4,8], [2,8], [2,6], [38,99], [8,10], [2,8]
System20: [76,80], [88,91], [83,93], [50,70], [5,6], [5,6], [4,6], [5,9], [3,8], [9,9], [29,75], [4,12], [6,7]
Table 2. Positive and negative ideal solutions (f_i^* / f_i^-) over all weapon system ratings.
Detecting perceived ability: 99% / 50%
Target positioning ability: 99% / 72%
Command and decision ability: 99% / 76%
Combat execution ability: 100% / 50%
Target precision technology: 9 / 4
Electric drive technology: 9 / 4
Integrated information technology: 9 / 4
Automatic command technology: 1 / 8
Fire control system technology: 2 / 8
Comprehensive protection technology: 3 / 9
Technology to raise costs: 46 / 87
System integration cost: 4 / 16
System maintenance cost: 2 / 9
4. Illustrative case study
4.1. Examples and data description
In this case, the candidate system set shown in Table 1 contains 20 weapon system candidates and 13 criteria, including four capability criteria, six technology criteria, and three cost criteria. The data were generated by the Monte Carlo method within certain value ranges. In detail, the criteria are detecting perceived ability, target positioning ability, command and decision ability, combat execution ability, target precision technology, electric drive technology, integrated information technology, automatic command technology, fire control system technology, comprehensive protection technology, technology to raise costs, system integration cost, and system maintenance cost. Among them, the first four capability criteria refer to key functions in the OODA cycle model, which describes the military decision-making process as a dynamic cycle consisting of four aspects: observe, orient, decide, and act. The technology criteria are technical embodiments of the four capabilities above. The cost criteria reflect the budget constraints in the technical field. Owing to uncertain information, the capability criterion values are determined within the interval [0%, 100%]. Meanwhile, the technical criterion values lie within [0, 9] according to technical maturity evaluation rules. The costs range within [0, 100] million yuan according to the published prices of some weapons. Using weapon system 3 in Table 1 as an example, its 13 criteria values are ([62%,95%], [84%,99%], [81%,85%], [55%,70%], [5,7], [5,9], [6,7], [4,9], [6,8], [5,5], [64,77], [3,13], [6,6]).
Assume that the weights of all criteria $C_i$, $i = 1, 2, \ldots, 13$ are $w_1, w_2, \ldots, w_{13}$, respectively, with $w_1 + w_2 + \cdots + w_{13} = 1$. Under the condition that the weight information is incomplete, it is only known that $w_1 \le w_2 \le w_3$ and $w_1 \ge 0.05$. Subsequently, according to the method in Section 3.3.3, three extreme weight sets can be calculated as follows:
Weight Scheme 1 = (0.0625, 0.0625, 0.0625, 0.0625, 0.0825, 0.0825, 0.0825, 0.0825, 0.0825, 0.0825, 0.0125, 0.1187, 0.1188).
Weight Scheme 2 = (0.0625, 0.0625, 0.0625, 0.0625, 0.0825, 0.0825, 0.0825, 0.0825, 0.0825, 0.0825, 0.0125, 0.0125, 0.2250).
Weight Scheme 3 = (0.0625, 0.0625, 0.0625, 0.0625, 0.0825, 0.0825, 0.0825, 0.0825, 0.0825, 0.0825, 0.0833, 0.0833, 0.0833).
Next, using the proposed E-VIKOR method, the PIS and NIS are found, as shown in Table 2. From the table, the highest detecting perceived ability is 99%, and the worst estimate among the 20 candidates is 50%. The target positioning ability values of the 20 candidate weapon systems are more concentrated: the optimal value is 99% and the worst value is 72%. The results for target precision technology, electric drive technology, and integrated information technology are the same across the 20 candidates. Subsequently, $[R_i^L, R_i^U]$ and $[x_i^L, x_i^U]$ were calculated; the specific values are shown in Table 3. $[R_i^L, R_i^U]$ and $[x_i^L, x_i^U]$ were then used to calculate $Q_i = [Q_i^L, Q_i^U]$ under the different weights. The results are shown in Table 4.
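As a spot check of Step 1 of the E-VIKOR method (Eqs. (19)–(20)), the following sketch computes the PIS and NIS entries for one benefit criterion, using the detecting perceived ability column of Table 1; the function and variable names are illustrative.

```python
# PIS and NIS for interval-valued criteria, following Eqs. (19)-(20):
# for a benefit criterion, f* is the largest upper bound and f- the smallest
# lower bound over all candidate systems (reversed for a cost criterion).
def pis_nis(intervals, benefit=True):
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    if benefit:
        return max(highs), min(lows)
    return min(lows), max(highs)

# Detecting perceived ability column of Table 1 (20 systems, in %).
detecting = [(62, 69), (68, 71), (62, 95), (97, 99), (62, 94), (62, 80),
             (77, 83), (58, 95), (60, 62), (56, 79), (57, 65), (69, 88),
             (91, 99), (70, 92), (60, 82), (50, 88), (81, 97), (78, 89),
             (72, 80), (76, 80)]

print(pis_nis(detecting, benefit=True))   # (99, 50), matching the text and Table 2
```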
Table 3. Intervals [x_i^L, x_i^U] and [R_i^L, R_i^U] under the different weighting schemes. The 12 rows of values below list, for systems S1–S20 from left to right, in order: x^L, x^U, R^L, and R^U under weighting scheme 1; then x^L, x^U, R^L, and R^U under weighting scheme 2; then x^L, x^U, R^L, and R^U under weighting scheme 3.
0.2337 0.2474 0.2986 0.2721 0.1496 0.3379 0.3078 0.2483 0.2371 0.3105 0.1174 0.3912 0.1431 0.2513 0.2940 0.3760 0.3915 0.1215 0.1770 0.4588
0.6026 0.5744 0.7519 0.5957 0.6847 0.7415 0.6349 0.6735 0.6074 0.7201 0.5926 0.7965 0.6933 0.7179 0.7660 0.7532 0.7770 0.6246 0.6817 0.8339
0.0583 0.0677 0.0604 0.0979 0.0499 0.1187 0.0841 0.0672 0.0804 0.0825 0.0485 0.1188 0.0439 0.0627 0.0907 0.1147 0.0946 0.0449 0.0333 0.0825
0.1383 0.0888 0.0943 0.1445 0.1547 0.1382 0.1261 0.0895 0.1233 0.1340 0.1365 0.1467 0.1537 0.0897 0.1462 0.1439 0.1330 0.1226 0.1177 0.0981
0.1821 0.2704 0.3695 0.3515 0.1083 0.2719 0.3231 0.2312 0.3323 0.246 0.0917 0.4499 0.1497 0.3061 0.4042 0.379 0.3759 0.1905 0.1355 0.5269
0.486 0.5531 0.7391 0.6683 0.603 0.6775 0.6408 0.7166 0.7177 0.6003 0.6718 0.8128 0.6436 0.761 0.8087 0.7409 0.7949 0.6445 0.7359 0.855
0.0516 0.0677 0.1144 0.1855 0.0329 0.0854 0.1593 0.0672 0.1524 0.0825 0.0485 0.225 0.0624 0.0649 0.1718 0.2173 0.1463 0.085 0.0333 0.1372
0.0738 0.0888 0.1388 0.2737 0.12 0.1262 0.2389 0.1696 0.2337 0.0977 0.2586 0.278 0.186 0.1699 0.277 0.2467 0.2521 0.2324 0.223 0.1859
0.1688 0.2346 0.3169 0.1978 0.0997 0.2827 0.2163 0.2786 0.2313 0.2646 0.187 0.3672 0.08 0.2737 0.2182 0.3596 0.3176 0.0709 0.1585 0.4084
0.5663 0.6451 0.7577 0.5786 0.6422 0.6912 0.5617 0.7037 0.5947 0.7638 0.61 0.7655 0.6392 0.7134 0.698 0.7274 0.7651 0.5823 0.7219 0.8328
0.0516 0.0677 0.0552 0.0687 0.035 0.0833 0.059 0.0672 0.0564 0.0825 0.0833 0.0833 0.0439 0.0627 0.0768 0.0825 0.0664 0.0419 0.0333 0.0825
0.097 0.1091 0.0943 0.1014 0.1085 0.097 0.0885 0.0892 0.0866 0.0984 0.0958 0.103 0.1078 0.0825 0.1026 0.101 0.0934 0.0861 0.1087 0.0897
Table 4. Q^L and Q^U values under the different weight schemes. The six rows of values below list, for System1–System20 from left to right, in order: Q^L and Q^U under weight scheme 1; Q^L and Q^U under weight scheme 2; Q^L and Q^U under weight scheme 3.
0.1842 0.2322 0.2379 0.374 0.0909 0.5058 0.342 0.2309 0.2775 0.3374 0.0624 0.5431 0.0615 0.2143 0.3596 0.5157 0.4436 0.0503 0.0416 0.4409
0.7711 0.5476 0.6941 0.7917 0.8959 0.8676 0.7434 0.6196 0.7129 0.8353 0.7567 0.9412 0.8978 0.6513 0.9178 0.8993 0.8712 0.722 0.7414 0.7671
0.0974 0.1879 0.3481 0.4815 0.0109 0.225 0.4095 0.1613 0.4012 0.2022 0.0317 0.6265 0.0982 0.2056 0.488 0.5643 0.4175 0.1709 0.0295 0.4977
0.3416 0.4163 0.6401 0.8689 0.5125 0.5739 0.7799 0.6881 0.8197 0.4652 0.8404 0.9723 0.6738 0.7179 0.9677 0.8613 0.9077 0.769 0.8097 0.8122
0.1852 0.3339 0.3056 0.3167 0.0302 0.4688 0.2648 0.3597 0.2577 0.4514 0.406 0.5243 0.0756 0.3266 0.3837 0.5138 0.3798 0.0567 0.0575 0.5458
0.7453 0.8768 0.8529 0.7819 0.8709 0.8268 0.6858 0.7839 0.6948 0.8837 0.7656 0.9151 0.8644 0.7459 0.8684 0.8771 0.8515 0.6834 0.924 0.8718
4.2. Ranking results in different weight schemes
According to the intervals $[R_i^L, R_i^U]$ and $[x_i^L, x_i^U]$ obtained under uncertainty, the pairwise comparison values between the systems under weight schemes 1, 2, and 3 were obtained, as shown in Tables 5–7, respectively. Based on these comparisons, the rankings of the weapon systems under the three weight schemes were derived, as shown in Table 8. The optimal weapon system, system 12, exhibited excellent stability, ranking first under all schemes. In addition, weapon system 18 ranked last in weight schemes 1 and 3, while weapon system 1 was the worst in weight scheme 2. The ranking of system 16 was also stable, i.e., second, third, and third in weight schemes 1, 2, and 3, respectively. Weapon systems 10 and 11 were affected significantly by the different weight schemes, and their rankings fluctuated. Thus, incomplete weight information can be captured well by the extreme points to assist experts in weapon system decision-making.
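A sketch of how such a ranking can be derived from interval Q values is given below. The p_geq function follows the geometric idea of Eqs. (27)–(31) for non-degenerate intervals (the share of the rectangle spanned by the two intervals that lies on the side Qi ≥ Qt); the four Q intervals are illustrative values rather than the paper's exact data, so the resulting order is not claimed to reproduce Tables 5–8.

```python
# Interval comparison of Steps 4-5: p(Qi >= Qt) is the fraction of the
# rectangle spanned by the two intervals lying on the side Qi >= Qt; systems
# are then ranked by how often their Q interval is "smaller" with probability
# degree above 0.5 (a smaller Q is better). One-sided degenerate intervals
# (Eqs. (28)-(29)) are omitted here for brevity.
def p_geq(Qi, Qt):
    (ai, bi), (at, bt) = Qi, Qt
    if ai == bi and at == bt:                       # point values, Eq. (27)
        return 1.0 if ai > at else 0.5 if ai == at else 0.0
    seg = lambda q: max(0.0, bi - max(ai, q))       # length of [ai, bi] above q
    pts = sorted({at, bt, min(max(ai, at), bt), min(max(bi, at), bt)})
    area = sum((y - x) * (seg(x) + seg(y)) / 2.0 for x, y in zip(pts, pts[1:]))
    return area / ((bi - ai) * (bt - at))

# Illustrative Q intervals for four candidate systems (hypothetical values).
Q = {"S1": (0.18, 0.77), "S2": (0.23, 0.55), "S3": (0.06, 0.94), "S4": (0.05, 0.54)}

# score = number of rivals whose Q exceeds this system's Q with degree > 0.5
score = {i: sum(p_geq(Q[t], Q[i]) > 0.5 for t in Q if t != i) for i in Q}
ranking = sorted(Q, key=lambda i: -score[i])
print(ranking)
```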
4.3. Selection analysis
The 27 weapon system portfolios listed in Table 9 are examples of non-dominated portfolios. Assume that the selected redundant weapon systems include the following: systems 2–8, system 14, system 15, and system 20. According to the weapon system rankings under the three weight schemes, system 7 is set as the baseline system. In the case of weight scheme 1, the weapon systems ranked below system 7 were systems 9, 5, 13, 1, 3, 14, 8, 11, 19, 2, and 18; the ones ranked higher than system 7 were systems 12, 16, 6, 17, 15, 20, 10, and 4. Thus, the redundant systems that must be removed were systems 5, 3, 14, 8, and 2. Further, the portfolios containing the removed redundant systems are discarded, for example P3, P5, and P13.
5. Discussion and conclusions
In multi-criterion portfolio optimization, a decision-maker must evaluate a number of projects on multiple dimensions and subsequently select the set of projects that optimizes the portfolio's overall value. It is believed that the standard multi-criterion decision-making methods may confuse practitioners when they are deciding whether to choose a redundant system. For example, if A1 and A2 are both portfolio schemes in a non-dominated set with A1 ⊂ A2 and A2 − A1 = S1, a decision-maker faces the problem of whether to choose S1. The baseline value represents a decision-maker's preference on the value of a portfolio. Herein, it is proven that a redundant system should be removed/maintained if its value is below/above the baseline value. Compared with other standard multi-criterion portfolio methods, the baseline-value-based portfolio selection method rescans the structure of the non-dominated set a second time. The effect of introducing a baseline value into portfolio selection on multiple criteria is that the "worth doing or not worth doing" redundant systems are identified and, subsequently, the system portfolios are refined. In conclusion, the multi-criteria decision model was applied to determine the baseline values reflecting the decision-maker's preference in the portfolio selection, and its performance was tested. First, the results of the selection of the weapon system portfolio of non-dominated solution presented by some of the system "op-
Table 5. Pairwise comparison values between the candidate systems under uncertainty with weighting scheme 1. In the 20 rows of values below, both rows and columns correspond to systems S1–S20 in order (top to bottom, left to right).
0.50 0.35 0.48 0.68 0.52 0.83 0.61 0.41 0.53 0.68 0.40 0.89 0.50 0.42 0.74 0.86 0.79 0.37 0.38 0.72
0.65 0.50 0.67 0.89 0.63 0.99 0.83 0.59 0.73 0.86 0.53 1.00 0.61 0.60 0.90 1.00 0.96 0.49 0.50 0.94
0.52 0.33 0.50 0.73 0.53 0.89 0.66 0.41 0.56 0.72 0.42 0.94 0.52 0.43 0.78 0.91 0.84 0.38 0.39 0.78
0.32 0.11 0.27 0.50 0.39 0.73 0.41 0.19 0.32 0.51 0.25 0.81 0.38 0.21 0.60 0.76 0.66 0.22 0.23 0.55
0.48 0.37 0.47 0.61 0.50 0.74 0.56 0.42 0.50 0.62 0.40 0.81 0.48 0.42 0.68 0.77 0.70 0.37 0.38 0.64
0.17 0.01 0.11 0.27 0.26 0.50 0.19 0.05 0.14 0.30 0.13 0.63 0.25 0.07 0.41 0.55 0.43 0.10 0.11 0.29
0.39 0.17 0.34 0.59 0.44 0.81 0.50 0.25 0.39 0.59 0.31 0.87 0.42 0.27 0.67 0.83 0.74 0.27 0.28 0.65
0.59 0.41 0.59 0.81 0.58 0.95 0.75 0.50 0.65 0.79 0.48 0.98 0.57 0.52 0.84 0.96 0.91 0.44 0.45 0.87
0.47 0.27 0.44 0.68 0.50 0.86 0.61 0.35 0.50 0.67 0.38 0.92 0.48 0.37 0.74 0.88 0.81 0.34 0.35 0.74
0.32 0.14 0.28 0.49 0.38 0.70 0.41 0.21 0.33 0.50 0.25 0.78 0.37 0.23 0.59 0.73 0.64 0.22 0.23 0.54
0.60 0.47 0.58 0.75 0.60 0.87 0.69 0.52 0.62 0.75 0.50 0.92 0.58 0.53 0.80 0.89 0.83 0.47 0.47 0.78
0.11 0.00 0.06 0.19 0.19 0.37 0.13 0.02 0.08 0.22 0.08 0.50 0.19 0.03 0.32 0.42 0.32 0.06 0.07 0.19
0.50 0.39 0.48 0.62 0.52 0.75 0.58 0.44 0.52 0.63 0.42 0.81 0.50 0.44 0.69 0.77 0.71 0.39 0.40 0.65
0.58 0.40 0.57 0.79 0.58 0.93 0.73 0.48 0.63 0.77 0.47 0.97 0.56 0.50 0.83 0.95 0.88 0.43 0.44 0.84
0.26 0.10 0.22 0.40 0.32 0.59 0.33 0.16 0.26 0.41 0.20 0.68 0.31 0.17 0.50 0.62 0.53 0.18 0.19 0.44
0.14 0.00 0.09 0.24 0.23 0.45 0.17 0.04 0.12 0.27 0.11 0.58 0.23 0.05 0.38 0.50 0.39 0.08 0.09 0.25
0.21 0.04 0.16 0.34 0.30 0.57 0.26 0.09 0.19 0.36 0.17 0.68 0.29 0.12 0.47 0.61 0.50 0.13 0.15 0.38
0.63 0.51 0.62 0.78 0.63 0.90 0.73 0.56 0.66 0.78 0.53 0.94 0.61 0.57 0.82 0.92 0.87 0.50 0.51 0.82
0.62 0.50 0.61 0.77 0.62 0.89 0.72 0.55 0.65 0.77 0.53 0.93 0.61 0.56 0.81 0.91 0.85 0.49 0.50 0.80
0.28 0.06 0.22 0.45 0.36 0.71 0.35 0.13 0.26 0.46 0.22 0.81 0.35 0.16 0.56 0.75 0.62 0.18 0.20 0.50
Table 6. Pairwise comparison values between the candidate systems under uncertainty with weighting scheme 2. In the 20 rows of values below, both rows and columns correspond to systems S1–S20 in order (top to bottom, left to right).
0.50 0.79 1.00 0.90 0.58 0.92 0.97 0.87 0.98 0.85 0.77 0.52 0.79 0.93 0.91 0.66 0.98 0.90 0.76 0.84
0.21 0.50 0.97 0.98 0.42 0.77 1.00 0.73 1.00 0.62 0.67 0.72 0.65 0.81 0.98 0.84 1.00 0.78 0.65 0.95
0.00 0.03 0.50 0.89 0.09 0.25 0.75 0.37 0.77 0.09 0.43 1.00 0.31 0.44 0.92 0.97 0.83 0.46 0.40 0.89
0.10 0.02 0.11 0.50 0.00 0.03 0.31 0.10 0.35 0.00 0.21 0.78 0.08 0.14 0.61 0.60 0.47 0.18 0.18 0.45
0.42 0.58 0.91 1.00 0.50 0.76 0.97 0.77 0.97 0.64 0.72 0.96 0.70 0.82 1.00 0.99 0.98 0.81 0.70 1.00
0.08 0.23 0.75 0.97 0.24 0.50 0.90 0.55 0.90 0.31 0.55 0.99 0.48 0.62 0.98 1.00 0.93 0.62 0.53 0.97
0.03 0.00 0.25 0.69 0.03 0.10 0.50 0.20 0.54 0.02 0.30 0.91 0.16 0.25 0.76 0.79 0.64 0.29 0.28 0.66
0.13 0.27 0.63 0.90 0.23 0.45 0.80 0.50 0.81 0.33 0.51 0.99 0.43 0.57 0.92 0.95 0.86 0.58 0.49 0.89
0.02 0.00 0.23 0.65 0.03 0.10 0.46 0.19 0.50 0.02 0.28 0.87 0.15 0.23 0.73 0.74 0.61 0.27 0.26 0.61
0.15 0.38 0.91 1.00 0.36 0.69 0.98 0.67 0.98 0.50 0.63 0.86 0.59 0.75 1.00 0.94 0.99 0.73 0.61 0.99
0.23 0.33 0.57 0.79 0.28 0.45 0.70 0.49 0.72 0.37 0.50 0.92 0.44 0.53 0.84 0.84 0.77 0.54 0.48 0.77
0.48 0.28 0.00 0.22 0.04 0.01 0.09 0.01 0.13 0.14 0.08 0.50 0.01 0.02 0.35 0.27 0.23 0.05 0.06 0.16
0.21 0.35 0.69 0.92 0.30 0.52 0.84 0.57 0.85 0.41 0.56 0.99 0.50 0.63 0.94 0.97 0.88 0.63 0.54 0.91
0.07 0.19 0.56 0.86 0.18 0.38 0.75 0.43 0.77 0.25 0.47 0.98 0.37 0.50 0.89 0.92 0.82 0.51 0.45 0.85
0.09 0.02 0.08 0.39 0.00 0.02 0.24 0.08 0.27 0.00 0.16 0.65 0.06 0.11 0.50 0.47 0.37 0.14 0.14 0.35
0.34 0.16 0.03 0.40 0.01 0.00 0.21 0.05 0.26 0.06 0.16 0.73 0.04 0.08 0.53 0.50 0.40 0.12 0.13 0.33
0.02 0.00 0.17 0.53 0.02 0.07 0.36 0.14 0.39 0.01 0.23 0.77 0.12 0.18 0.63 0.60 0.50 0.21 0.20 0.48
0.10 0.22 0.54 0.82 0.19 0.38 0.71 0.42 0.73 0.27 0.46 0.95 0.37 0.49 0.86 0.88 0.79 0.50 0.44 0.80
0.24 0.35 0.60 0.82 0.30 0.47 0.72 0.51 0.74 0.39 0.52 0.94 0.46 0.55 0.86 0.87 0.80 0.56 0.50 0.80
0.16 0.05 0.11 0.55 0.00 0.03 0.34 0.11 0.39 0.01 0.23 0.84 0.09 0.15 0.65 0.67 0.52 0.20 0.20 0.50
Table 7. Pairwise comparison values between the candidate systems under uncertainty with weighting scheme 3. In the 20 rows of values below, both rows and columns correspond to systems S1–S20 in order (top to bottom, left to right).
0.50 0.72 0.68 0.65 0.48 0.81 0.52 0.69 0.52 0.82 0.71 0.89 0.51 0.63 0.76 0.87 0.75 0.35 0.53 0.89
0.28 0.50 0.45 0.40 0.32 0.58 0.27 0.44 0.27 0.61 0.46 0.71 0.33 0.37 0.54 0.67 0.52 0.18 0.37 0.69
0.32 0.55 0.50 0.45 0.35 0.63 0.31 0.49 0.32 0.66 0.51 0.75 0.36 0.42 0.59 0.71 0.57 0.21 0.40 0.74
0.35 0.60 0.55 0.50 0.38 0.71 0.35 0.55 0.35 0.73 0.58 0.82 0.40 0.47 0.65 0.79 0.63 0.23 0.43 0.82
0.52 0.68 0.65 0.62 0.50 0.73 0.53 0.64 0.53 0.76 0.66 0.82 0.52 0.60 0.71 0.79 0.70 0.40 0.55 0.81
0.19 0.42 0.37 0.29 0.27 0.50 0.16 0.33 0.16 0.55 0.34 0.67 0.27 0.26 0.46 0.62 0.43 0.10 0.32 0.66
0.48 0.73 0.69 0.65 0.47 0.84 0.50 0.70 0.50 0.85 0.74 0.92 0.49 0.63 0.78 0.90 0.76 0.33 0.52 0.93
0.31 0.56 0.51 0.45 0.36 0.67 0.30 0.50 0.30 0.70 0.53 0.80 0.37 0.42 0.61 0.76 0.59 0.20 0.41 0.80
0.48 0.73 0.68 0.65 0.47 0.84 0.50 0.70 0.50 0.84 0.73 0.91 0.49 0.63 0.77 0.90 0.76 0.33 0.52 0.92
0.18 0.39 0.34 0.27 0.24 0.45 0.15 0.30 0.16 0.50 0.32 0.62 0.25 0.24 0.41 0.56 0.39 0.10 0.30 0.60
0.29 0.54 0.49 0.42 0.34 0.66 0.26 0.47 0.27 0.68 0.50 0.79 0.35 0.38 0.58 0.76 0.56 0.17 0.39 0.79
0.11 0.29 0.25 0.18 0.18 0.33 0.08 0.20 0.09 0.38 0.21 0.50 0.19 0.15 0.31 0.44 0.29 0.05 0.24 0.47
0.49 0.67 0.64 0.60 0.48 0.73 0.51 0.63 0.51 0.75 0.65 0.81 0.50 0.58 0.70 0.79 0.68 0.37 0.52 0.80
0.37 0.63 0.58 0.53 0.40 0.74 0.37 0.58 0.37 0.76 0.62 0.85 0.42 0.50 0.68 0.82 0.66 0.24 0.45 0.85
0.24 0.46 0.42 0.35 0.29 0.54 0.22 0.39 0.23 0.59 0.42 0.69 0.30 0.32 0.50 0.64 0.48 0.15 0.34 0.67
0.13 0.33 0.29 0.21 0.21 0.38 0.10 0.24 0.10 0.44 0.24 0.56 0.21 0.18 0.36 0.50 0.33 0.06 0.26 0.54
0.25 0.48 0.43 0.37 0.30 0.57 0.24 0.41 0.24 0.61 0.44 0.71 0.32 0.34 0.52 0.67 0.50 0.16 0.36 0.70
0.65 0.82 0.79 0.77 0.60 0.90 0.67 0.80 0.67 0.90 0.83 0.95 0.63 0.76 0.85 0.94 0.84 0.50 0.64 0.95
0.47 0.63 0.60 0.57 0.45 0.68 0.48 0.59 0.48 0.70 0.61 0.76 0.48 0.55 0.66 0.74 0.64 0.36 0.50 0.75
0.11 0.31 0.26 0.18 0.19 0.34 0.07 0.21 0.08 0.40 0.21 0.53 0.20 0.15 0.33 0.46 0.30 0.05 0.25 0.50
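The entries in these comparison tables behave like possibility degrees from interval-number comparison: every diagonal entry equals 0.50 and each pair of symmetric entries sums to 1. As a hedged illustration only, not necessarily the exact formulation used in the paper, the following Python sketch computes one widely used possibility-degree formula for two interval numbers; the function name and the example intervals are hypothetical.

```python
# Sketch of one common possibility-degree formula for comparing two interval
# numbers a = [a_lo, a_hi] and b = [b_lo, b_hi]; this is a standard construction
# from interval-number theory and may differ in detail from the paper's formula.

def possibility_degree(a, b):
    """Possibility that interval a is greater than or equal to interval b.
    Satisfies p(a >= b) + p(b >= a) == 1 and returns 0.5 when a == b."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0:                      # both intervals degenerate (crisp numbers)
        return 0.5 if a_lo == b_lo else float(a_lo > b_lo)
    return min(1.0, max(0.0, (a_hi - b_lo) / width))

# Hypothetical intervals, e.g. interval-valued scores of two candidate systems:
q1, q2 = (0.20, 0.40), (0.30, 0.60)
print(possibility_degree(q1, q2), possibility_degree(q2, q1))  # 0.2 and 0.8
```

Collecting such pairwise values for all candidate systems yields a matrix of exactly the shape shown in the comparison tables above, from which a ranking of the systems can be derived.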
Table 8 Ranking of the weapon systems under three weighting scenarios.
System     Weight1 Ranking  Weight2 Ranking  Weight3 Ranking
System1    13               20               18
System2    19               18               8
System3    14               9                10
System4    8                4                12
System5    11               19               19
System6    3                15               5
System7    9                8                16
System8    16               13               11
System9    10               7                15
System10   7                17               4
System11   17               12               9
System12   1                1                1
System13   12               16               17
System14   15               11               13
System15   5                2                6
System16   2                3                3
System17   4                5                7
System18   20               10               20
System19   18               14               14
System20   6                6                2
Table 9 Eliminating the redundant portfolios of weapon systems from the portfolio selection results.
Portfolio  System Group
P1         S20
P2         S15,S20
P3         S14,S20
P4         S14,S15,S20
P5         S8,S14,S20
P6         S8,S14,S15,S20
P7         S6,S8,S15,S20
P8         S6,S8,S14,S20
P9         S6,S8,S14,S15,S20
P10        S4,S6,S14,S18,S19,S20
P11        S4,S6,S7,S8,S14,S20
P12        S2,S4,S6,S7,S14,S19
P13        S2,S3,S4,S6,S7,S14
P14        S2,S3,S4,S6,S7,S14,S20
P15        S2,S3,S4,S6,S7,S14,S17
P16        S2,S3,S4,S5,S6,S14,S20
P17        S2,S3,S4,S5,S6,S14,S17
P18        S2,S3,S4,S5,S6,S14,S17,S20
P19        S2,S3,S4,S5,S6,S11,S20
P20        S2,S3,S4,S5,S6,S11,S14,S20
P21        S2,S3,S4,S5,S6,S11,S14,S17
P22        S2,S3,S4,S5,S6,S10
P23        S2,S3,S4,S5,S6,S10,S14
P24        S2,S3,S4,S5,S6,S9
P25        S2,S3,S4,S5,S6,S8,S20
P26        S2,S3,S4,S5,S6,S8,S14
P27        S2,S3,S4,S5,S6,S7
timal” phenomenon to perform the analysis, motivated us to focus on the reasons for the baseline study. Determining the baseline value is important for the portfolio selection; accordingly, the redundant system, baseline system, and baseline value were defined, and a weapon system selection method based on the baseline value was presented. The two key steps of the method are ranking the weapon systems and deciding whether to select the redundant system in different situations. The selection strategy analysis was performed separately for the single-objective and multi-objective situations. A criteria-weight analysis was implemented to handle the difficulty caused by information uncertainty when decision makers were asked to assign weights to express their preferences. The theory of interval number comparison was introduced, and the E-VIKOR method with weight-constrained linear programming was proposed to rank the weapon systems. Finally, the ranking of the weapon systems under three different weighting schemes was calculated, and the result was re-screened to obtain the "refined" portfolio selection. Although the baseline system definition and baseline value identification depend on Pareto set analysis and experts' judgment, the method proposed in this study offers several advantages. First, it emphasizes the innovative viewpoint that the redundant system determines whether a weapon system portfolio should be reserved or discarded, and it shows how to refine the selected portfolios with the baseline value. Second, both single-objective and multi-objective situations are considered when analyzing the effect of the baseline value in the portfolio refining procedure; whether a portfolio is reserved or discarded follows directly from the ranking position of its redundant system. Finally, the E-VIKOR model provides a comprehensive system ranking method from a multi-criteria decision-making perspective, which is significant and valuable in practical decision making.
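To make the refining step concrete, the following Python sketch encodes one simplified reading of the rule summarized above: a portfolio containing a redundant system is reserved only if that system ranks at least as well as an assumed baseline rank. The function name, the choice of redundant system, and the baseline rank are hypothetical illustrations; only the rankings themselves come from Table 8 (weighting scheme 1).

```python
# Illustrative sketch only: a simplified reading of the baseline-based refining step.
# The baseline rank and the redundant system chosen per portfolio are assumptions
# for demonstration; they are not values reported in the paper.

rankings = {            # Table 8, weighting scheme 1 (system -> rank, 1 = best)
    "S1": 13, "S2": 19, "S3": 14, "S4": 8, "S5": 11,
    "S6": 3, "S7": 9, "S8": 16, "S9": 10, "S10": 7,
    "S11": 17, "S12": 1, "S13": 12, "S14": 15, "S15": 5,
    "S16": 2, "S17": 4, "S18": 20, "S19": 18, "S20": 6,
}

def keep_portfolio(portfolio, redundant_system, baseline_rank, ranks):
    """Reserve the portfolio only if its redundant system ranks at least as well
    (numerically no larger) than the baseline rank; otherwise discard it."""
    if redundant_system not in portfolio:
        return True  # nothing to refine in this portfolio
    return ranks[redundant_system] <= baseline_rank

# Hypothetical example: portfolio P10 from Table 9, with S18 treated as the
# redundant system and a baseline rank of 10 (both assumed, for illustration).
p10 = ["S4", "S6", "S14", "S18", "S19", "S20"]
print(keep_portfolio(p10, "S18", baseline_rank=10, ranks=rankings))  # -> False
```

Swapping in the rankings of the other two weighting schemes from Table 8 shows how the decision maker's preference weights can change which portfolios survive the refining.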
Improvements can be made in future studies in the following ways. First, more attention should be given to applying the baseline value within the additive models used to define system value and portfolio value, and the computation model could be made more flexible and innovative. Second, we will investigate how different definitions of the baseline value influence the final system portfolio set. Finally, the set of refined system portfolios is obtained by ranking and comparing systems; thus, it is necessary to improve the quality and efficiency of the ranking method.

CRediT authorship contribution statement

Yajie Dou: Conceived and designed the experiments; analyzed the data; contributed reagents/materials/analysis tools; wrote the paper. Zhexuan Zhou: Conceived and designed the experiments. Xiangqian Xu: Performed the experiments; contributed reagents/materials/analysis tools. Yanjing Lu: Analyzed the data.

Acknowledgements
The work was supported by the National Key R&D Program of China under Grant SQ2017YFSF070185 and the National Natural Science Foundation of China under Grants 71690233 and 71571186. In addition, Yajie Dou wants to thank, in particular, his wife, Haoqi Zhao, for her love, support, and care for him and his two cute children over the past seven years; he owes her a proposal: Will you marry me?

Supplementary material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.eswa.2018.12.045.

References

Angelis, A., Kanavos, P., & Montibeller, G. (2016). Resource allocation and priority setting in health care: A multi-criteria decision analysis problem of value? Global Policy, 8(S2), 76–83.
Bramerdorfer, G., & Zăvoianu, A. C. (2017). Surrogate-based multi-objective optimization of electrical machine designs facilitating tolerance analysis. IEEE Transactions on Magnetics, 53(8), 1–11.
Chien, C. F., Huynh, N. T., & Huynh, N. T. (2018). An integrated approach for IC design R&D portfolio decision and project scheduling and a case study. IEEE Transactions on Semiconductor Manufacturing, 31(1), 76–86.
Clemen, R. T., & Smith, J. E. (2009a). On the choice of baselines in multiattribute portfolio analysis: A cautionary note. Decision Analysis, 6(4), 256–262.
Clemen, R. T., & Smith, J. E. (2009b). On the choice of baselines in multiattribute portfolio analysis: A cautionary note. INFORMS.
Degolia, J., & Loubeau, A. (2017). A multiple-criteria decision analysis to evaluate sonic boom noise metrics. Journal of the Acoustical Society of America, 141(5), 3624.
Doerner, K., Gutjahr, W. J., Hartl, R. F., Strauss, C., & Stummer, C. (2004). Pareto ant colony optimization: A metaheuristic approach to multiobjective portfolio selection. Annals of Operations Research, 131(1–4), 79–99.
Dou, Y., Zhang, P., Ge, B., Jiang, J., & Chen, Y. (2015). An integrated technology pushing and requirement pulling model for weapon system portfolio selection in defence acquisition and manufacturing. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 229(6), 1046–1067.
Ge, B., Hipel, K. W., Fang, L., Yang, K., & Chen, Y. (2014). An interactive portfolio decision analysis approach for system-of-systems architecting using the graph model for conflict resolution. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(10), 1328–1346.
Golabi, K., Kirkwood, C. W., & Sicherman, A. (1981). Selecting a portfolio of solar energy projects using multiattribute preference theory. Management Science, 27(2), 174–189.
Ismail, A., & Pham, H. (2017). Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix. Working paper or preprint. https://hal.archives-ouvertes.fr/hal-01385585.
Ko, P. C., & Lin, P. C. (2008). Resource allocation neural network in portfolio selection. Pergamon Press, Inc.
Li, J. J., & Liu, L. W. (2015). An MCDM model based on KL-AHP and TOPSIS and its application to weapon system evaluation. Atlantis Press.
Liesiö, J., & Punkka, A. (2014). Baseline value specification and sensitivity analysis in multiattribute project portfolio selection. European Journal of Operational Research, 237(3), 946–956.
Liu, F. (2009). Acceptable consistency analysis of interval reciprocal comparison matrices. Fuzzy Sets and Systems, 160(18), 2686–2700.
Ma, X., Liu, F., Qi, Y., Wang, X., Li, L., Jiao, L., et al. (2016). A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables. IEEE Transactions on Evolutionary Computation, 20(2), 275–298.
Markowitz, H. (1952). Portfolio selection. Journal of Finance, 7(1), 77–91.
Mohagheghi, V., Mousavi, S. M., Vahdani, B., & Shahriari, M. (2017). R&D project evaluation and project portfolio selection by a new interval type-2 fuzzy optimization approach. Neural Computing and Applications, 28(12), 3869–3888.
Nalpas, N., Simar, L., & Vanhems, A. (2017). Portfolio selection in a multi-moment setting: A simple Monte-Carlo-FDH algorithm. European Journal of Operational Research, 263(1), 308–320.
Opricovic, S., & Tzeng, G. H. (2004). Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. European Journal of Operational Research, 156(2), 445–455.
Opricovic, S., & Tzeng, G. H. (2007). Extended VIKOR method in comparison with outranking methods. European Journal of Operational Research, 178(2), 514–529.
Park, K. S., Kim, S. H., & Wan, C. Y. (1996). An extended model for establishing dominance in multiattribute decisionmaking. Journal of the Operational Research Society, 47(11), 1415–1420.
Poklepović, T., Marasović, B., & Aljinović, Z. (2012). Portfolio selection model based on technical, fundamental and market value analysis. In Proceedings of the European Conference on Operational Research EURO XXV - Book of Abstracts.
Sabbaghian, R. J., Zarghami, M., Nejadhashemi, A. P., Sharifi, M. B., Herman, M. R., & Daneshvar, F. (2016). Application of risk-based multiple criteria decision analysis for selection of the best agricultural scenario for effective watershed management. Journal of Environmental Management, 168, 260–272.
Salo, A., & Hämäläinen, R. P. (2010). Preference programming – Multicriteria weighting models under incomplete information. Springer Berlin Heidelberg.
Sayadi, M. K., Heydari, M., & Shahanaghi, K. (2009). Extension of VIKOR method for decision making problem with interval numbers. Applied Mathematical Modelling, 33(5), 2257–2262.
Tassak, C. D., Kamdem, J. S., Fono, L. A., & Andjiga, N. G. (2017). Characterization of order dominances on fuzzy variables for portfolio selection with fuzzy returns. Journal of the Operational Research Society, 68(12), 1491–1502.
Vilkkumaa, E., Salo, A., & Liesiö, J. (2014). Multicriteria portfolio modeling for the development of shared action agendas. Group Decision and Negotiation, 23(1), 49–70.
Wang, B., Li, Y., & Watada, J. (2017). Multi-period portfolio selection with dynamic risk/expected-return level under fuzzy random uncertainty. Elsevier Science Inc.
Zhang, N., & Wei, G. (2013). Extension of VIKOR method for decision making problem based on hesitant fuzzy set. Applied Mathematical Modelling, 37(7), 4938–4947.
Zhang, Y., & Yang, X. (2017). Online portfolio selection strategy based on combining experts' advice. Computational Economics, 50(1), 1–19.
Zhou, Z., Xu, X., Dou, Y., Tan, Y., & Jiang, J. (2018). System portfolio selection under hesitant fuzzy information. In International Conference on Group Decision and Negotiation (pp. 33–40).