Expert Systems With Applications 123 (2019) 1–17
A progressive sorting approach for multiple criteria decision aiding in the presence of non-monotonic preferences Mengzhuo Guo, Xiuwu Liao∗, Jiapeng Liu School of Management, The Key Lab of the Ministry of Education for Process Control and Efficiency Engineering, Xi’an Jiaotong University, Xi’an, 710049 Shaanxi, PR China
Article info

Article history: Received 17 July 2018; Revised 16 November 2018; Accepted 9 January 2019; Available online 9 January 2019.

Keywords: Multiple criteria decision making; Multiple criteria decision aiding; Multiple criteria sorting; Non-monotonic preference; Value function
Abstract

A new decision-aiding approach for multiple criteria sorting problems is proposed that accounts for non-monotonic relationships between preference and the evaluations of alternatives on specific criteria. The approach employs a value function as the preference model and requires the decision maker (DM) to provide assignment examples for a subset of reference alternatives as preference information. We assume that the marginal value function of a non-monotonic criterion is non-decreasing up to the criterion's most preferred level and non-increasing thereafter. For these non-monotonic criteria, the approach starts with linearly increasing and decreasing marginal value functions but then allows such functions to deviate from linearity, switching them to more complex ones. We develop several algorithms to help the DM resolve inconsistency in the assignment examples and assign non-reference alternatives. The algorithms not only incorporate the DM's evolving cognition of his or her preferences, but also take into account the trade-off between the capacity to satisfy incremental preference information and the complexity of the preference model. The DM is guided to evaluate the results at each iteration and then provides reactions for subsequent iterations, so that the proposed approach supports the DM in working out a satisfactory preference model. We demonstrate the applicability and validity of the proposed approach with an illustrative example and a numerical experiment. © 2019 Elsevier Ltd. All rights reserved.
1. Introduction

The multiple criteria sorting (MCS) problem involves assigning a finite set of alternatives evaluated on a family of criteria to classes that are given in a preference order. MCS has been widely applied in many fields, such as finance (Angilella & Mazzù, 2015; Zopounidis & Doumpos, 2001), industry (Brito, de Almeida, & Mota, 2010; Neves, Martins, Antunes, & Dias, 2008; Norese & Carbone, 2014), e-commerce (Del Vasto-Terrientes, Valls, Zielniewicz, & Borràs, 2016; Marin, Isern, Moreno, & Valls, 2013; Quijano-Sánchez, Díaz-Agudo, & Recio-García, 2014; Yuan, Cheng, Zhang, Liu, & Lu, 2015), and others (Corrente, Doumpos, Greco, Słowiński, & Zopounidis, 2017; Corrente, Greco, & Słowiński, 2016; Grigoroudis & Siskos, 2002; Kadziński & Ciomek, 2016; Kadziński, Ciomek, Rychły, & Słowiński, 2016; Kadziński & Słowiński, 2013; 2015; Liang, Liao, & Liu, 2017). There are four main types of approaches for addressing MCS problems in the literature: (1) methods based on outranking
∗ Corresponding author. E-mail addresses: [email protected] (M. Guo), [email protected] (X. Liao), [email protected] (J. Liu). https://doi.org/10.1016/j.eswa.2019.01.033
relationships (Almeida-Dias, Figueira, & Roy, 2010; 2012; Janssen & Nemery, 2013; Kadziński, Corrente, Greco, & Słowiński, 2014; Köksalan, Mousseau, Özpeynirci, & Özpeynirci, 2009); (2) methods motivated by value functions (Doumpos, Zanakis, & Zopounidis, 2001; Greco, Kadziński, & Słowiński, 2011; Greco, Mousseau, & Słowiński, 2010a; Kadziński, Ciomek, & Słowiński, 2015; Kadziński, Greco, & Słowiński, 2013; Zopounidis & Doumpos, 1998; 1999; 2001); (3) methods based on weighted Euclidean distance (Çelik, Karasakal, & İyigün, 2015; Chen, Hipel, & Kilgour, 2007); and (4) rule induction-oriented methods (Dembczyński, Greco, & Słowiński, 2009). In this paper, we focus on methods motivated by value functions. Methods that are based on value functions require the DM to provide preference information, which can be either direct or indirect. Direct preference information refers to the specification of the parameter values of the preference model, such as criteria weights. Indirect preference information, including assignment examples specifying a desired assignment for a corresponding reference alternative, requires less cognitive effort from the DM than the direct one (Corrente, Greco, Kadziński, & Słowiński, 2013). Indirect preference information is used in disaggregation-aggregation paradigms such as the UTADIS method, which selects a specific set of
parameters to work out an assignment recommendation (Doumpos et al., 2001). However, different sets of parameters can yield different assignments for a non-reference alternative, even though all of them represent the preference information provided by the DM. From this perspective, a recommendation is considered more robust if it is valid for all compatible values of the model parameters. To avoid selecting a set of parameters in an arbitrary way, Greco, Mousseau, and Słowiński (2008, 2010a) implemented the robust ordinal regression (ROR) technique to give a more robust recommendation. To select a representative value function, Greco et al. (2011) designed an approach to help the DM select an appropriate set of parameters according to multiple objectives. In addition to the methods based on a disaggregation-aggregation paradigm, Ulu and Köksalan (2001) and Köksalan and Ulu (2003) assumed that the DM has a linear value function, and they developed an interactive approach to sort alternatives. Köksalan and Özpeynirci (2009) provided a progressive sorting approach that places some reference alternatives into classes and then uses that information to categorize other alternatives. Ulu and Köksalan (2014) developed a heuristic sorting approach assuming that an increasing quasi-concave value function is consistent with the preference of the DM. In most of the studies mentioned above, the preferences on the criteria are assumed to be monotonic with respect to the evaluations of alternatives. This assumption is reasonable in many situations. However, in some real cases, the DM's marginal value functions are non-monotonic (Despotis & Zopounidis, 1995; Rezaei, 2018). For example, a doctor assigns a person to different health statuses according to his or her physical examination indicators, such as blood pressure, heart rate and blood sugar. One is in good health if his or her blood pressure, heart rate and blood sugar stay within specific intervals.
For instance, a person at rest with a heart rate of 70 beats per minute (bpm) is healthier than a person with a heart rate of 50 bpm or one with a heart rate of 100 bpm (Sobrie, Lazouni, Mahmoudi, Mousseau, & Pirlot, 2016). In finance, a firm can be sorted into different levels according to its financial indicators. According to Despotis and Zopounidis (1995), a large cash-to-total-assets ratio indicates that a firm lacks investment opportunities because its cash is not taken full advantage of, whereas a small value of this indicator implies that the firm spends too much on general expenses and does not have enough cash to invest in projects. In marketing, a good use of colors in a brand can signal quality, affect perception and contribute to brand cognition. However, the relationship between brand perception and a color attribute, for example the hue, is not necessarily monotonic; there is usually a range for each color that best highlights customers' perceptions (Ghaderi, Ruiz, & Agell, 2015). A few studies have considered non-monotonic marginal value functions; however, most of them focus on multiple criteria ranking problems. For example, Despotis and Zopounidis (1995) assumed the DM has quadratic-like marginal value functions for non-monotonic criteria. The approach requires the DM to specify the most preferred levels for non-monotonic criteria, and it develops a linear programming model to infer the marginal value functions. Derived from UTASTAR, a method called UTA-NM was proposed to admit any possible shape for non-monotonic marginal value functions (Kliegr, 2009). However, it introduces many auxiliary binary variables to define the monotonicity of the marginal value functions and thus requires substantial computational time even for a five-alternative experiment.
Eckhardt and Kliegr (2012) proposed a heuristic attribute preprocessing algorithm called local preference transformation, which allows the consideration of non-monotonic criteria with the UTA method. Doumpos (2012) proposed an approach considering a broader class of non-monotonic marginal value functions, which
can decrease only at the lower and/or the upper end of the criterion's scale. The introduction of binary variables that define the monotonicity of the marginal value functions over each subinterval leads to a non-linear integer programming problem that has been proved NP-hard; even on a small dataset, it may require considerable computational time. Ghaderi, Ruiz, and Agell (2017) utilized a linear fractional programming model to minimize the total variation in the slopes of the marginal value functions while maximizing the discriminatory power at successive characteristic points. It is challenging to develop a sorting model in the presence of non-monotonic preferences. As noted in Branke, Corrente, Greco, Słowiński, and Zielniewicz (2016), a noteworthy difficulty is how to reach a compromise between the complexity of the marginal value functions and the capacity to restore the preference information provided by the DM. The existing MCS methods aim at obtaining marginal value functions that restore the assignment examples as much as possible. When considering an MCS problem with non-monotonic criteria, these methods allow any shape for the marginal value functions. As a consequence, the set of compatible marginal value functions is much larger than that for only monotonic preferences, and it consists of highly flexible and complex marginal value functions. Moreover, highly complex marginal value functions cause an over-fitting problem, in which case the monotonicity arbitrarily changes many times (Ghaderi et al., 2015). This decreases the interpretability of the obtained preference model. Meanwhile, the recommendations may turn out to be trivial for decision aiding; for example, the recommended possible assignments for most non-reference alternatives are often the whole range of classes. This requires more cognitive effort, so that the decision-aiding approach no longer supports the DM.
In this study, we propose a progressive decision-aiding approach for MCS problems in the presence of non-monotonic preferences. The approach requires the DM to provide a set of assignment examples as preference information and adopts a disaggregation-aggregation paradigm to develop a sorting model. To cope with the aforementioned difficulty, the proposed approach starts with marginal value functions of the simplest shape, i.e., the marginal value functions for non-monotonic criteria linearly increase up to a specific criterion value and then decrease (e.g., the marginal value function drawn with a solid line in Fig. 2(a)), and those for monotonic criteria linearly increase or decrease (e.g., the marginal value function drawn with a solid line in Fig. 2(b)). On this basis, we develop an interactive algorithm to resolve inconsistency in the assignment examples. By solving several mathematical programming problems, the algorithm assists the DM in deciding either to revise some of the assignment examples or to allow more slope variations, gradually increasing the complexity of the marginal value functions. In this way, at each step we only take into account marginal value functions of a specific shape. Then, we develop another algorithm to iteratively assign non-reference alternatives. This algorithm permits the DM to provide incremental preference information about the given possible assignments at each iteration, and it utilizes this information to update the possible assignments of other non-reference alternatives. Finally, the sorting model is obtained by minimizing the sum of variations in the slope. In contrast to previous studies, this paper makes the following contributions.
First, to the best of our knowledge, although several methods have been proposed for multiple criteria ranking problems with non-monotonic criteria (Despotis & Zopounidis, 1995; Doumpos, 2012; Ghaderi et al., 2015; 2017; Kliegr, 2009), our work is the first decision-aiding approach to address MCS problems of this kind. The second contribution of this paper consists in interacting with the DM to reach a compromise between the complexity of the marginal value functions and the capacity to represent the DM's
preference. In this way, the analyst cooperates with the DM in working out a satisfactory sorting model and obtaining a better understanding of his or her preference system. Third, to cope with the difficulty of MCS problems with non-monotonic criteria, this paper proposes a progressive approach that enriches the work of Branke et al. (2016), i.e., starting with simpler marginal value functions and switching them to more complex ones as soon as the preference information can no longer be represented. Finally, the paper experimentally analyzes the relationship between the complexity of the marginal value functions and the quality of the obtained assignments. Although some previous studies have investigated the stability of the recommended results (Doumpos & Zopounidis, 2014; Kadziński, Ghaderi, Wasikowski, & Agell, 2017), our research contributes an investigation of the impact of the parameters describing an MCS problem in the presence of non-monotonic preferences. The paper is organized as follows. In the next section, we briefly introduce the notation and basic models used in the study. Section 3 introduces the proposed approach in detail, followed by an illustrative example based on a real dataset in Section 4. Further experimental analysis is presented in Section 5. Finally, Section 6 concludes the paper.
2. Notation and definition

• A = {a_1, a_2, ..., a_i, ..., a_n} is a finite set of alternatives, and A^R ⊆ A is a set of reference alternatives.
• G = {g_1, g_2, ..., g_j, ..., g_m} is a finite set of m evaluation criteria, g_j : A → R for all j ∈ J = {1, 2, ..., m}. G^M is the set of monotonic criteria and G^NM is the set of non-monotonic criteria.
• C_1, C_2, ..., C_p are p pre-defined classes such that C_1 ≺ C_2 ≺ ... ≺ C_p, where C_{h+1} is preferred to C_h, h = 1, 2, ..., p−1; moreover, H = {1, ..., h, ..., p}. All classes are delimited by a vector of thresholds b = (b_0, b_1, ..., b_p) such that b_0 < b_1 < ... < b_{p−1} < b_p; b_{h−1} and b_h are the lower and upper bounds of class C_h, respectively.
• X_j = {g_j(a_i) | a_i ∈ A} is the set of evaluation values of all alternatives on criterion g_j, j ∈ J. Let x_j^1, x_j^2, ..., x_j^k, ..., x_j^{n_j(A)} be the ordered values of X_j, with x_j^k < x_j^{k+1}, k = 1, ..., n_j(A)−1, where n_j(A) = |X_j| and n_j(A) ≤ n. In this study, we set all criteria evaluations in X_j as the characteristic points. Without loss of generality, for g_j ∈ G^M, the greater g_j(a_i), the better alternative a_i on criterion g_j. For g_j ∈ G^NM, let x_j^* ∈ X_j be the most preferred level and x_{j,*} ∈ {x_j^1, x_j^{n_j(A)}} be the least preferred level. The closer g_j(a_i) is to x_j^*, the better alternative a_i on criterion g_j, and the closer g_j(a_i) is to x_{j,*}, the worse alternative a_i on criterion g_j.

Preference information. To represent the DM's preference information, we assume that the DM provides a set of assignment examples consisting of holistic judgements for reference alternatives a^* ∈ A^R ⊆ A and their desired assignments:

a^* → [C_{L^DM(a^*)}, C_{R^DM(a^*)}],   (2.1)

where L^DM(a^*), R^DM(a^*) ∈ H and [C_{L^DM(a^*)}, C_{R^DM(a^*)}] is an interval of contiguous classes. An assignment is precise if L^DM(a^*) = R^DM(a^*) and imprecise otherwise. Let each assignment example be denoted by A^R(a^*), a^* ∈ A^R.

An additive criteria aggregation model. In this paper, we use a model in the form of an additive value function (Belahcene, Mousseau, Pirlot, & Sobrie, 2017):

U(a) = Σ_{j=1}^m u_j(g_j(a)),   (2.2)

where U(a) ∈ [0, 1], a ∈ A, is the global value, and the marginal values are defined at the characteristic points by u_j(x_j^k), k = 1, 2, ..., n_j(A). The key condition for the additive form is mutual preference independence (Keeney & Raiffa, 1976; Wakker, 1989); thus, a low value on one criterion can be compensated by large values on other criteria, and the greatest marginal value of each criterion can be regarded as its weight, which can be interpreted in terms of trade-offs between criteria. Given the assignment examples, we transform them into the following constraints (Kadziński et al., 2014):

E_DM^{A^R}:  U(a^*) ≥ b_{L^DM(a^*)−1},  U(a^*) + ε ≤ b_{R^DM(a^*)},  for each A^R(a^*), a^* ∈ A^R;
E_Base-Sort:  b_0 = 0,  b_p = 1 + ε,  b_h − b_{h−1} ≥ ε, h ∈ H,   (2.3)

where ε is an arbitrarily small positive value; the union of these two constraint sets is denoted E_Base-Sort^{A^R}. To normalize the value functions so that U(a) ∈ [0, 1], ∀a ∈ A, let:

u_j(x_{j,*}) = u_j(x_j^1),  u_j(x_j^*) = u_j(x_j^{n_j(A)}),  ∀g_j ∈ G^M,   (2.4)

Σ_{j=1}^m u_j(x_j^*) = 1,  u_j(x_{j,*}) = 0,  ∀j ∈ J.   (2.5)

To ensure the monotonicity for g_j ∈ G^M, let:

u_j(x_j^k) + κ ≤ u_j(x_j^{k+1}),  k = 1, 2, ..., n_j(A)−1,  ∀g_j ∈ G^M,   (2.6)

where κ is a predefined positive constant, that is, the smallest difference in marginal values between two consecutive characteristic points. We assume the DM can provide information about x_j^*, ∀g_j ∈ G^NM. To guarantee that u_j(x_j^k) < u_j(x_j^{k+1}) for x_j^k < x_j^{k+1} ≤ x_j^*, and u_j(x_j^k) > u_j(x_j^{k+1}) for x_j^* ≤ x_j^k < x_j^{k+1}, the following constraints are constructed:

u_j(x_j^k) + κ ≤ u_j(x_j^{k+1}),  k ∈ {1, 2, ..., n_j(A)−1 | x_j^{k+1} ≤ x_j^*},  g_j ∈ G^NM,   (2.7)

u_j(x_j^k) − κ ≥ u_j(x_j^{k+1}),  k ∈ {1, 2, ..., n_j(A)−1 | x_j^k ≥ x_j^*},  g_j ∈ G^NM.   (2.8)

As for the characteristic point with the least marginal value, we cannot fix x_{j,*}, g_j ∈ G^NM, at either x_j^1 or x_j^{n_j(A)} in advance. To normalize u_j(x_{j,*}) = 0, we provide the following constraints:

u_j(x_{j,*}) ≤ u_j(x_j^1),  u_j(x_{j,*}) ≤ u_j(x_j^{n_j(A)}),  ∀g_j ∈ G^NM,   (2.9)

u_j(x_{j,*}) ≥ u_j(x_j^1) − M y_j^1,  u_j(x_{j,*}) ≥ u_j(x_j^{n_j(A)}) − M y_j^{n_j(A)},  ∀g_j ∈ G^NM,   (2.10)

y_j^1, y_j^{n_j(A)} ∈ {0, 1},  y_j^1 + y_j^{n_j(A)} ≤ 1,  ∀g_j ∈ G^NM,   (2.11)

where y_j^1 and y_j^{n_j(A)} are binary variables for the two extremes x_j^1 and x_j^{n_j(A)}, g_j ∈ G^NM, respectively, and M is a large positive constant. Constraints (2.9) and (2.10) imply u_j(x_{j,*}) = u_j(x_j^1) if y_j^1 = 0 and u_j(x_{j,*}) = u_j(x_j^{n_j(A)}) if y_j^{n_j(A)} = 0. Constraint (2.11) guarantees that at least one marginal value of the extreme characteristic points equals 0.

The simplicity of piecewise linear marginal value functions. In this study, the simplicity of the marginal value functions is described by the sum of variations in the slope. The slope of the kth subinterval is defined as (Słowiński, Greco, & Mousseau, 2013):

θ_j^k = (u_j(x_j^{k+1}) − u_j(x_j^k)) / (x_j^{k+1} − x_j^k),  k ∈ {1, ..., n_j(A)−1},  j ∈ J.   (2.12)
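To make the aggregation concrete, the additive model of Eq. (2.2) with piecewise-linear marginal value functions anchored at the characteristic points can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the sample data and function names are assumptions.

```python
# Minimal sketch: evaluating U(a) = sum_j u_j(g_j(a)) with piecewise-linear
# marginal value functions defined at characteristic points (Eq. (2.2)).

def marginal_value(x, points, values):
    """Piecewise-linear interpolation of u_j at evaluation x.
    points: ordered characteristic points x_j^1 < ... < x_j^{n_j(A)};
    values: marginal values u_j(x_j^k) at those points."""
    if x <= points[0]:
        return values[0]
    for (x0, u0), (x1, u1) in zip(zip(points, values),
                                  zip(points[1:], values[1:])):
        if x <= x1:
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
    return values[-1]

def global_value(alternative, marginals):
    """U(a): sum of marginal values over all criteria (Eq. (2.2))."""
    return sum(marginal_value(alternative[j], pts, vals)
               for j, (pts, vals) in enumerate(marginals))

# Hypothetical data: a non-monotonic marginal peaking at x* = 3 (g1)
# and a monotonically increasing one (g2).
marginals = [([1, 2, 3, 5], [0.0, 0.3, 0.6, 0.1]),   # g1 (non-monotonic)
             ([1, 2, 4, 5], [0.0, 0.1, 0.3, 0.4])]   # g2 (monotonic)
print(global_value([3, 1], marginals))  # u1(3) + u2(1) = 0.6 + 0.0 = 0.6
```

In the actual model the values u_j(x_j^k) are decision variables constrained by (2.3)–(2.11); the sketch only shows how a fitted model scores an alternative.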
Fig. 1. Marginal value functions obtained in Example 1.
Define I_j = {2, ..., k*−1, k*+1, ..., n_j(A)−1}, j ∈ J, as the set of indices of all interior characteristic points except for x_j^{k*} = x_j^* on criterion g_j. Let the non-negative variable γ_j^k, k ∈ I_j, j ∈ J, constrain the absolute difference in slope over two consecutive subintervals (Ghaderi et al., 2017):

|θ_j^k − θ_j^{k−1}| ≤ γ_j^k,  k ∈ I_j,  j ∈ J.   (2.13)

Note that, differently from the definition of the sum of variations in the slope in Słowiński et al. (2013), we only take into account the characteristic points whose indices are in the set I_j, j ∈ J, which excludes the criterion evaluation x_j^*. The variation in the slope at x_j^* is necessarily greater than that at any other characteristic point because the monotonicity direction changes at x_j^*; including it could therefore yield inappropriate marginal value functions for some criteria. We give a simple example to explain this.

Example 1. Suppose the DM classifies five alternatives into two classes. The alternatives are evaluated on two criteria: g1 is non-monotonic and g2 is monotonic. Table 1 presents the criteria evaluations and the assignment examples. Let x_1^* = x_1^3 and x_2^* = x_2^5.

Table 1. Data for Example 1.

Alternative   g1   g2   Category
a1            3    1    C2
a2            1    4    C1
a3            2    5    C1
a4            3    2    C2
a5            5    1    C1

Fig. 1(a) shows the marginal value functions obtained by minimizing Σ_{j∈{1,2}, k∈{2,3,4}} γ_j^k, and Fig. 1(b) those obtained by minimizing Σ_{j∈{1,2}, k∈{2,4}} γ_j^k. Both preference models correctly assign the alternatives to their desired classes. Nevertheless, the first marginal value function for g1 is supposed to be non-monotonic, yet the difference between u_1(x_1^3) and u_1(x_1^4) is minor. The marginal value function for g1 is thus approximately monotonic, which wrongly estimates the DM's real preference. Conversely, the marginal value function for g1 in Fig. 1(b) is visibly non-monotonic and hence more interpretable than that in Fig. 1(a).

A marginal value function is said to be simpler if its sum of variations in the slope is smaller. More specifically, for non-monotonic criteria, when the sum is zero, the marginal value function linearly increases up to x_j^* and then linearly decreases; in other words, it takes the simplest form. For example, Fig. 2 presents some marginal value functions. Although all of them have the same number of characteristic points, marginal value function 1 is said to be in the simplest form due to an invariant slope in most subintervals. Similarly, marginal value function 3 is more complex than marginal value function 2 because the former has more slope variations.

Let Eqs. (2.4)–(2.11) be the constraint set E_Base-Nor and Eqs. (2.12)–(2.13) be the constraint set E_Slope. The set E_Sort is constructed as follows:

E_Sort = { E_Base-Sort^{A^R}, E_Slope, E_Base-Nor }.   (2.14)
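The slope of Eq. (2.12) and the sum of slope variations of Eq. (2.13), with the index of the most preferred level excluded as the set I_j prescribes, can be sketched as follows (illustrative Python with hypothetical data; in the actual model the γ_j^k are LP variables, not post-hoc computations):

```python
# Illustrative sketch of theta_j^k (Eq. (2.12)) and the sum of slope
# variations over I_j (Eq. (2.13)), skipping the most preferred level.

def slopes(points, values):
    """theta_j^k for each subinterval (Eq. (2.12))."""
    return [(values[k + 1] - values[k]) / (points[k + 1] - points[k])
            for k in range(len(points) - 1)]

def slope_variation_sum(points, values, k_star=None):
    """Sum over I_j of |theta_j^k - theta_j^{k-1}|, excluding the
    characteristic point k_star (0-based) where monotonicity flips."""
    th = slopes(points, values)
    total = 0.0
    for k in range(1, len(th)):            # interior characteristic points
        if k_star is not None and k == k_star:
            continue                       # exclude x_j^* from I_j
        total += abs(th[k] - th[k - 1])
    return total

# Marginal value function for g1 in Example 1: peak at x_1^* = x_1^3 (index 2).
pts, vals = [1, 2, 3, 5], [0.0, 0.3, 0.6, 0.1]
print(slope_variation_sum(pts, vals, k_star=2))  # → 0.0 (linear on each side)
```

With k_star included instead, the slope break at the peak would dominate the sum, which is exactly why I_j excludes it.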
Consistency of the assignment examples. Given a pair (U, b), where U is an additive value function and b is a vector of thresholds, a set of assignment examples is said to be consistent with (U, b) if E_Sort is feasible and ε* = Max ε, s.t.: E_Sort, is greater than 0.

Possible assignments and necessary assignments (Corrente et al., 2013; Greco et al., 2008; 2010a; Greco, Słowiński, Figueira, & Mousseau, 2010b). Given assignment examples and a corresponding set of compatible pairs (U, b), for each a ∈ A we denote by C_P(a) the possible assignment, i.e., the set of indices of classes C_h for which there is at least one compatible pair (U, b) assigning a to C_h, denoted a →^P C_h. The necessary assignment is the set of indices of classes C_h to which all compatible pairs (U, b) assign a, denoted a →^N C_h.

3. The proposed approach

In this section, we analyze the difficulties encountered in MCS problems with non-monotonic criteria, outline the proposed approach and discuss the details, which include resolving the inconsistency, assigning non-reference alternatives and selecting a representative value function.

We propose a progressive decision-aiding approach that rests on the principle that the simplest explanation is most likely to be the correct one: when more than one value function is compatible with the provided preference information, the analyst opts for the most linear marginal value functions, which makes the additive value function much closer to a weighted-sum form (Słowiński et al., 2013). In this regard, we assume that the marginal value functions take specific forms at the beginning of the approach, i.e., for monotonic criteria, the marginal value functions are linearly increasing or decreasing (e.g., marginal value function 1 in Fig. 2(b)), and for non-monotonic criteria, the marginal value functions are linearly increasing before the most preferred level and then linearly decreasing (e.g., marginal value function 1 in Fig. 2(a)).

Fig. 2. Examples for simple and complex marginal value functions.

However, the initially considered set of marginal value functions may prove unsuitable for many reasons, e.g., inconsistency in the assignment examples. Thus, we construct an interactive algorithm to resolve the inconsistency accounting for the reactions of the DM. In this algorithm, we iteratively ask the DM to change the assignments for some reference alternatives without modifying the marginal value functions, and once the DM denies the changes, we switch the marginal value functions to more complex ones to represent the DM's preference information, which increases their degree of complexity (e.g., from marginal value function 1 to 2, or 2 to 3, in Fig. 2). Subsequently, we propose another algorithm to iteratively infer the assignments for non-reference alternatives from the adjusted assignment examples. The DM is permitted to add incremental preference information based on the possible assignments obtained in previous iterations, i.e., placing a non-reference alternative into a specific class within its possible assignment interval. This yields narrower possible assignment intervals for other non-reference alternatives, so that the recommendations become more stable and prudent. The ultimate assignments for non-reference alternatives and the marginal value functions are obtained by minimizing the sum of variations in the slope.

The framework of the proposed approach is summarized in Fig. 3, and the details are illustrated as follows. The method is divided into four parts. First, we collect the initial information. The DM is asked to provide assignment examples for some of his or her most familiar alternatives. For g_j ∈ G^NM, the DM is encouraged to provide the most preferred level x_j^* ∈ X_j, following the definitions in Despotis and Zopounidis (1995). Moreover, the DM can provide some special information, such as his or her attitude toward risk (see Section 3.1).
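The distinction between possible and necessary assignments can be illustrated with a brute-force stand-in: rather than solving one mathematical program per class as ROR-based methods do, the sketch below simply enumerates a small hypothetical sample of compatible pairs (U, b). All names and values are assumptions for illustration only.

```python
# Didactic sketch: a class is "possible" for alternative a if some
# compatible model assigns a to it, and "necessary" if all of them do.

def assign(u, thresholds):
    """Class index h with b_{h-1} <= u < b_h (1-based)."""
    for h, b in enumerate(thresholds[1:], start=1):
        if u < b:
            return h
    return len(thresholds) - 1

# Hypothetical compatible pairs (U, b): U maps alternative -> global value.
models = [({"a1": 0.55, "a2": 0.20}, [0.0, 0.5, 1.0]),
          ({"a1": 0.80, "a2": 0.45}, [0.0, 0.5, 1.0]),
          ({"a1": 0.60, "a2": 0.52}, [0.0, 0.5, 1.0])]

def possible_and_necessary(a, models):
    classes = {assign(U[a], b) for U, b in models}
    possible = sorted(classes)
    necessary = possible if len(classes) == 1 else []
    return possible, necessary

print(possible_and_necessary("a1", models))  # → ([2], [2]): C2 in all models
print(possible_and_necessary("a2", models))  # → ([1, 2], []): varies
```

The real approach characterizes the whole (typically infinite) set of compatible pairs via optimization rather than sampling; the sketch only conveys the definitions.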
Second, after transforming the DM’s assignment examples into a set of constraints, we begin to identify and resolve the inconsistency in the assignment examples by Algorithm 1 (see Section 3.2). Then, Algorithm 2 is developed to assign non-reference alternatives (see Section 3.3). In the last part, we propose a procedure for selecting a representative value function called UTADIS-Par (see Section 3.4).
Fig. 3. Framework for the proposed progressive decision-aiding approach.
3.1. Initial preference information

The DM is encouraged to provide some assignment examples and to give the most preferred level x_j^* for g_j ∈ G^NM. Moreover, the DM can provide statements such as "I am risk appetitive (averse) before (after) the most preferred level" to constrain the shape of the marginal value functions:

• If the marginal value function is desired to be concave (risk aversion), we add the following constraints (Kadziński et al., 2015):

u_j(x_j^k) ≤ u_j(x_j^{k−2}) + [u_j(x_j^{k−1}) − u_j(x_j^{k−2})] × (x_j^k − x_j^{k−2}) / (x_j^{k−1} − x_j^{k−2}),  ∃g_j ∈ G,  ∀k ∈ {3, ..., n_j(A)}.   (3.1)

• If the marginal value function is desired to be convex (risk appetite), we add the following constraints (Kadziński et al., 2015):

u_j(x_j^k) ≥ u_j(x_j^{k−2}) + [u_j(x_j^{k−1}) − u_j(x_j^{k−2})] × (x_j^k − x_j^{k−2}) / (x_j^{k−1} − x_j^{k−2}),  ∃g_j ∈ G,  ∀k ∈ {3, ..., n_j(A)}.   (3.2)
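The concavity condition of Eq. (3.1) states that each marginal value must lie on or below the line extrapolated from the two preceding characteristic points. A hedged Python check of this condition on a candidate marginal value function is sketched below (illustrative only; in the model, Eq. (3.1) enters as linear constraints on the u_j variables, and the data are hypothetical):

```python
# Illustrative feasibility check of the concavity condition in Eq. (3.1):
# u_j(x_j^k) must not exceed the chord extrapolated from the two
# preceding characteristic points.

def satisfies_concavity(points, values, tol=1e-9):
    """True iff Eq. (3.1) holds at every k in {3, ..., n_j(A)} (1-based)."""
    for k in range(2, len(points)):  # 0-based k corresponds to k >= 3
        chord = values[k - 2] + (values[k - 1] - values[k - 2]) * \
                (points[k] - points[k - 2]) / (points[k - 1] - points[k - 2])
        if values[k] > chord + tol:
            return False
    return True

print(satisfies_concavity([1, 2, 3, 4], [0.0, 0.4, 0.7, 0.9]))  # → True
print(satisfies_concavity([1, 2, 3, 4], [0.0, 0.1, 0.3, 0.9]))  # → False
```

Flipping the inequality gives the convexity check of Eq. (3.2).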
assignment [CLDM (a∗ ) , CRDM (a∗ ) ] at tth iteration. Note that at the be-
Algorithm 1 Resolve inconsistency.
t
Input: Assignment examples AR (a∗ ), ∀a∗ ∈ AR . ( a∗ ) ← AR ( a∗ ); 1: t ← 1, γ1 ← 0, S1 ← ∅, SA1 ← ∅, SD1 ← ∅, AR 1 2: Solve LP(1), get Ft∗ and γt∗ ; 3: if Ft∗ = 0 then ψt ← a∗ ∈ AR |σt+ (a∗ ) = 0 or σt− (a∗ ) = 0 ; 4: Solve MIP(1), get St and δ ∗ ; 5: for each a∗ ∈ St do 6: Give adjusted assignment [CL (a∗ ) , CR (a∗ ) ] for a∗ referred to 7: t
Algorithm 2 Assigning non-reference alternatives. Input: Revised assignment examples AR (a∗ ), a∗ ∈ AR . R 1: s ← 1, A0 ← A\A ; 2: while A0 = ∅ or the DM could provide preference information do for each ai ∈ A0 do 3: Solve LP(2) and get possible assignment [CLs (ai ) , CRs (ai ) ]; 4: end for 5: Give As ⊆ A0 ; 6: 7: for each ai ∈ As do Set ai → [CLDM (a ) , CRDM (a ) ]; 8: i
s
LP(1):
Ft = Min
a∗ ∈AR
(σt+ (a∗ ) + σt− (a∗ )),
R
A s.t. : EBase −Sort , ESlope , EBase−Nor
U (a∗ ) + σt+ (a∗ ) ≥ bLtDM (a∗ )−1
U (a∗ ) − σt− (a∗ ) + ε ≤ bRtDM (a∗ )
AtR (a∗ ), ∀a∗ ∈ AR (3.3)
t
results of MIP(1); if the DM agrees to change desired assignment for a∗ 8: then SAt+1 ← SAt ∪ {a∗ } and LtDM (a∗ ) ← Lt (a∗ ), RtDM (a∗ ) ← 9: Rt (a∗ ); else 10: SDt ← SDt ∪ {a∗ }, SAt+1 ← SAt ; 11: end if 12: end for 13: if SAt+1 = SAt then 14: Go to step 5; 15: else 16: DM (a∗ ) ← LDM (a∗ ), RDM (a∗ ) ← RDM (a∗ ), ∀a∗ ∈ AR ; Lt+1 17: t t t+1 SDt+1 ← SDt , γt+1 ← γt∗ + δ ∗ , t ← t + 1, and go to step 2; 18: end if 19: 20: end if Output: New assignment examples AR (a∗ ), ∀a∗ ∈ AR .
s
t
ginning of Algorithm 1, we set LDM (a∗ ) = LDM (a∗ ) and RDM ( a∗ ) = 1 1 DM ∗ R ∗ R ∗ R (a ) so that A1 (a ) = A (a ).
σt+ (a∗ ), σt− (a∗ ) ≥ 0, ∀a∗ ∈ AR
k∈I j
j∈J
(3.4)
γ jk ≤ γt
(3.5)
3.2. Resolving inconsistency

Although a simpler model is easier to explain, it may not be able to perfectly reproduce the DM's preference system given the assignment examples. It is therefore necessary to design a rule that switches simpler marginal value functions to more complex ones. In this regard, we propose Algorithm 1 to progressively increase the degree of complexity of the marginal value functions in order to address the inconsistency. In the algorithm, we solve LP(1) to check the inconsistency and solve MIP(1) to obtain the optimal solution. Let A_R^t(a∗), ∀a∗ ∈ A_R, consist of a reference alternative and its desired assignment at the t-th iteration.

Suppose F_t∗ is the optimal value of LP(1) at the t-th iteration. The assignment examples are consistent if F_t∗ = 0. Denote by ψ_t = {a∗ ∈ A_R | σ_t^+(a∗) ≠ 0 or σ_t^−(a∗) ≠ 0} the set of inconsistent reference alternatives and by |ψ_t| the number of elements in ψ_t at the t-th iteration. In Eq. (3.5), γ_t is a given limit on the sum of variations in the slope at the t-th iteration, carried over from the previous iteration. Notably, γ_1 = 0 indicates that the initial marginal value functions are linear. Let γ_t∗ be the value of the sum of variations in the slope when LP(1) is at its optimum. When F_t∗ ≠ 0, the following mixed-integer linear program is constructed to determine a subset of reference alternatives that should be revised such that the corresponding violation of the limit on the variations in the slope is minimal. The DM either agrees to change the assignments of some reference alternatives or refuses to change any of them, in which case we increase the degree of complexity of the marginal value functions. Let SA_t be the set of reference alternatives whose desired classes the DM agrees to change from [C_{L_t^DM(a∗)}, C_{R_t^DM(a∗)}] to new intervals, and let SD_t be the set of reference alternatives that the DM refuses to change at the t-th iteration.

MIP(1): Min δ
s.t.: E_{Base-Sort}^{A_R}, E_{Slope}, E_{Base-Nor},
      U(a∗) + M·v(a∗) ≥ b_{L_t^DM(a∗)−1},
      U(a∗) − M·v(a∗) + ε ≤ b_{R_t^DM(a∗)},  a∗ → A_R^t(a∗), ∀a∗ ∈ A_R,   (3.6)
      v(a∗) ∈ {0, 1}, ∀a∗ ∈ A_R,   (3.7)
      v(a∗) = 0, ∀a∗ ∈ SA_t ∪ SD_t,   (3.8)
      Σ_{a∗∈A_R} v(a∗) ≤ |ψ_t|,   (3.9)
      Σ_{j∈J} Σ_{k∈I_j} γ_j^k ≤ γ_t∗ + δ,   (3.10)

where v(a∗) is a binary variable equal to one if the assignment of the reference alternative a∗ should be changed, and M is a sufficiently large positive constant. Constraint (3.8) ensures that the assignments of the reference alternatives in SA_t and SD_t are not re-evaluated. Constraint (3.9) ensures that no more than |ψ_t| reference alternatives are revised. Constraint (3.10) relaxes the limit on the sum of variations in the slope by δ, which is minimized. Denote the optimal value of MIP(1) by δ∗, and let S_t = {a∗ ∈ A_R | v(a∗) = 1} be the set of alternatives to be revised when MIP(1) is at the optimum at the t-th iteration. Given the above mathematical programming models, we propose Algorithm 1 to progressively revise the inconsistency.

[Algorithm 2 (floating box, final steps): 9: solve LP(3) to obtain δ_s∗; 10–13: if δ_s∗ ≠ 0, revise the adjustments or remove some of them from A_s and go back to step 7; 14: set A_0 ← A_0\A_s, s ← s + 1; 15: end while. Output: new possible assignments.]
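The control flow around LP(1) and MIP(1) can be sketched as follows; `solve_lp1`, `solve_mip1` and `dm_accepts` are hypothetical callables standing in for LP(1), MIP(1) and the interaction with the DM, so this is a bookkeeping sketch, not the published implementation:

```python
def resolve_inconsistency(solve_lp1, solve_mip1, dm_accepts,
                          gamma=0.0, max_iter=50):
    """Progressively relax the slope-variation budget until LP(1) is consistent.

    solve_lp1(gamma)        -> F* (sum of classification errors)
    solve_mip1(SA, SD)      -> (delta*, revised_set) or None if infeasible
    dm_accepts(alternative) -> True if the DM agrees to the suggested revision
    """
    SA, SD = set(), set()              # accepted / refused revisions so far
    for _ in range(max_iter):
        if solve_lp1(gamma) == 0:      # assignment examples are consistent
            return gamma, SA, SD
        accepted = False
        while not accepted:            # re-solve MIP(1) until some revision is accepted
            result = solve_mip1(SA, SD)
            if result is None:         # even the most complex model fails
                raise RuntimeError("no compatible model; reconsider the examples")
            delta_star, revised = result
            for a in revised:
                if dm_accepts(a):
                    SA.add(a)
                    accepted = True
                else:
                    SD.add(a)          # refused: exclude a from later MIP(1) runs
        gamma += delta_star            # relax the slope budget before the next LP(1)
    return gamma, SA, SD
```

The canned solver responses in the test below loosely echo the numbers of the illustrative example in Section 4.2; they are not computed by real optimization.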
Algorithm 1 starts by checking the inconsistency by solving LP(1); once there is inconsistency, we enter a loop to revise it. Before solving MIP(1), we set v(a∗) = 0, ∀a∗ ∈ SA_t ∪ SD_t, so that the alternatives whose revisions were either accepted or refused in the preceding iterations are not re-evaluated. Notably, the sets SA_1 and SD_1 are empty at the beginning of the algorithm. Subsequently, given a new assignment [C_{L(a∗)}, C_{R(a∗)}] for each a∗ ∈ S_t, the DM decides whether or not to change it. If the DM refuses to change the current assignment, we set SD_{t+1} = SD_t ∪ {a∗}; otherwise, we set SA_{t+1} = SA_t ∪ {a∗} and update the assignment for a∗ in A_R^t(a∗). If no revisions are accepted by the DM, MIP(1) selects a new set S_t with v(a∗) = 0, ∀a∗ ∈ SA_t ∪ SD_t. The algorithm leaves the loop and proceeds to the next iteration when at least one assignment example is accepted for change, i.e., SA_{t+1} ≠ SA_t. Note that we must relax the limit on the sum of the variations in the slope by δ∗ before solving LP(1) at the next iteration. The algorithm ends when the adjusted assignment examples no longer cause any inconsistency. Note that, in the situation where even the most complicated model cannot represent the preference information provided by the DM, i.e., MIP(1) has no solution, we go back to the previous step to reconsider the changes to the assignment examples. Although the non-monotonic value functions are often flexible enough to represent the preference information, so that this situation rarely occurs, we still need to take it into account.

Algorithm 1 takes into account the trade-off between the complexity of the marginal value functions and the capacity for restoring the preference information provided by the DM. Although the proposed approach starts with the simplest value function, so that some of the preference information may not be satisfied (which is where the inconsistency arises), we do not force the DM to change the assignment examples. Instead, we report the possibly wrongly assigned examples and give the DM two options: either keep the original assignments or reconsider them. The inconsistency can result from two causes. On the one hand, the expressiveness of the model decreases because simpler value functions are considered. In this case, the DM provides correct assignment examples and does not agree to change the assignments, so the proposed approach iteratively obtains a more complicated model that can represent more preference information. On the other hand, the DM makes some mistakes or wants to update his/her information, a situation similar to the one introduced in Mousseau, Figueira, Dias, da Silva, and Clímaco (2003). In this case, we give the DM a chance to reconsider his/her preference information; if the DM agrees to change some original assignments, the model can keep a simpler shape and the decision process continues. If not, we keep the original assignment examples and iteratively find other solutions. The algorithm can be perceived as a support for the DM to better understand his or her preferences in the course of resolving inconsistency. The details of Algorithm 1 are presented in Section 4.2.

3.3. Assigning non-reference alternatives

When there is no inconsistency in the assignment examples, let γ∗ be the sum of the variations in the slope and E_DM^{A_R} be the constraint set with A_R(a∗), which consists of the reference alternative a∗ and its adjusted assignment at the last iteration of Algorithm 1. Then, we provide Algorithm 2 to give the possible assignments for non-reference alternatives. We solve the following linear programming model LP(2) to obtain the possible assignment for each non-reference alternative a_i ∈ A_0, where A_0 = A\A_R. At the s-th iteration, A_s = A_0 \ ∪_{z=1}^{s−1} A_z:

LP(2): E(a_i → PC_h):
Min σ_s^+(a_i) + σ_s^−(a_i)
s.t.: E_DM^{A_R}, E_{Base-Nor}, E_{Slope},
      U(a_i) + σ_s^+(a_i) ≥ b_{h−1},
      U(a_i) − σ_s^−(a_i) + ε ≤ b_h,
      σ_s^+(a_i), σ_s^−(a_i) ≥ 0,
      Σ_{j∈J} Σ_{k∈I_j} γ_j^k ≤ γ∗,
      U(a_i) ≥ b_{L_{s−1}^DM(a_i)−1}, U(a_i) + ε ≤ b_{R_{s−1}^DM(a_i)}, ∀a_i → [C_{L_{s−1}^DM(a_i)}, C_{R_{s−1}^DM(a_i)}], ∀a_i ∈ A_{s−1},
      ⋮   (if s ≥ 2)
      U(a_i) ≥ b_{L_1^DM(a_i)−1}, U(a_i) + ε ≤ b_{R_1^DM(a_i)}, ∀a_i → [C_{L_1^DM(a_i)}, C_{R_1^DM(a_i)}], ∀a_i ∈ A_1.
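The class-by-class feasibility test encoded by LP(2) reduces to simple bookkeeping once a feasibility oracle is available; below, `is_feasible` is a hypothetical stand-in for solving E(a_i → PC_h) and checking that the optimum σ_s∗(a_i) is zero, and the sample feasibility table is invented for illustration:

```python
def possible_assignment(alternative, classes, is_feasible):
    """Return (L, R): the lowest and highest class indices h for which
    E(alternative -> PC_h) is feasible with zero classification error."""
    feasible = [h for h in classes if is_feasible(alternative, h)]
    if not feasible:
        # no compatible class at all: the examples are inconsistent
        raise ValueError("no compatible class for " + str(alternative))
    return min(feasible), max(feasible)
```

When L equals R the alternative has a necessary (precise) assignment; otherwise the DM may narrow the interval as incremental preference information.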
The possible assignment for a_i ∈ A_0 is composed of those h ∈ H such that E(a_i → PC_h) is feasible and σ_s∗(a_i) = Min(σ_s^+(a_i) + σ_s^−(a_i)) s.t. E(a_i → PC_h) equals zero. Let CP_s(a_i) = {L_s(a_i), L_s(a_i) + 1, ..., R_s(a_i)} be the set of indices of classes obtained by the preference model at the s-th iteration. The alternative a_i can be assigned to any class whose index is in CP_s(a_i). Given a_i ∈ A_s ⊆ A_0, the DM can place it into a class in the range [C_{L_s^DM(a_i)}, C_{R_s^DM(a_i)}], where L_s(a_i) ≤ L_s^DM(a_i) ≤ R_s^DM(a_i) ≤ R_s(a_i); in particular, when L_s^DM(a_i) = R_s^DM(a_i), the alternative a_i is placed into a precise class. The DM is allowed to add a_i → [C_{L_s^DM(a_i)}, C_{R_s^DM(a_i)}] as incremental preference information at each iteration, and such information is added to the constraints for calculating the possible assignments at the next iteration. The following linear programming model determines whether the current model can satisfy the additional information at the s-th iteration:
LP(3): Min δ_s
s.t.: E_DM^{A_R}, E_{Base-Nor}, E_{Slope},
      U(a_i) ≥ b_{L_s^DM(a_i)−1}, U(a_i) + ε ≤ b_{R_s^DM(a_i)}, a_i → [C_{L_s^DM(a_i)}, C_{R_s^DM(a_i)}], ∀a_i ∈ A_s,   (3.11)
      Σ_{j∈J} Σ_{k∈I_j} γ_j^k ≤ γ∗ + δ_s,   (3.12)
      U(a_i) ≥ b_{L_{s−1}^DM(a_i)−1}, U(a_i) + ε ≤ b_{R_{s−1}^DM(a_i)}, a_i → [C_{L_{s−1}^DM(a_i)}, C_{R_{s−1}^DM(a_i)}], ∀a_i ∈ A_{s−1},
      ⋮   (if s ≥ 2)
      U(a_i) ≥ b_{L_1^DM(a_i)−1}, U(a_i) + ε ≤ b_{R_1^DM(a_i)}, a_i → [C_{L_1^DM(a_i)}, C_{R_1^DM(a_i)}], ∀a_i ∈ A_1.   (3.13)
In the above model, for each a_i ∈ A_s at the s-th iteration, constraints (3.11) place it in the specific assignment interval provided by the DM. Constraints (3.13) take into account the preference information introduced in the previous iterations. Let δ_s∗ be the optimal value of LP(3) at the s-th iteration. Since at this step the preference model is already sufficient to represent the adjusted assignment examples, we do not further relax the restriction on the sum of variations in the slope: the marginal value functions need no modification to restore the incremental preference information if δ_s∗ = 0; otherwise, the DM should reconsider the provided information. Algorithm 2 stops when the DM has no further adjustments for alternatives in A_0 or all non-reference alternatives have necessary assignments. We refer to the final assignments for non-reference alternatives as [C_{L^DM(a_i)}, C_{R^DM(a_i)}], ∀a_i ∈ A\A_R. The details of Algorithm 2 are presented in Section 4.3.

3.4. Selecting representative marginal value functions

The approaches adopted by the ROR take into account the whole set of compatible marginal value functions, and they end with more than one additive value function in most situations. As a consequence, the possible assignments obtained by the ROR are usually non-univocal; especially in the presence of non-monotonic preferences, many non-reference alternatives are assigned to the whole range of classes, which increases the DM's cognitive effort. In this regard, many researchers have proposed procedures for selecting a representative value function and have utilized the obtained sorting model to help the DM assign non-reference alternatives. The proposed approach constrains the sum of the variations in the slope so that only a subset of compatible marginal value functions is taken into account at each iteration. Thus, the proposed approach is expected to produce more supportive assignment results.
In this section, given the same assignment examples, we compare the assignment results produced by different procedures for selecting a representative value function, with and without constraining the sum of variations in the slope (note that γ∗ is the maximal sum of slope variations after resolving inconsistency). We account for:

• UTADIS I, which adapts the proposal of Zopounidis and Doumpos (2001). The purpose is to select a model that has the maximum distance between two thresholds, i.e.:

  Max ε′
  s.t.: E_DM^{A_R}, E_{Base-Nor}, E_{Slope},
        ε ≤ ε′,
        Σ_{j∈J} Σ_{k∈I_j} γ_j^k ≤ γ∗,
        U(a_i) ≥ b_{L^DM(a_i)−1}, U(a_i) + ε′ ≤ b_{R^DM(a_i)}, a_i → [C_{L^DM(a_i)}, C_{R^DM(a_i)}], ∀a_i ∈ A\A_R,

  where ε is the pre-defined value used in the previous mathematical models.

• UTADIS Dis, which adapts UTAMP2 to non-monotonic criteria sorting problems (Beuthe & Scannella, 2001). The purpose is to obtain a set of marginal value functions that maximizes the minimum difference between the marginal values at consecutive characteristic points, i.e.:

  Max κ′
  s.t.: E_DM^{A_R}, E_{Base-Nor}, E_{Slope},
        κ ≤ κ′,
        Σ_{j∈J} Σ_{k∈I_j} γ_j^k ≤ γ∗,
        U(a_i) ≥ b_{L^DM(a_i)−1}, U(a_i) + ε ≤ b_{R^DM(a_i)}, a_i → [C_{L^DM(a_i)}, C_{R^DM(a_i)}], ∀a_i ∈ A\A_R,

  where κ is the pre-defined value in the previous mathematical models and κ′ replaces κ in Eqs. (2.6)–(2.8).

• UTADIS-Par, which minimizes the sum of all variations in the slope. Its purpose is to obtain the simplest marginal value functions, i.e.:

  Min Σ_{j∈J} Σ_{k∈I_j} γ_j^k
  s.t.: E_DM^{A_R}, E_{Base-Nor}, E_{Slope},
        U(a_i) ≥ b_{L^DM(a_i)−1}, U(a_i) + ε ≤ b_{R^DM(a_i)}, a_i → [C_{L^DM(a_i)}, C_{R^DM(a_i)}], ∀a_i ∈ A\A_R.

To evaluate the recommended results with and without the proposed approach, i.e., with and without constraining the sum of variations in the slope, we define the following coefficient:

τ(a_i) = [2 / (|Q|(|Q| − 1)(Max_q{C^q(a_i)} − Min_q{C^q(a_i)} + 1))] Σ_{k=1}^{|Q|} Σ_{j=k+1}^{|Q|} τ_{kj}(a_i),   (3.14)

where τ_{kj}(a_i) = 1 if C^k(a_i) − C^j(a_i) = 0 and τ_{kj}(a_i) = 0 otherwise, with a_i ∈ A. The index τ(a_i) measures the similarity of the assignments obtained by different procedures for selecting a representative value function. In Eq. (3.14), |Q| is the number of procedures and C^q(a_i) is the index of the class to which alternative a_i is finally assigned by the q-th procedure (e.g., one of UTADIS I, UTADIS Dis and UTADIS-Par). (Max_q{C^q(a_i)} − Min_q{C^q(a_i)} + 1) is the range of the indices of the possible classes. The value of τ(a_i) lies within [0, 1]. The more procedures that assign an alternative to the same class, the larger the index; hence the smaller τ(a_i), the more the assignment results for a_i differ across procedures. More specifically, when all procedures obtain different assignments for a_i, τ(a_i) = 0, and when all procedures obtain the same assignment for a_i, τ(a_i) = 1. For example, Appendix D shows that, given 5 procedures, the assignments for alternative a_5 are C_3, C_3, C_4, C_3 and C_4, and those for alternative a_19 are C_3, C_3, C_3, C_2 and C_4. In view of the range of possible assignments, a_19 could be assigned to 3 classes whereas a_5 could only be assigned to 2, so it is easier for the DM to specify a precise assignment for a_5 than for a_19. Thus the recommended results for a_5 are said to be better than those for a_19, because the DM only needs to assign a_5 to one of two possible classes, whereas more effort must be spent on determining the assignment for a_19. According to Eq. (3.14), we have τ(a_5) = 2 × ((1+0+1+0) + (0+1+0) + (0+1) + 0) / (20 × (4−3+1)) = 0.2 > τ(a_19) = 2 × ((1+1+0+0) + (1+0+0) + (0+0) + 0) / (20 × (4−2+1)) = 0.1.
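Eq. (3.14) is straightforward to compute directly; a minimal sketch (the function name is ours, not the paper's):

```python
def tau(classes):
    """Similarity index of Eq. (3.14).
    classes: list of class indices C^q(a_i), one per procedure q."""
    Q = len(classes)
    # number of procedure pairs (k, j), k < j, that agree on the class
    agree = sum(1 for k in range(Q) for j in range(k + 1, Q)
                if classes[k] == classes[j])
    # spread of the assigned class indices: Max - Min + 1
    spread = max(classes) - min(classes) + 1
    return 2.0 * agree / (Q * (Q - 1) * spread)
```

The test reproduces the worked values for a_5 (assignments C_3, C_3, C_4, C_3, C_4) and a_19 (C_3, C_3, C_3, C_2, C_4) given in the text.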
3.5. An extension of the proposed method

The DM is not always capable of providing precise information about x_j∗ on each criterion. In general, the DM may prefer to give an interval [η_j, η̄_j], where x_j∗ ∈ [η_j, η̄_j] and x_j^1 ≤ η_j ≤ η̄_j ≤ x_j^{n_j(A)}, for g_j ∈ G^NM. In this case, we give general versions of Algorithms 1 and 2. For each non-monotonic criterion, the general algorithms set one criterion evaluation in the range [η_j, η̄_j] as x_j∗ and switch it to another evaluation iteratively until all combinations (x_1∗, x_2∗, ..., x_j∗, ..., x_m∗) ∈ X_1∗ × X_2∗ × ... × X_j∗ × ... × X_m∗, with X_j∗ = {x_j^k ∈ X_j | η_j ≤ x_j^k ≤ η̄_j, g_j ∈ G^NM}, are enumerated. In general Algorithm 1, we only adjust assignments for reference alternatives when there are no compatible marginal value functions for any combination. It ends when at least one combination achieves a compatible pair (U, b) with the minimal permitted relaxation of the sum of variations in the slope. Similarly, in general Algorithm 2, we iteratively calculate the possible assignment for each non-reference alternative over all combinations.
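The enumeration performed by the general algorithms is a Cartesian product over the candidate most-preferred levels; a minimal sketch using `itertools.product` (the function name and data layout are illustrative):

```python
from itertools import product

def candidate_combinations(levels):
    """levels: dict mapping a non-monotonic criterion name g_j to the
    iterable of candidate most-preferred levels x_j* inside [eta_j, eta-bar_j].
    Yields one dict {criterion: x_j*} per combination."""
    names = sorted(levels)                       # fix a criterion order
    for combo in product(*(levels[n] for n in names)):
        yield dict(zip(names, combo))
```

For Example 2 below (x_1∗ ∈ {2, 3, 4} and x_2∗ ∈ {1, 2, 3, 4, 5}), this yields the 3 × 5 = 15 combinations that must each be checked for a compatible model.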
Fig. 4. An example for a multiple-peaked value function.
Example 2. Now suppose both g_1 and g_2 in Example 1 are non-monotonic and the DM provides x_1∗ ∈ [2, 4] and x_2∗ ∈ [1, 5] instead of precisely setting x_1∗ = 3 or x_2∗ = 5 as the most preferred level. Then there are 3 × 5 = 15 combinations in total, since x_1∗ ∈ {2, 3, 4} and x_2∗ ∈ {1, 2, 3, 4, 5}. All 15 combinations should be considered when checking the inconsistency and specifying the possible assignments.

3.6. Considering more types of non-monotonicity

In this section, we show how to utilize the algorithms to handle other types of non-monotonicity, i.e., value functions that worsen as the criterion value increases up to a specific value and improve from then on ('U' shape), and value functions with multiple peaks. For value functions of 'U' shape, the constraints are similar to those in Eqs. (2.4) to (2.14), with some changes of notation; we present these constraints in Appendix A. For a value function with multiple peaks, we segment the value function into several single-peaked (or 'U'-shaped) value functions at the points where the preference direction changes; the constraints for normalization and monotonicity are then easier to construct. For example, in Fig. 4 we divide a value function into two parts, each of single-peaked shape. The constraints are presented in Appendix B. However, this process requires more expert knowledge about the locations where the preference direction changes. Note that in most situations, especially in a managerial context, the DM's marginal value functions are naturally considered stable, without too many changes of preference direction. In this regard, the proposed approach can handle the most common practical situations, in which the DM's preference direction changes only once.
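The segmentation of a multiple-peaked value function amounts to cutting at the interior valleys, so that each piece is single-peaked; a minimal sketch over discretized marginal values (the helper name and the numbers are ours, for illustration only):

```python
def split_into_single_peaked(values):
    """values: marginal values at consecutive characteristic points.
    Cut at interior local minima (valleys), where the preference
    direction turns upward again, so each returned index range
    (start, end) covers a single-peaked piece."""
    cuts = [0]
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] < values[i + 1]:
            cuts.append(i)            # valley: start a new single-peaked piece
    cuts.append(len(values) - 1)
    return list(zip(cuts[:-1], cuts[1:]))
```

A two-peaked profile is split into two single-peaked pieces, mirroring the division shown in Fig. 4; a profile with one peak is returned unchanged.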
Although more complicated marginal value functions do exist in some cases (e.g., multiple-peaked shapes) and the proposed approach can be applied to them, they require more notation and expert knowledge. It is therefore easier to introduce the idea of the progressive approach in terms of value functions of the most basic and common type, with less notation and a more straightforward construction of the constraints.

4. Illustrative example

In this section, the proposed progressive decision-aiding approach for MCS problems in the presence of non-monotonic preferences is demonstrated by a basic experiment, i.e., the DM sets specific criteria evaluations as the most preferred levels for the non-monotonic criteria, and an extended experiment, i.e., all criteria evaluations can be the most preferred levels. In Section 4.1, we briefly introduce the related data. In Section 4.2, we check the inconsistency in the assignment examples through Algorithm 1 and then employ Algorithm 2 to assign non-reference alternatives. In Section 4.4, we give the results of implementing general Algorithms 1 and 2. The marginal value functions and results obtained in the basic and extended experiments are compared in Section 4.5.

4.1. Data description

In Ghaderi et al. (2017), three methods for a multiple criteria ranking problem with non-monotonic criteria are compared on the same dataset, which consists of fifteen firms evaluated on three financial ratios. Although the referred data are used, we randomly select five additional firms from the CSMAR¹ database. Twenty firms are involved in the experiment, and the data are presented in Table 2. All firms are classified into four categories: C_1 groups the firms in the worst financial state, C_2 those in a lower-intermediate financial state, C_3 those in an upper-intermediate financial state and C_4 those in the best financial state.

Table 2
Criteria evaluations of firms.

Firm   Cash to      Long term debt and stockholder's   Total liabilities
       total assets equity to fixed assets             to total assets
a1     3.80         2.40                               60.70
a2     5.84         1.96                               63.70
a3     0.04         1.14                               64.26
a4     4.89         2.92                               55.04
a5     0.57         1.72                               64.70
a6     16.70        2.32                               53.29
a7     3.16         4.10                               23.90
a8     25.42        3.35                               59.03
a9     17.99        1.34                               73.84
a10    3.98         3.26                               84.95
a11    0.76         2.74                               84.44
a12    24.16        2.83                               70.51
a13    2.53         2.54                               81.05
a14    35.06        9.56                               61.08
a15    0.72         0.97                               99.67
a16    24.00        2.50                               99.92
a17    8.86         29.06                              47.40
a18    10.58        4.03                               89.64
a19    16.35        3.60                               56.55
a20    1.70         5.92                               85.83

4.2. Resolving inconsistency

Suppose the DM has the following assignment examples: a_2∗ → C_4, a_3∗ → C_2, a_9∗ → C_3, a_10∗ → C_2, a_12∗ → C_1, a_17∗ → C_3. Assume the three criteria are non-monotonic in the basic experiment, i.e., x_1∗ = x_1^11 = 5.84, x_2∗ = x_2^6 = 2.32 and x_3∗ = x_3^6 = 59.03. In this experiment, we set κ = 0.0007 and ε = 0.001.

Iteration t = 1: We begin by checking consistency with the simplest marginal value functions, linearly increasing before x_j∗ and linearly decreasing after x_j∗, i.e., γ_1 = 0. The optimal value of the sum of classification errors is F_1∗ ≠ 0 and |ψ_1| = 2, which indicates that the constraints for the assignment examples are inconsistent.
Fig. 5(a) shows the explanatory marginal value functions at this iteration.

¹ http://www.gtarsc.com/; the stock code for firm a16 is 600519, for a17 600555, for a18 603108, for a19 603222, and for a20 300009.
Fig. 5. Examples for marginal value functions at each iteration.
We solve MIP(1) to find the assignments that must be revised while minimally relaxing the limit on the sum of variations in the slope. δ∗ = 0 and S_1 = {a_12∗, a_17∗} indicate that, if we change the assignments for a_12∗ and a_17∗ at the same time, there is no need to relax the limit on the sum of the variations in the slope. More specifically, the solution of MIP(1) yields a specific value function, the global values of the alternatives whose corresponding v equals 1, and the values of the thresholds. Therefore, we can determine that the new classes for both a_12 and a_17 are C_2. Fig. 5(b) reveals that the marginal value functions are still in the simplest form, i.e., Σ_{j∈J} Σ_{k∈I_j} γ_j^k = 0, if the DM agrees to revise the assignments for both a_12∗ and a_17∗.² Suppose that the DM insists a_12∗ should stay in the previous class C_1 and agrees to change a_17∗ from C_3 to C_2. Thus, SD_1 = {a_12∗} and SA_2 = {a_17∗}. Since SA_2 ≠ SA_1, we set SD_2 = SD_1 and go to the next iteration.

Iteration t = 2: Solving LP(1) with the constraints updated in the last iteration (assign a_17∗ to C_2 and keep a_12∗ in C_1), there are still classification errors because |ψ_2| = 1. In MIP(1), we set v(a_12∗) = 0 and v(a_17∗) = 0 in order not to re-estimate the reference alternatives that have already been evaluated by the DM. The solution of MIP(1) is δ∗ = 0.001362 and S_2 = {a_9∗}, which indicates that we only need to relax the sum of variations in the slope by 0.001362 if we reassign a_9∗ from C_3 to C_2. Suppose the DM refuses to change the assignment for a_9∗; then SD_2 = {a_9∗, a_12∗}, SA_3 = SA_2 = {a_17∗} and S_2 = ∅. Because no revisions are accepted, we go back to step 2 and find other possible revisions. In Fig. 5(c), it can be observed that the marginal value function for the third criterion is not as parsimonious as those in Fig. 5(a) and (b): it has a clear variation in the slope around the criterion value of 85.

² Although the marginal value functions in Fig. 5(b) differ from those in (a), they are all linearly increasing or decreasing except at the point x_j∗.
Setting v(a_9∗) = 0 and solving MIP(1), we obtain a new set S_2 = {a_10∗} and δ∗ = 0.001368. The optimal value is greater than 0.001362 because a_9∗ is no longer considered in MIP(1). Suppose the DM still refuses; then we set SD_2 = {a_9∗, a_10∗, a_12∗} and solve MIP(1) for the third time in this iteration. Reference alternative a_3∗ is selected this time, and the DM agrees to change its assignment, so we update SA_3 = {a_3∗, a_17∗}. Since SA_3 ≠ SA_2, we set γ_3 = γ_2 + δ∗ = 0.003946 and SD_3 = SD_2.

Iteration t = 3: In this iteration, there are no classification errors for the newly revised assignment examples. The adjusted assignment examples are a_2∗ → C_4, a_3∗ → C_3, a_9∗ → C_3, a_10∗ → C_2, a_12∗ → C_1 and a_17∗ → C_2. We changed the assignments for a_3∗ and a_17∗ from their previous classes to adjacent classes but kept the assignments for the reference alternatives in SD_1, SD_2 and SD_3 unchanged. Although the shape of the marginal value functions becomes more complex, the variation is very minor because the DM compromised by revising 1/3 of the assignment examples. Owing to the trade-off between the complexity of the marginal value functions and the capacity for restoring all preference information, the fewer the assignment examples revised, the more the sum of variations in the slope must be relaxed, leading to a more complicated model. Fig. 5(a), (c) and (d) present the marginal value functions obtained by minimizing the total classification errors at each iteration. It is important to highlight that the marginal value functions are gradually modified from a parsimonious form to a more complex one.

4.3. Assigning non-reference alternatives

Algorithm 2 determines the possible assignments for non-reference alternatives and permits the DM to add incremental preference information based on the current results. We therefore continue with Algorithm 2 after resolving the inconsistency. The adjusted assignment examples are a_2∗ → C_4, a_3∗ → C_3, a_9∗ → C_3, a_10∗ →
Table 3
Assignments at the first and second iterations.

Firm  L1  R1  L2  R2    Firm  L1  R1  L2  R2
a1    3   4   3   4     a11   2   2   2   2
a2    4   4   4   4     a12   1   1   1   1
a3    3   3   3   3     a13   2   2   2   2
a4    1   1   1   1     a14   2   2   2   2
a5    3   4   3   4     a15   1   1   1   1
a6    3   4   3   3     a16   1   1   1   1
a7    3   4   3   3     a17   2   2   2   2
a8    1   1   1   1     a18   2   2   2   2
a9    3   3   3   3     a19   3   4   3   3
a10   2   2   2   2     a20   1   1   1   1
C_2, a_12∗ → C_1, a_17∗ → C_2, and the sum of variations in the slope is γ∗ = 0.003946.

Iteration s = 1: At the first iteration, A_0 = {a_1, a_4, a_5, a_6, a_7, a_8, a_11, a_13, a_14, a_15, a_16, a_18, a_19, a_20}, and we calculate the possible assignment for each a_i ∈ A_0. The results are shown in Table 3. We observe that 9 of the non-reference alternatives have a necessary assignment and 5 are assigned to a range of two classes. The DM can place some non-reference alternatives into a specific class, which is regarded as newly introduced preference information that helps determine the assignments for the other non-reference alternatives. Suppose A_1 = {a_7, a_19} and the DM states that a_7 → C_3 and a_19 → C_4. We solve the following linear program to check whether these two alternatives can be assigned simultaneously:

Min δ_s
s.t.: E_DM^{A_R}, E_{Base-Nor}, E_{Slope},
      Σ_{j∈J} Σ_{k∈I_j} γ_j^k ≤ 0.003946 + δ_s,
      U(a_7) ≥ b_2, U(a_7) + ε ≤ b_3,
      U(a_19) ≥ b_3, U(a_19) + ε ≤ b_4.

The optimal value δ_1∗ ≠ 0, which indicates that we cannot assign a_7 → C_3 and a_19 → C_4 simultaneously without modifying the marginal value functions. Thus, the DM should pursue further analysis or remove one of the two statements. Suppose the DM removes a_19 → C_4; the incremental preference information is then a_7 → C_3. Set A_0 = A_0\A_1 and go to the next iteration.

Iteration s = 2: In this iteration, the updated information from the DM is: a_2∗ → C_4, a_3∗ → C_3, a_9∗ → C_3, a_10∗ → C_2, a_12∗ → C_1, a_17∗ → C_2 and a_7 → C_3, with the sum of variations in the slope γ∗ = 0.003946. The results are shown in Table 3. We observe that the possible assignments for a_6 and a_19 are narrower because more preference information has been introduced. However, two non-reference alternatives are still assigned to more than one class. The DM finds it difficult to specify the assignments for these two alternatives, so Algorithm 2 stops.

4.4. Extended experiment

In this experiment, we utilize general Algorithms 1 and 2 to enumerate all situations in which any criterion evaluation can be set as the most preferred level. Thus, there are 20 × 20 × 20 combinations. Given the same initial preference information and parameters, we resolve the inconsistency and assign the non-reference alternatives. Table 4 presents the possible assignments for non-reference
Table 4
Assignment results obtained by the general algorithms.

Firm  L  R    Firm  L  R
a1    3  4    a11   1  2
a2    4  4    a12   1  1
a3    3  3    a13   1  2
a4    1  1    a14   2  4
a5    4  4    a15   1  1
a6    2  4    a16   1  1
a7    3  3    a17   2  2
a8    1  4    a18   1  2
a9    3  3    a19   2  4
a10   2  2    a20   1  1
alternatives after resolving the inconsistency. Compared with the results obtained by the basic algorithm in Table 3, the general algorithms obtain fewer necessary assignments because any criterion evaluation can be the most preferred level, which expands the set of compatible marginal value functions. One can observe that only one alternative can be assigned to all four classes, which means that adding constraints on the shape of the value functions decreases the number of possible classes to which an alternative can be assigned.

4.5. Selection of a representative value function

In this section, given the same assignment examples and different parameters, including ε for the minimal distance between two consecutive class thresholds, κ for the minimal difference between two consecutive marginal values and γ∗ for the minimal relaxation of the sum of variations in the slope, we compare the representative marginal value functions obtained in the basic and extended experiments. Table 5 summarizes the parameters of the different procedures for selecting representative marginal value functions, i.e., UTADIS I with κ = 0.0001 and κ = 0.0005 (denoted UTADIS I-0.0001 and UTADIS I-0.0005, respectively), UTADIS Dis with ε = 0.01 and ε = 0.001 (denoted UTADIS Dis-0.01 and UTADIS Dis-0.001, respectively), and UTADIS-Par. Fig. 6 demonstrates that the proposed approach can obtain more stable marginal value functions in both the basic and extended experiments. In Fig. 6(a), since the DM gives a fixed most preferred level for each criterion, the marginal value functions have similar shapes, and the only differences concern the greatest marginal values. At each step of the algorithm, only a minimal increase in the degree of complexity is allowed.
As a consequence, although different procedures for selecting a representative value function are utilized, the obtained marginal value functions have fewer slope variations and similar shapes. More specifically, the figures clearly suggest that all five procedures identify u(x_2∗) as the most important criterion because it has the greatest share among the three criteria. However, different procedures assign different importance to the other two criteria; for example, according to the results obtained by UTADIS-Par, the third criterion is more important than the first one, giving u(x_2∗) ≥ u(x_3∗) ≥ u(x_1∗), whereas the marginal value function for the third criterion estimated by UTADIS Dis-0.01 is not relevant, with a nearly zero value over the entire scale. There is a similar problem with the first marginal value function estimated by UTADIS I-0.0001. In Fig. 6(b), it can be observed that, although the most preferred levels obtained by different procedures differ, most of the marginal value functions have similar shapes. For instance, for the first criterion, UTADIS Dis-0.001 determines x_1∗ = 2.53 whereas the others set x_1∗ = 5.84. However, the marginal value functions obtained by UTADIS Dis-0.001, UTADIS I-0.0005 and UTADIS-Par
Table 5
Parameters for selection of representative marginal value functions in the basic and extended experiments.

              UTADIS I-0.0001  UTADIS I-0.0005  UTADIS-Par  UTADIS Dis-0.01  UTADIS Dis-0.001
Basic κ       0.0001           0.0005           0.0005      −                −
Basic ε       −                −                0.001       0.01             0.001
Basic γ∗      0.004            0.004            −           0.004            0.004
Extended κ    0.0001           0.0005           0.0005      −                −
Extended ε    −                −                0.0001      0.01             0.001
Extended γ∗   0.0016           0.0016           −           0.0016           0.0016
Fig. 6. Marginal value functions obtained by five procedures in the basic and extended experiments.
have the same shape when the criterion evaluations are greater than 5.84, while the marginal value functions obtained by UTADIS Dis-0.01 and UTADIS I-0.0001 have similar shapes over the whole criterion scale. It is interesting to note that all five procedures obtain a linearly decreasing marginal value function for the second criterion; however, the marginal values of x_2∗ differ. The marginal value functions for the third criterion estimated by UTADIS-Par and UTADIS Dis-0.01 show only minor differences between the marginal values of x_3^1 and x_3∗; thus, it is not easy to discriminate between the marginal values of the criterion evaluations with these two procedures, while the other three marginal value functions are more discriminatory. The marginal value functions obtained by the different procedures in both the basic and extended experiments have no sudden falls over the whole scale, which makes them less complex and easier to interpret. All obtained marginal value functions for the first and third criteria are quadratic and smooth; however, their importance differs. Moreover, the second criterion is the most important one in both the basic and extended experiments. Table 6 presents the assignment results and the corresponding τ(a_i) to compare the results obtained with and without the proposed approach. It can be observed that 8/14 non-reference alternatives (i.e., a_1, a_5, a_6, a_7, a_11, a_13, a_14 and a_15) obtain a greater value of τ(a_i) with the proposed approach, which indicates that constraining the sum of variations in the slope helps to obtain similar assignments for the same alternative even when different procedures for the selection of a representative value function are utilized. The global values and assignments obtained with the use of different representative value functions in the basic and extended experiments are presented in Appendix D and Appendix E, respectively.
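All the marginal value functions compared here are piecewise linear over characteristic points, so evaluating one reduces to linear interpolation; a minimal sketch with made-up characteristic points and values (a single-peaked function peaking at x∗ = 5):

```python
from bisect import bisect_right

def marginal_value(x, points, values):
    """Interpolate u_j(x) on characteristic points (sorted ascending).
    Outside the scale the function is clamped to the end values."""
    if x <= points[0]:
        return values[0]
    if x >= points[-1]:
        return values[-1]
    i = bisect_right(points, x)                       # first point strictly above x
    t = (x - points[i - 1]) / (points[i] - points[i - 1])
    return values[i - 1] + t * (values[i] - values[i - 1])
```

With `points = [0, 5, 10]` and `values = [0, 1, 0]`, the function rises linearly to the most preferred level at 5 and falls linearly afterwards, the simplest shape the approach starts from.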
Table 6
Comparison of the assignments resulting from different procedures with and without the proposed approach (C1(ai), C2(ai), C3(ai) and C4(ai) are obtained by UTADIS I-0.0001, UTADIS I-0.0005, UTADIS Dis-0.01 and UTADIS Dis-0.001, respectively).

Firm   C1(ai)   C2(ai)   C3(ai)   C4(ai)   τ(ai) (without)   τ(ai) (with)
a1     4        4        3        3        0.17              0.2
a2     4        4        4        4        1                 1
a3     3        3        3        3        1                 1
a4     1        1        1        1        1                 0.3
a5     1        1        3        3        0.11              0.15
a6     2        2        3        2        0.25              0.3
a7     1        2        3        3        0.06              0.1
a8     1        1        1        1        1                 1
a9     3        3        3        3        1                 1
a10    2        2        2        2        1                 1
a11    1        1        2        2        0.17              0.2
a12    1        1        1        1        1                 1
a13    1        1        2        2        0.17              1
a14    1        1        2        2        0.17              0.25
a15    1        1        1        2        0.25              1
a16    2        2        2        2        1                 1
a17    2        2        2        2        1                 1
a18    1        1        1        1        1                 0.2
a19    1        1        3        3        0.11              0.1
a20    1        1        1        1        1                 1
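The count of non-reference alternatives improved by the proposed approach can be reproduced directly from the τ(ai) columns of Table 6. The snippet below is an illustrative sketch; the data are copied from the table and the variable names are ours.

```python
# tau(a_i) without / with the proposed approach, for a1..a20 (Table 6).
tau_without = [0.17, 1, 1, 1, 0.11, 0.25, 0.06, 1, 1, 1,
               0.17, 1, 0.17, 0.17, 0.25, 1, 1, 1, 0.11, 1]
tau_with = [0.2, 1, 1, 0.3, 0.15, 0.3, 0.1, 1, 1, 1,
            0.2, 1, 1, 0.25, 1, 1, 1, 0.2, 0.1, 1]

# Alternatives whose assignment agreement tau(a_i) increases once the
# slope-variation constraint of the proposed approach is applied.
improved = [f"a{i + 1}"
            for i, (wo, wi) in enumerate(zip(tau_without, tau_with))
            if wi > wo]
print(improved)  # → ['a1', 'a5', 'a6', 'a7', 'a11', 'a13', 'a14', 'a15']
```

This recovers exactly the eight alternatives listed in the text.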
Although there is usually more than one procedure for the selection of a representative value function, we prefer to use the UTADIS-Par method. On the one hand, by controlling the slope variations, we can obtain a model that restores the preference information in the simplest form. The number of compatible value
M. Guo, X. Liao and J. Liu / Expert Systems With Applications 123 (2019) 1–17
13
Fig. 7. Estimated marginal value functions in Despotis and Zopounidis (1995).
Fig. 8. Estimated marginal value functions in Kliegr (2009).
functions is decreased, so there are fewer possible assignments but more necessary assignments. It may be easier for the DM to make decisions because more determined assignments are recommended. On the other hand, a simpler explanation is more likely to be the correct one (Słowiński et al., 2013). Moreover, a parsimonious value function is easier to interpret and more suitable for a DM whose preference is naturally assumed to be stable.
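The link between the size of the compatible set and the determinacy of the recommendation can be illustrated with a small simulation. The sketch below is purely illustrative: the sampled weight vectors, class thresholds and alternative data are hypothetical stand-ins for the compatible set characterized by the paper's LP constraints, not the paper's actual computation.

```python
import random

def classify(value, thresholds):
    # Return the class index k such that thresholds[k] <= value < thresholds[k+1].
    for k in range(len(thresholds) - 1):
        if thresholds[k] <= value < thresholds[k + 1]:
            return k + 1
    return len(thresholds) - 1  # value equals the upper bound

def possible_assignments(alts, weight_samples, thresholds):
    # Collect, for each alternative, the classes obtained under every sampled
    # (compatible) model: a singleton set means the assignment is necessary.
    result = {}
    for name, evals in alts.items():
        classes = set()
        for w in weight_samples:
            value = sum(wi * xi for wi, xi in zip(w, evals))
            classes.add(classify(value, thresholds))
        result[name] = classes
    return result

random.seed(42)
# Three normalized criteria evaluations per alternative (hypothetical data).
alts = {"a1": [0.9, 0.8, 0.85], "a2": [0.5, 0.2, 0.7], "a3": [0.1, 0.05, 0.1]}
# Sample additive models (weights summing to 1) as a crude stand-in for the
# set of compatible value functions.
samples = []
for _ in range(200):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    samples.append([wi / s for wi in w])
poss = possible_assignments(alts, samples, [0.0, 1 / 3, 2 / 3, 1.0])
necessary = [a for a, cls in poss.items() if len(cls) == 1]
```

Shrinking the sampled set (a smaller, more constrained compatible set) can only shrink each possible-assignment set, so more alternatives end up with a single, determined class — the effect described above.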
4.6. A comparison with Despotis UTA and UTA-NM

We adapt Despotis UTA and UTA-NM to sorting problems in order to compare them with the proposed approach. Given the same preference information, the value functions obtained by the basic and extended approaches, Despotis UTA and UTA-NM are presented in Figs. 6–8, respectively. According to the results of Despotis UTA, the marginal value function for the second criterion is monotonically decreasing, while the first and third marginal value functions are non-monotonic. Note that the marginal value function for the third criterion has a complicated shape with many sharp changes of slope. Unlike the results of Despotis UTA, the marginal value function for the first criterion obtained by UTA-NM has a quadratic shape but nearly zero value over the entire scale. Since UTA-NM takes the shape penalization into account in its model, the marginal value functions for the second and third criteria are more parsimonious than those obtained by Despotis UTA. However, there is a sudden fall around the criterion value of 70, which may not faithfully represent the DM's actual preference. Both Despotis UTA and the proposed basic approach require the DM's information about the location of the criterion value at which the monotonicity direction changes, but the marginal value functions in Fig. 6 are smoother and have fewer sharp changes of slope, making them easier to interpret. As for UTA-NM, although it can estimate marginal value functions of any shape, it prefers functions with fewer monotonicity direction changes. That is why the marginal value function for the second criterion has a
similar shape to that obtained by the proposed extended approach. The proposed approach, Despotis UTA and UTA-NM can all perfectly assign the alternatives to the desired categories; however, the marginal value functions estimated by Despotis UTA and UTA-NM are different. Compared to Despotis UTA, the proposed approach can handle the situation where the DM has no prior knowledge about the most preferred level, and it considers the parsimony of the model. Compared to UTA-NM, although the proposed approach only considers marginal value functions of a specific shape, it obtains similar but smoother results. Moreover, the proposed approach requires less computational cost than UTA-NM.

5. A simulation experiment

In this section, we verify the benefit of the proposed approach through a simulation experiment that examines the relationship between the complexity of the model and the number of obtained necessary assignments. As noted in Kadziński et al. (2017), there is a trade-off between the expressiveness of the marginal value functions and the number of necessary relations in a multiple criteria ranking problem. In addition, their results show that a simpler model, especially a linear one, can obtain more necessary binary relations than a complex one, which reduces the DM's cognitive effort. We implement an experimental analysis to further demonstrate that the proposed approach enhances the ability to find more necessary assignments in MCS problems with non-monotonic criteria.

5.1. Experiment setup

We use the slope relaxation to measure the simplicity of the marginal value functions, and two indicators to measure the model's ability to find necessary assignments: the absolute number of necessary assignments (the total number of obtained necessary assignments) and the relative number of necessary assignments, defined as (number of obtained necessary assignments − number of assignment examples) / (total number of alternatives − number of assignment examples).
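The two indicators can be computed directly from the counts. The helper below is an illustrative sketch (the function and variable names are ours, not from the paper):

```python
def absolute_necessary(n_necessary: int) -> int:
    # Absolute indicator: simply the total count of necessary assignments.
    return n_necessary

def relative_necessary(n_necessary: int, n_examples: int,
                       n_alternatives: int) -> float:
    # Relative indicator: share of non-reference alternatives whose
    # assignment is necessary.  Assignment examples are excluded from both
    # numerator and denominator because they are fixed by the DM.
    return (n_necessary - n_examples) / (n_alternatives - n_examples)

# Example: 10 alternatives, 4 assignment examples, 7 necessary assignments.
print(relative_necessary(7, 4, 10))  # → 0.5
```

A relative value of 1 thus means every non-reference alternative received a determined assignment.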
The more necessary assignments there are, the more determined the obtained results and the easier it is for the DM to make decisions. The
Table 7
Problem settings in the experimental analysis.

#non-monotonic criteria: 0, 1, 2, 3
#alternatives: 8, 10, 12, 15, 20
#assignment examples: 3, 4, 5, 6
#classes: 2, 3, 4, 5
Slope relaxations: 1.0, 1.25, 1.5, 1.75, 2.0
slope relaxation β ≥ 1.0 is a multiplier of the sum of the variations in the slope. We increase the degree of complexity of the marginal value functions by amplifying this parameter. The constraint Σ_{j,k} γ_j^k ≤ β × γ* is added to LP(2) before we calculate the number of necessary assignments. The other experiment settings are presented in Table 7, i.e., the number of non-monotonic criteria (Non), the number of alternatives (N), the number of assignment examples (AE), the number of pre-defined classes (p) and the slope relaxation (β). For brevity, only the monotonicity of three criteria is considered in this experiment. The experiment has been conducted as follows:

Step 1: The criteria evaluations of N alternatives are randomly drawn from the interval [0, 100] according to a uniform distribution. To guarantee that the assignment examples do not cause any inconsistency, we generate the following polynomial marginal value functions as the actual ones:
u(x) = p_0 + Σ_{d=1}^{D} p_d (x/100)^d    (5.1)
where p_0, p_1, ..., p_D ∈ [−1, 1] are random coefficients generated from a uniform distribution. Such a procedure is introduced in Ghaderi et al. (2017). Moreover, we normalize the global values into [0, 1] so that the reference alternatives can be categorized according to the pre-defined thresholds (in this study, the thresholds are 0, 1/P, 2/P, ..., (P − 1)/P, 1).

Step 2: There are 320 different problem settings considering the number of alternatives, the number of non-monotonic criteria, the number of pre-defined classes and the number of assignment examples. In all decision problems, we iteratively amplify β, so that 1600 problems are considered in total. We set all criteria evaluations as the characteristic points and η_j = x_j^1, η̄_j = x_j^{n_j(A)}.

Step 3: We initially solve min Σ_{a*∈A^R} (σ^+(a*) + σ^−(a*)), s.t. E^Sort, to determine γ*, and then solve LP(2) to calculate the number of necessary assignments with the additional constraint Σ_{j,k} γ_j^k ≤ β × γ*; in case β × γ* = 0, we set γ* = 0.000001. For each problem setting, we repeat this procedure 100 times.

The platform for conducting the experimental analysis is a Windows PC with a 3.6 GHz CPU and 8 GB of RAM. We solve all optimization problems using CPLEX 12.7 and implement all procedures in Java.

5.2. Experiment results and analysis

We analyze the relationship between the slope relaxation and the number of obtained necessary assignments in this section. For brevity, we present the most interesting results. In Fig. 9(a), we present the average number of necessary assignments with respect to the number of non-monotonic criteria, with 4 assignment examples, 3 classes and 10 alternatives. As noted in Kadziński et al. (2017), the number of necessary inferences decreases when the number of criteria increases. However, in Fig. 9(a), it is apparent that the proposed approach obtains more necessary assignments when the number of non-monotonic criteria increases, which indicates that the proposed approach should be more efficient at obtaining determined results in an MCS problem with non-monotonic criteria. Moreover, one can observe
that a simpler model can obtain more necessary assignments than a complex one no matter how many non-monotonic criteria are involved. In particular, when the slope relaxation is greater than 1.5, the changes in the obtained average number of necessary assignments are minor. This is due to the looser constraints on the shape of the marginal value functions, which considerably expand the compatible set. Fig. 9(b) assesses how the number of necessary assignments is influenced by different numbers of assignment examples. We observe that the average relative number of necessary assignments exhibits a clearly increasing trend when 3 and 4 assignment examples are given. Obviously, the more assignment examples provided, the more constraints on the set of compatible marginal value functions, which makes the inference of necessary assignments easier. However, 5 and 6 assignment examples yield very similar relative numbers of necessary assignments, because as more assignment examples are given, the number of uncertain alternatives decreases. Another interesting aspect is the marginal effect on the ability to find necessary assignments. For example, when the slope relaxation is 1.0, the increase in the number of necessary assignments diminishes as the number of provided assignment examples grows. This indicates that it is not always effective to let the DM provide as many assignment examples as possible. Fig. 9(c) shows the relationship between the number of alternatives and the slope relaxation with 3 non-monotonic criteria, 3 classes and 4 assignment examples. Apparently, given the constraint on the sum of variations of the slopes, the more alternatives involved, the greater the average relative number of necessary assignments that can be obtained.
However, the impact of the number of alternatives is rather marginal when the slope relaxation is greater than or equal to 1.25 (e.g., at slope relaxation 1.25, 20 alternatives yield a smaller average relative number of necessary assignments than 10, 12 or 15 alternatives). Because we set all criteria evaluations as characteristic points, more alternatives bring more characteristic points, which increases the degree of complexity of the marginal value functions. Thus, the change in the number of necessary assignments is minor although more alternatives are taken into account. Note that the proposed approach obtains more necessary assignments when the slope relaxation is equal to 1.0. This indicates that constraining the sum of variations in the slope can mitigate the impact of the increasing degree of complexity. Finally, Fig. 9(d) summarizes the results of the experiment in which the number of classes varies from 2 to 5. It is obvious that, given the same number of assignment examples, the more classes there are, the fewer necessary assignments will be obtained. We explain this phenomenon as follows. More classes often make it possible to classify an alternative into more than one class, especially for alternatives with global values around the class thresholds. Moreover, more classes bring more constraints on the thresholds; thus, more preference information is required. For example, four assignment examples are adequate to reconstruct the DM's preference in a simple two-class case; however, they are inadequate for a five-class case, since at least one class then has no reference alternatives, so that its thresholds are set arbitrarily. In summary, the results experimentally demonstrate the benefit of using simpler value functions. By constraining the variations of
Fig. 9. Relationship between the necessary assignments and the slope relaxation with respect to (a) the number of non-monotonic criteria, (b) the number of assignment examples, (c) the number of alternatives and (d) the number of classes. (The x-axis is the slope relaxation.)
slope, the number of compatible value functions is decreased, so that the number of possible assignments decreases and the number of necessary assignments increases. Fewer possible assignments result in more determined recommended assignments for each alternative. Therefore, the proposed approach is more helpful for the DM to make decisions, and the obtained value function is easier to interpret (Słowiński et al., 2013). Moreover, according to the simulation results, the following practical advice and some intuitive implications for the decision-aiding process are derived:

• When the potential decision model is extremely complicated, for example, when more alternatives are to be assigned or some criteria are non-monotonic, the set of compatible value functions will be massive. The possible assignment intervals for some alternatives could then cover the whole range of the classes. This is a trivial solution for the DM (for example, when 4 classes are taken into account, a trivial solution tells the DM that an alternative could be assigned to any of the 4 classes, which is useless). To avoid it, we can add constraints on the slopes so that a simpler model is used first. Then we can relax the constraints and utilize more complicated models step by step. In this way, the possible assignment intervals fall in a narrower range, and the decision process becomes more helpful because the DM faces fewer classes to which the alternatives could possibly be assigned (see Fig. 9(a) and (c)).

• It is not always beneficial to encourage the DM to provide as many assignment examples as possible. Although providing more assignment examples could find more necessary assignments, the efficiency of the decision-making process is decreased because the DM spends more effort on providing preference information while relatively fewer additional necessary assignments are found (see Fig. 9(b)).

• When more classes are taken into account, we could start with the simplest problem, which assigns alternatives into only two classes, and iteratively consider more classes. In this way, the DM may not be overwhelmed when many pre-defined classes are presented to him/her (see Fig. 9(d)).
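The first recommendation — start from the most constrained model and relax it only on demand — can be sketched as a simple control loop. The sketch below is hypothetical: `is_consistent` stands in for the feasibility check of the LP with the slope constraint, and the `required_beta` field is an artificial proxy for how hard the DM's examples are to restore.

```python
def is_consistent(beta, assignment_examples):
    # Placeholder for checking whether a model satisfying
    # sum_{j,k} gamma_j^k <= beta * gamma_star can restore the DM's
    # assignment examples; here we pretend the examples become
    # representable once beta reaches a known threshold.
    return beta >= assignment_examples["required_beta"]

def progressive_relaxation(assignment_examples,
                           betas=(1.0, 1.25, 1.5, 1.75, 2.0)):
    # Use the most constrained (simplest) model that can still restore
    # the assignment examples, relaxing the slope constraint step by step.
    for beta in betas:
        if is_consistent(beta, assignment_examples):
            return beta
    return None  # even the loosest model cannot restore the examples
```

For instance, `progressive_relaxation({"required_beta": 1.5})` stops at β = 1.5 rather than jumping straight to the most expressive model.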
6. Conclusion

In this paper, we proposed a progressive decision-aiding approach for MCS problems in the presence of non-monotonic preferences. The DM provides some assignment examples and the most preferred levels for the non-monotonic criteria as preference information. The approach employs additive value functions as the preference model and uses the disaggregation-aggregation paradigm to infer the model from the preference information. To cope with the difficulty of MCS problems with non-monotonic criteria, it employs an interactive algorithm to resolve inconsistency. The algorithm initially assumes that the non-monotonic marginal value functions are linearly increasing up to the most preferred level and then linearly decreasing, and that the monotonic marginal value functions are either linearly increasing or decreasing. It assists the DM in deciding either to change the assignments for reference alternatives causing inconsistency or to modify the marginal value functions by increasing their degree of complexity. The assignments for non-reference alternatives are obtained by another
algorithm that iteratively takes into account the additional information provided by the DM. The ultimate sorting model is obtained by UTADIS-Par, which minimizes the sum of the variations in the slope. The experiments demonstrate that the proposed approach can obtain smoother marginal value functions and more determined recommendations (more necessary assignments) in an MCS problem in the presence of non-monotonic preferences. We envisage further research in several directions. First, we expect to infer marginal value functions of more complex shapes, such as S-type functions and polynomial functions (Sobrie, Gillis, Mousseau, & Pirlot, 2018). These types of marginal value functions are easier to interpret because their first and second (partial) derivatives relate to the DM's preferences. Second, we hope to extend the proposed approach to other forms of preference models, including outranking relations. Moreover, we want to make the proposed approach more efficient when addressing MCS problems with more alternatives and criteria. Furthermore, the idea underlying the proposed approach, i.e., resolving inconsistency and assigning non-reference alternatives in a progressive process, can be applied to the context of group decision-making; such an idea constructs a learning process that helps the DM determine his or her preference. Finally, we plan to conduct more simulation experiments to verify the applicability of different procedures for selecting a representative value function in MCS problems with non-monotonic criteria.

CRediT authorship contribution statement

Mengzhuo Guo: Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Conceptualization, Writing - original draft, Writing - review & editing. Xiuwu Liao: Project administration, Resources, Supervision, Validation, Funding acquisition, Conceptualization, Writing - review & editing.
Jiapeng Liu: Formal analysis, Funding acquisition, Methodology, Software, Conceptualization, Writing - review & editing.

Acknowledgement

The research is supported by the National Natural Science Foundation of China (#91546119, #71701160) and the China Postdoctoral Science Foundation (#2017M623201). We are grateful to the anonymous reviewers for their constructive and detailed comments, which helped us improve the previous version of the paper.

Appendix A. Constraints for value functions in 'U' shape

Suppose the marginal value functions for g_j, ∀g_j ∈ G^NM, are in 'U' shape. For normalization:

u_j(x_{j,*}) = u_j(x_j^1), u_j(x_j^*) = u_j(x_j^{n_j(A)}), ∀g_j ∈ G^M
u_j(x_{j,*}) = 0, Σ_{j=1}^{m} u_j(x_j^*) = 1, ∀j ∈ J

Now we assume the DM can provide information about x_{j,*} for the value functions in 'U' shape instead of x_j^*. In order that u_j(x_j^k) > u_j(x_j^{k+1}) for x_j^k > x_j^{k+1} ≥ x_{j,*}, and u_j(x_j^k) < u_j(x_j^{k+1}) for x_{j,*} ≥ x_j^k > x_j^{k+1}, the constraints are constructed as follows:

u_j(x_j^k) ≥ u_j(x_j^{k+1}) + κ, k ∈ {1, 2, ..., n_j(A) − 1} | x_j^{k+1} ≤ x_{j,*}, g_j ∈ G^NM
u_j(x_j^{k+1}) − κ ≥ u_j(x_j^k), k ∈ {1, 2, ..., n_j(A) − 1} | x_j^k ≥ x_{j,*}, g_j ∈ G^NM

As for the characteristic point with the greatest marginal value, we cannot fix x_j^* at either x_j^1 or x_j^{n_j(A)}. To normalize the greatest marginal values, we provide the following constraints:

u_j(x_j^*) ≥ u_j(x_j^1), u_j(x_j^*) ≥ u_j(x_j^{n_j(A)}), ∀g_j ∈ G^NM
u_j(x_j^*) ≤ u_j(x_j^1) + M y_j^1, u_j(x_j^*) ≤ u_j(x_j^{n_j(A)}) + M y_j^{n_j(A)}, ∀g_j ∈ G^NM
y_j^1, y_j^{n_j(A)} ∈ {0, 1}, y_j^1 + y_j^{n_j(A)} ≤ 1, ∀g_j ∈ G^NM

Analogously to the analysis in Section 2, when y_j^O = 0, O ∈ {1, n_j(A)}, the corresponding u_j(x_j^O) = u_j(x_j^*).
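The 'U'-shape conditions above can be checked numerically for a candidate marginal value function evaluated at its characteristic points. The verifier below is an illustrative sketch of the two monotonicity constraint families (the function name and example data are ours); it is not the MILP formulation itself.

```python
def is_u_shaped(points, values, valley, kappa=1e-6):
    # points: sorted characteristic points x_j^1 < ... < x_j^{n_j(A)}
    # values: candidate marginal values u_j(x_j^k) at those points
    # valley: the least preferred level x_{j,*} supplied by the DM
    # Check: strictly decreasing (by at least kappa) up to the valley,
    # strictly increasing after it -- mirroring the two constraint
    # families of Appendix A.
    for k in range(len(points) - 1):
        if points[k + 1] <= valley:        # left branch: decreasing
            if values[k] < values[k + 1] + kappa:
                return False
        elif points[k] >= valley:          # right branch: increasing
            if values[k + 1] < values[k] + kappa:
                return False
    return True

# A U-shaped candidate with valley at 50 passes ...
print(is_u_shaped([0, 25, 50, 75, 100], [1.0, 0.4, 0.0, 0.5, 0.9], 50))
# ... while a monotonically increasing one violates the left branch.
print(is_u_shaped([0, 25, 50, 75, 100], [0.0, 0.2, 0.4, 0.6, 0.8], 50))
```

In the MILP itself the same conditions appear as linear constraints with the discrimination threshold κ, so the check mirrors feasibility of one candidate solution.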
Appendix B. Constraints for multiple-peaked value functions

Suppose there are L intervals and each interval contains a single-peaked value function. Therefore, there should be L peak points (e.g., x_j^{*1} and x_j^{*2} in Fig. 4) and L + 1 valley points (e.g., x_{j,*1}, x_{j,*2} and x_{j,*3} in Fig. 4). In some situations, the number of valley points could be L, which is a simpler situation in which one of the segmented value functions is monotonic. Thus, we only consider the situation where all segmented value functions are non-monotonic (as in Fig. 4). Analogously to the constraints in Section 2, we provide the following constraints for normalizing the greatest and least marginal values:

u_j(x_j^*) ≥ u_j(x_j^{*l}), u_j(x_{j,*}) ≤ u_j(x_{j,*l}), ∀g_j ∈ G^NM, ∀l ∈ {1, 2, ..., L}
u_j(x_j^*) ≤ u_j(x_j^{*l}) + M y_j^{*l}, u_j(x_{j,*}) ≥ u_j(x_{j,*l}) − M y_j^{l,*}, ∀g_j ∈ G^NM, ∀l ∈ {1, 2, ..., L}
y_j^{*l} ∈ {0, 1}, Σ_{l=1}^{L} y_j^{*l} ≤ L − 1, ∀g_j ∈ G^NM
y_j^{l,*} ∈ {0, 1}, Σ_{l=1}^{L} y_j^{l,*} ≤ L, ∀g_j ∈ G^NM
The other constraints for monotonicity are straightforward and similar to those in Section 2, so we do not introduce them here. In this way, we can apply our algorithm when the value functions have a multiple-peaked shape.

Supplementary material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.eswa.2019.01.033.

References

Almeida-Dias, J., Figueira, J. R., & Roy, B. (2010). Electre Tri-C: A multiple criteria sorting method based on characteristic reference actions. European Journal of Operational Research, 204(3), 565–580.
Almeida-Dias, J., Figueira, J. R., & Roy, B. (2012). A multiple criteria sorting method where each category is characterized by several reference actions: The Electre Tri-nC method. European Journal of Operational Research, 217(3), 567–579.
Angilella, S., & Mazzù, S. (2015). The financing of innovative SMEs: A multicriteria credit rating model. European Journal of Operational Research, 244(2), 540–554.
Belahcene, K., Mousseau, V., Pirlot, M., & Sobrie, O. (2017). Preference elicitation and learning in a multiple criteria decision aid perspective. Technical Report. LGI Research report 2017-02, CentraleSupélec. http://www.lgi.ecp.fr/Biblio/PDF/CR-LGI-2017-02.pdf.
Beuthe, M., & Scannella, G. (2001). Comparative analysis of UTA multicriteria methods. European Journal of Operational Research, 130(2), 246–262.
Branke, J., Corrente, S., Greco, S., Słowiński, R., & Zielniewicz, P. (2016). Using Choquet integral as preference model in interactive evolutionary multiobjective optimization. European Journal of Operational Research, 250(3), 884–901.
Brito, A. J., de Almeida, A. T., & Mota, C. M. (2010). A multicriteria model for risk sorting of natural gas pipelines based on ELECTRE TRI integrating utility theory. European Journal of Operational Research, 200(3), 812–821.
Çelik, B., Karasakal, E., & İyigün, C. (2015). A probabilistic multiple criteria sorting approach based on distance functions.
Expert Systems with Applications, 42(7), 3610–3618. doi:10.1016/j.eswa.2014.11.049. Chen, Y., Hipel, K. W., & Kilgour, D. M. (2007). Multiple-criteria sorting using case-based distance models with an application in water resources management. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 37(5), 680–691.
Corrente, S., Doumpos, M., Greco, S., Słowiński, R., & Zopounidis, C. (2017). Multiple criteria hierarchy process for sorting problems based on ordinal regression with additive value functions. Annals of Operations Research, 251(1–2), 117–139.
Corrente, S., Greco, S., Kadziński, M., & Słowiński, R. (2013). Robust ordinal regression in preference learning and ranking. Machine Learning, 93(2–3), 381–422.
Corrente, S., Greco, S., & Słowiński, R. (2016). Multiple criteria hierarchy process for ELECTRE Tri methods. European Journal of Operational Research, 252(1), 191–203.
Del Vasto-Terrientes, L., Valls, A., Zielniewicz, P., & Borràs, J. (2016). A hierarchical multi-criteria sorting approach for recommender systems. Journal of Intelligent Information Systems, 46(2), 313–346.
Dembczyński, K., Greco, S., & Słowiński, R. (2009). Rough set approach to multiple criteria classification with imprecise evaluations and assignments. European Journal of Operational Research, 198(2), 626–636.
Despotis, D., & Zopounidis, C. (1995). Building additive utilities in the presence of non-monotonic preferences. In Advances in multicriteria analysis (pp. 101–114). Springer.
Doumpos, M. (2012). Learning non-monotonic additive value functions for multicriteria decision making. OR Spectrum, 34(1), 89–106.
Doumpos, M., Zanakis, S. H., & Zopounidis, C. (2001). Multicriteria preference disaggregation for classification problems with an application to global investing risk. Decision Sciences, 32(2), 333–386.
Doumpos, M., & Zopounidis, C. (2014). The robustness concern in preference disaggregation approaches for decision aiding: An overview. In Optimization in science and engineering (pp. 157–177). Springer.
Eckhardt, A., & Kliegr, T. (2012). Preprocessing algorithm for handling non-monotone attributes in the UTA method.
In Proceedings of the preference learning: Problems and applications in AI (PL-12) workshop.
Ghaderi, M., Ruiz, F., & Agell, N. (2015). Understanding the impact of brand colour on brand image: A preference disaggregation approach. Pattern Recognition Letters, 67, 11–18.
Ghaderi, M., Ruiz, F., & Agell, N. (2017). A linear programming approach for learning non-monotonic additive value functions in multiple criteria decision aiding. European Journal of Operational Research, 259(3), 1073–1084.
Greco, S., Kadziński, M., & Słowiński, R. (2011). Selection of a representative value function in robust multiple criteria sorting. Computers & Operations Research, 38(11), 1620–1637.
Greco, S., Mousseau, V., & Słowiński, R. (2008). Ordinal regression revisited: Multiple criteria ranking using a set of additive value functions. European Journal of Operational Research, 191(2), 416–436.
Greco, S., Mousseau, V., & Słowiński, R. (2010a). Multiple criteria sorting with a set of additive value functions. European Journal of Operational Research, 207(3), 1455–1470.
Greco, S., Słowiński, R., Figueira, J. R., & Mousseau, V. (2010b). Robust ordinal regression. In Trends in multiple criteria decision analysis (pp. 241–283). Springer.
Grigoroudis, E., & Siskos, Y. (2002). Preference disaggregation for measuring and analysing customer satisfaction: The MUSA method. European Journal of Operational Research, 143(1), 148–170.
Janssen, P., & Nemery, P. (2013). An extension of the FlowSort sorting method to deal with imprecision. 4OR, 11(2), 171–193.
Kadziński, M., & Ciomek, K. (2016). Integrated framework for preference modeling and robustness analysis for outranking-based multiple criteria sorting with ELECTRE and PROMETHEE. Information Sciences, 352, 167–187.
Kadziński, M., Ciomek, K., Rychły, P., & Słowiński, R. (2016). Post factum analysis for robust multiple criteria ranking and sorting. Journal of Global Optimization, 65(3), 531–562.
Kadziński, M., Ciomek, K., & Słowiński, R. (2015). Modeling assignment-based pairwise comparisons within integrated framework for value-driven multiple criteria sorting. European Journal of Operational Research, 241(3), 830–841.
Kadziński, M., Corrente, S., Greco, S., & Słowiński, R. (2014). Preferential reducts and constructs in robust multiple criteria ranking and sorting. OR Spectrum, 36(4), 1021–1053.
Kadziński, M., Ghaderi, M., Wasikowski, J., & Agell, N. (2017). Expressiveness and robustness measures for the evaluation of an additive value function in multiple criteria preference disaggregation methods: An experimental analysis. Computers & Operations Research, 87, 146–164.
Kadziński, M., Greco, S., & Słowiński, R. (2013). RUTA: A framework for assessing and selecting additive value functions on the basis of rank related requirements. Omega, 41(4), 735–751.
Kadziński, M., & Słowiński, R. (2013). DIS-CARD: A new method of multiple criteria sorting to classes with desired cardinality. Journal of Global Optimization, 56(3), 1143–1166.
Kadziński, M., & Słowiński, R. (2015). Parametric evaluation of research units with respect to reference profiles. Decision Support Systems, 72, 33–43.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value trade-offs. Cambridge University Press, Cambridge.
Kliegr, T. (2009). UTA-NM: Explaining stated preferences with additive non-monotonic utility functions. In Proceedings of the preference learning (PL-09) ECML/PKDD-09 workshop.
Köksalan, M., Mousseau, V., Özpeynirci, Ö., & Özpeynirci, S. B. (2009). A new outranking-based approach for assigning alternatives to ordered classes. Naval Research Logistics, 56(1), 74–85.
Köksalan, M., & Özpeynirci, S. B. (2009). An interactive sorting method for additive utility functions. Computers & Operations Research, 36(9), 2565–2572.
Köksalan, M., & Ulu, C. (2003). An interactive approach for placing alternatives in preference classes. European Journal of Operational Research, 144(2), 429–439.
Liang, Q., Liao, X., & Liu, J. (2017). A social ties-based approach for group decision-making problems with incomplete additive preference relations. Knowledge-Based Systems, 119, 68–86. doi:10.1016/j.knosys.2016.12.001.
Marin, L., Isern, D., Moreno, A., & Valls, A. (2013). On-line dynamic adaptation of fuzzy preferences. Information Sciences, 220, 5–21.
Mousseau, V., Figueira, J., Dias, L., da Silva, C. G., & Clímaco, J. (2003). Resolving inconsistencies among constraints on the parameters of an MCDA model. European Journal of Operational Research, 147(1), 72–93.
Neves, L. P., Martins, A. G., Antunes, C. H., & Dias, L. C. (2008).
A multi-criteria decision approach to sorting actions for promoting energy efficiency. Energy Policy, 36(7), 2351–2363.
Norese, M. F., & Carbone, V. (2014). An application of ELECTRE Tri to support innovation. Journal of Multi-Criteria Decision Analysis, 21(1–2), 77–93.
Quijano-Sánchez, L., Díaz-Agudo, B., & Recio-García, J. A. (2014). Development of a group recommender application in a social network. Knowledge-Based Systems, 71, 72–85. doi:10.1016/j.knosys.2014.05.013.
Rezaei, J. (2018). Piecewise linear value functions for multi-criteria decision-making. Expert Systems with Applications, 98, 43–56. doi:10.1016/j.eswa.2018.01.004.
Słowiński, R., Greco, S., & Mousseau, V. (2013). Inferring parsimonious preference models in robust ordinal regression. In Proceedings of the European conference on operational research. Rome, Italy.
Sobrie, O., Gillis, N., Mousseau, V., & Pirlot, M. (2018). UTA-Poly and UTA-Splines: Additive value functions with polynomial marginals. European Journal of Operational Research, 264(2), 405–418.
Sobrie, O., Lazouni, M. E. A., Mahmoudi, S., Mousseau, V., & Pirlot, M. (2016). A new decision support model for preanesthetic evaluation. Computer Methods and Programs in Biomedicine, 133, 183–193.
Ulu, C., & Köksalan, M. (2001). An interactive procedure for selecting acceptable alternatives in the presence of multiple criteria. Naval Research Logistics, 48(7), 592–606.
Ulu, C., & Köksalan, M. (2014). An interactive approach to multicriteria sorting for quasiconcave value functions. Naval Research Logistics, 61(6), 447–457.
Wakker, P. P. (1989). Additive representations of preferences: A new foundation of decision analysis. Kluwer Academic Publishers, London.
Yuan, T., Cheng, J., Zhang, X., Liu, Q., & Lu, H. (2015). How friends affect user behaviors? An exploration of social relation analysis for recommendation. Knowledge-Based Systems, 88, 70–84. doi:10.1016/j.knosys.2015.08.005.
Zopounidis, C., & Doumpos, M. (1998).
Developing a multicriteria decision support system for financial classification problems: The finclas system. Optimization Methods and Software, 8(3–4), 277–304. Zopounidis, C., & Doumpos, M. (1999). A multicriteria decision aid methodology for sorting decision problems: The case of financial distress. Computational Economics, 14(3), 197–218. Zopounidis, C., & Doumpos, M. (2001). A preference disaggregation decision support system for financial classification problems. European Journal of Operational Research, 130(2), 402–413.