A multiple attribute decision making three-way model for intuitionistic fuzzy numbers


International Journal of Approximate Reasoning 119 (2020) 177–203


Peide Liu a,∗, Yumei Wang a, Fan Jia a, Hamido Fujita b,c,∗,∗∗

a School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
b Andalusian Research Institute (DaSCI) "Data Science and Computational Intelligence", University of Granada, Spain
c Faculty of Software and Information Science, Iwate Prefectural University, Iwate 020–0693, Japan

Article info

Article history: Received 3 September 2019; Received in revised form 22 October 2019; Accepted 31 December 2019; Available online 9 January 2020.

Keywords: Three-way decisions; Relative loss functions; Intuitionistic fuzzy numbers; Multiple attribute decision making.

Abstract

In order to use three-way decision (TWD) to solve multiple attribute decision making (MADM) problems, this article proposes a new TWD model with intuitionistic fuzzy numbers (IFNs). First, we define relative loss functions and demonstrate some features of loss functions in TWDs, which is the basis for the subsequent analysis. Then, based on the correlation between loss functions and IFNs, we derive relative loss functions from IFNs. At the same time, the classification rules of TWDs are discussed from different viewpoints, including the thresholds and their properties. Aiming at MADM problems with unreasonable values, a new integration method for relative loss functions is established to obtain a fairer loss integration result for alternatives. In addition, considering that MADM has only condition attributes and no decision attribute, we use the grey relational degree to calculate the conditional probability. Finally, a novel TWD model is proposed to solve MADM problems with IFNs, and a practical example on selecting suppliers is used to demonstrate its effectiveness and practicability. © 2020 Elsevier Inc. All rights reserved.

1. Introduction

According to the evaluation values of attributes, multiple attribute decision making (MADM) ranks a finite set of alternatives in a certain way and selects the optimal one. Its research results have been fruitful, and many of them have been applied to practical decision-making and brought huge economic benefits [3,5,9,13–16,22,30,33,35,51]. Aiming at the increasing uncertainty and complexity of practical problems, in recent years scholars have gradually combined MADM methods with uncertainty theory, especially intuitionistic fuzzy sets (IFSs) and rough sets [13–16,22,30,33].

For instance, in campus recruitment, the overall quality of an interviewee is determined by learning ability, moral quality, organizational ability and professional knowledge, which are four attributes in MADM. This is a typical MADM problem. The interviewers measure these four attributes according to their subjective judgments, and the company can use fuzzy MADM methods to assist decision making. However, the original fuzzy MADM methods only determine the best candidate based on the attribute values; they ignore the need for further testing to determine the final candidates in actual recruitment. This makes the decision-making results too harsh, because the final result is a deterministic decision (acceptance or rejection), just like selecting or not selecting an interviewee in the campus recruitment problem. So we need a new way to approach MADM.

* Corresponding authors. ** Corresponding author at: Faculty of Software and Information Science, Iwate Prefectural University, Iwate, 020–0693, Japan. E-mail addresses: [email protected] (P. Liu), [email protected], [email protected] (H. Fujita).

https://doi.org/10.1016/j.ijar.2019.12.020 0888-613X/© 2020 Elsevier Inc. All rights reserved.


The intuitive way is to add indeterminacy to the decision results and allow further testing. This idea coincides with the thought of three-way decisions (TWDs) [42], whose core is thinking in threes. In addition to deterministic decisions, TWDs also contain nondeterministic decisions, which better describe decision-making results under normal circumstances. This is also in line with human decision-making strategies under uncertainty [40]. In the above example of campus recruitment, many interviewees may need to be further investigated in addition to those directly selected or rejected. So using TWDs to solve MADM problems such as the above example is more reasonable.

The earliest scientific and reasonable explanation for TWDs, given by Yao [41], originates from decision-theoretic rough sets (DTRSs), which are based on Bayesian minimum risk and make decisions by using actual semantics [39]. In addition, other scholars have studied the semantic problems of TWDs from different perspectives [20], such as cost-sensitive TWD [11], TWD based on weighted comprehensive evaluation, TWD based on total risk minimization, TWD based on total entropy minimization, and so on. They are similar to human decision-making strategies in real-world decision problems. Among them, the TWD based on DTRSs is the most typical semantic explanation. As a semantic interpretation of the three regions in probabilistic rough sets, DTRSs derive classification rules that indicate the decision (acceptance, rejection or indecision) on an object according to the loss function values under certain conditional attributes [39]. DTRSs contain three domains: the positive domain, the negative domain and the boundary domain, which correspond to the three decisions on an object. When an object is in the positive domain, we adopt an attitude of acceptance; when it falls into the negative domain, an attitude of refusal; when it lies in the boundary domain, a relaxed attitude. This is a new perspective of tolerance for mistakes, consistent with our thinking habits under uncertainty.

Since the birth of TWDs based on DTRSs, they have attracted the attention of scholars and developed rapidly. They have been applied to various fields, such as investment options [21], energy project selection [13], discriminant analysis [17], risk decision analysis [12], government decision analysis [18], decision support systems [37,48], medical diagnosis [4], cluster analysis [48], feature selection [28], image segmentation [8] and conflict analysis [10]. Therefore, this paper focuses on using TWD methods based on DTRSs to address MADM problems.

We know that a TWD based on DTRSs determines the decision on an object according to the minimum of its expected loss functions. From the perspective of rough sets, an attribute corresponds to an equivalence relation, so each attribute of an alternative in MADM can be understood as a TWD. In the above example of campus recruitment, the attribute of learning ability can be divided into three classes: strong learning ability, poor learning ability and uncertain learning ability, which correspond to the three domains in TWD: acceptance, rejection and indecision. Then, based on the TWD model, interviewees can be divided into three classes. Other attributes such as moral quality, organizational ability and professional knowledge can be expressed in the same way.
Multiple attributes can thus be regarded as multiple TWD models [9], so MADM can be seen as a collection of TWDs; in other words, we can view MADM as a multiple attribute TWD model. Meanwhile, Jia and Liu [9] preliminarily illustrated the correlation between TWD and MADM by using attribute values to express loss functions. However, their method [9] is only applicable to MADM with fuzzy numbers. In the research of MADM, intuitionistic fuzzy numbers (IFNs), compared with fuzzy numbers, can denote fuzzy information more intuitively and accurately [1,2], and are more widely used in practical decision-making [3,14,15,22,30,33,35,51]. Therefore, in this paper, we first study the relationship between IFNs and loss functions, and then use TWD methods based on DTRSs to address MADM problems with IFNs.

In the DTRS model, obtaining the expected loss functions rests on determining the loss functions and the conditional probability, and one of the prerequisites for calculating the conditional probability is a decision attribute [16]. Unfortunately, there are only conditional attributes and no decision attribute in MADM. Taking the campus recruitment above as an example, we would evaluate four attributes, all of which are conditional attributes; in reality, we usually regard qualification as the decision attribute. Therefore, we face the following three key issues: (a) the determination of loss functions and the construction of classification rules based on IFNs; (b) the determination of the conditional probability; (c) the aggregation of loss functions over multiple attributes.

With respect to issue (a), we discuss the correlation between the loss functions and the attribute values expressed by IFNs, and explore the classification rules. Based on the membership degrees (MDs) and non-membership degrees (N-MDs) of IFNs [3], we mainly discuss the thresholds of TWDs from the positive viewpoint and the negative viewpoint. With respect to issue (b), Liang et al. [16] used TOPSIS to calculate the conditional probability: the positive ideal solution (PIS) and negative ideal solution (NIS) represent two states of the decision attribute, and the final relative closeness degree implies the conditional probability. But TOPSIS only reflects the positional relationship between data curves and does not capture the trend difference of the data sequences. In order to calculate the conditional probability more reasonably, we use the grey relational degree [7] instead, which is a good measure of similarity between curve shapes; its advantage [7,30,51] lies in analyzing the trend difference of data series. The closer the curve shapes are, the greater the correlation degree between the corresponding data series; hence a greater grey relational degree between an alternative and the ideal alternative means the alternative is closer to the ideal one, which produces a greater conditional probability for that alternative. With regard to issue (c), Jia and Liu [9] used the weighted mean operator to integrate loss functions. Because unreasonable values occur frequently in practical MADM owing to a decision maker's bias or limited knowledge, the power average (PA) operator [33,36] is often used to eliminate their influence on decision-making results.
In this paper, since the attribute values are represented by IFNs, in order to achieve a fairer result, we try to integrate the loss functions under different attributes with the intuitionistic fuzzy power weighted average (IFPWA) operator [33]. Thus, the main tasks of this paper are:


(1) To discuss the relationship between loss functions and IFNs to determine the loss functions.
(2) To construct the classification rules of TWDs and prove the properties of the loss functions and the thresholds from different perspectives.
(3) To determine the conditional probability of TWDs based on the grey relational degree.
(4) To propose a TWD method for MADM with IFNs, and verify it with a practical example.

In order to achieve the above tasks, the rest of the article is organized as follows. In Section 2, we review the definitions, distance formula, operational rules and comparison method of IFNs, TWDs based on DTRSs, and the IFPWA operator. In Section 3, we discuss the correlation between the loss functions and the attribute values expressed by IFNs to get the relative loss functions; at the same time, the classification rules of TWDs are discussed from different viewpoints, including the thresholds and their properties. In Section 4, we explore the integration method for relative loss functions under different attributes and the calculation of the conditional probability with the grey relational degree, and then give a TWD model for MADM with IFNs. In Section 5, we give a practical example to demonstrate the proposed TWD model, and compare it with the MADM method based on the IFPWA operator [33] and with Jia and Liu's method [9]. In Section 6, we summarize the work.

2. Preliminaries

To understand this article better, this section reviews some elementary knowledge, including the definitions, distance formula, operational rules and comparison method of IFNs, TWDs based on DTRSs, and the IFPWA operator.

2.1. Intuitionistic fuzzy set

Definition 2.1. [1]. Let Z be the universe of discourse and let z be a generic element in Z. An IFS A in the universe of discourse Z is represented by

$$A = \{\langle z, u_A(z), v_A(z)\rangle \mid z \in Z\} \qquad (1)$$

where u_A and v_A express the MD and the N-MD of the IFS A, respectively, with 0 ≤ u_A(z), v_A(z) ≤ 1 and 0 ≤ u_A(z) + v_A(z) ≤ 1. For the IFS A in the universe of discourse Z, the indeterminacy degree π_A(z) [1,2] of an element z belonging to the IFS A is defined by π_A(z) = 1 − u_A(z) − v_A(z), where 0 ≤ π_A(z) ≤ 1 and z ∈ Z. In [34,35], the pair (u_A(z), v_A(z)) is called an IFN. For convenience, we use x = (u_x, v_x) to represent an IFN, which satisfies u_x ∈ [0, 1], v_x ∈ [0, 1] and 0 ≤ u_x + v_x ≤ 1.

Definition 2.2. [31]. Let x = (u_x, v_x) and y = (u_y, v_y) be any two IFNs. The Euclidean distance between x and y is defined as follows:



$$d_E(x, y) = \sqrt{\frac{(u_x - u_y)^2 + (v_x - v_y)^2}{2}} \qquad (2)$$

Definition 2.3. [31]. Let x = (u_x, v_x) be any IFN. The ideal positive degree I(x) of the IFN x is described as follows:



$$I(x) = 1 - \sqrt{\frac{(1 - u_x)^2 + (v_x)^2}{2}} \qquad (3)$$

Definition 2.4. [31]. Let x = (u_x, v_x) and y = (u_y, v_y) be any two IFNs. The comparison method satisfies x ≥ y if and only if I(x) ≥ I(y).

Definition 2.5. [5,6]. Let x = (u_x, v_x) and y = (u_y, v_y) be any two IFNs. The operations between x and y are presented as follows:

(i) The complement of x = (u_x, v_x) is x̄ = (v_x, u_x); (4)
(ii) x ⊕ y = (1 − (1 − u_x)(1 − u_y), v_x v_y); (5)
(iii) x ⊗ y = (u_x u_y, 1 − (1 − v_x)(1 − v_y)); (6)
(iv) θx = (1 − (1 − u_x)^θ, v_x^θ), where θ > 0; (7)
(v) x^θ = (u_x^θ, 1 − (1 − v_x)^θ), where θ > 0. (8)
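These definitions translate directly into code. The following is a minimal sketch (ours, not from the paper) of Definitions 2.2–2.5; the class name IFN and its method names are illustrative only.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass(frozen=True)
class IFN:
    u: float  # membership degree, u in [0, 1]
    v: float  # non-membership degree, v in [0, 1], with u + v <= 1

    def complement(self) -> "IFN":         # operation (4): complement (v, u)
        return IFN(self.v, self.u)

    def __add__(self, y: "IFN") -> "IFN":  # operation (5): x (+) y
        return IFN(1 - (1 - self.u) * (1 - y.u), self.v * y.v)

    def __mul__(self, y: "IFN") -> "IFN":  # operation (6): x (x) y
        return IFN(self.u * y.u, 1 - (1 - self.v) * (1 - y.v))

    def scale(self, t: float) -> "IFN":    # operation (7): t * x, t > 0
        return IFN(1 - (1 - self.u) ** t, self.v ** t)

    def power(self, t: float) -> "IFN":    # operation (8): x ** t, t > 0
        return IFN(self.u ** t, 1 - (1 - self.v) ** t)

def distance(x: IFN, y: IFN) -> float:    # Eq. (2): Euclidean distance
    return sqrt(((x.u - y.u) ** 2 + (x.v - y.v) ** 2) / 2)

def ideal_positive_degree(x: IFN) -> float:  # Eq. (3): I(x)
    return 1 - sqrt(((1 - x.u) ** 2 + x.v ** 2) / 2)

# Definition 2.4: compare two IFNs via their ideal positive degrees.
x, y = IFN(0.6, 0.3), IFN(0.3, 0.2)
print(ideal_positive_degree(x) >= ideal_positive_degree(y))  # x >= y ?
```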


2.2. The IFPWA operator

In real decision-making, we often need to counteract the influence of unreasonable values caused by a decision maker's bias or limited knowledge, especially extreme data. Take singing competitions as an example: the referee often removes the highest and lowest scores before giving the final results of the players. This is one way to eliminate the effect of extreme data. Another common way is the PA operator [33,36], which uses a support function to constrain the impact of the input arguments. Because this paper studies MADM problems with IFNs, we introduce the IFPWA operator developed by Xu [33] in the following.

Definition 2.6. [33]. Let {x_1, x_2, ..., x_n} be a set of IFNs, where x_k = (u_k, v_k) (k = 1, 2, ..., n). Then the IFPWA operator is defined as follows:

$$\mathrm{IFPWA}(x_1, x_2, \cdots, x_n) = \bigoplus_{k=1}^{n} \frac{w_k \left(1 + T(x_k)\right)}{\sum_{l=1}^{n} w_l \left(1 + T(x_l)\right)}\, x_k \qquad (9)$$

where w = (w_1, w_2, ..., w_n)^T is the weight vector of the x_k (k = 1, 2, ..., n), satisfying w_k ∈ [0, 1] and Σ_{k=1}^{n} w_k = 1; T(x_k) = Σ_{l=1, l≠k}^{n} w_l Sup(x_k, x_l), and Sup(x_k, x_l) is a support function describing the degree of support for x_k from x_l, meeting the following three properties: (1) Sup(x_k, x_l) ∈ [0, 1]; (2) Sup(x_k, x_l) = Sup(x_l, x_k); (3) Sup(x_k, x_l) ≥ Sup(x_s, x_t) if d(x_k, x_l) < d(x_s, x_t), where d denotes the distance between IFNs and 1 ≤ k, l, s, t ≤ n.
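Definition 2.6 can be sketched numerically as follows, reusing the IFN class above. The paper does not fix a concrete support function at this point; Sup(x_k, x_l) = 1 − d(x_k, x_l) is a common choice in the PA literature that satisfies properties (1)–(3), and it is used here only for illustration.

```python
from functools import reduce

def ifpwa(xs: list, w: list) -> IFN:
    """Sketch of Eq. (9): IFPWA aggregation of IFNs with weight vector w."""
    n = len(xs)
    # T(x_k) = sum over l != k of w_l * Sup(x_k, x_l); Sup = 1 - d is an assumption.
    T = [sum(w[l] * (1 - distance(xs[k], xs[l])) for l in range(n) if l != k)
         for k in range(n)]
    coeff = [w[k] * (1 + T[k]) for k in range(n)]
    total = sum(coeff)
    # (+)-sum of the rescaled arguments: (coeff_k / total) * x_k, combined by (5).
    return reduce(lambda a, b: a + b,
                  (x.scale(c / total) for x, c in zip(xs, coeff)))

print(ifpwa([IFN(0.2, 0.4), IFN(0.3, 0.2), IFN(0.6, 0.3)], [0.4, 0.3, 0.3]))
```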

2.3. Three-way decisions based on decision-theoretic rough sets

Pawlak proposed rough sets and defined the approximation space as apr = (U, R), where U is a finite nonempty set and R is an equivalence relation [38].

Definition 2.7. [41]. Suppose [x] is the partition of U induced by R. For ∀C ⊆ U and 0 ≤ β < α ≤ 1, the probabilistic lower and upper approximations of C are described as follows:

















$$\underline{apr}(C) = \{x \in U \mid \Pr(C\,|\,[x]) \ge \alpha\}, \qquad \overline{apr}(C) = \{x \in U \mid \Pr(C\,|\,[x]) > \beta\}$$

where Pr(C|[x]) expresses the conditional probability of C, given by Pr(C|[x]) = |C ∩ [x]| / |[x]|.

According to Definition 2.7, the universe U can be separated into three regions: the positive region, the boundary region and the negative region, corresponding to the three actions of TWDs [41]: acceptance, indecision and rejection. With the aid of the thresholds α and β, the three regions meet the following conditions.







$$\mathrm{POS}(C) = \underline{apr}(C) = \{x \in U \mid \Pr(C\,|\,[x]) \ge \alpha\} \qquad (10)$$
$$\mathrm{BND}(C) = \overline{apr}(C) - \underline{apr}(C) = \{x \in U \mid \alpha > \Pr(C\,|\,[x]) > \beta\} \qquad (11)$$
$$\mathrm{NEG}(C) = U - \overline{apr}(C) = \{x \in U \mid \Pr(C\,|\,[x]) \le \beta\} \qquad (12)$$

In particular, when the thresholds α and β are equal, say α = β = γ, the three regions simplify to the following two equations, and the TWDs degenerate into two-way decisions.

















$$\mathrm{POS}_\gamma(C) = \{x \in U \mid \Pr(C\,|\,[x]) \ge \gamma\}, \qquad \mathrm{NEG}_\gamma(C) = \{x \in U \mid \Pr(C\,|\,[x]) < \gamma\}$$

where POS_γ(C) and NEG_γ(C) represent the positive region and negative region of the two-way decisions, respectively.

In order to interpret the semantic meaning of the thresholds and the three regions in probabilistic rough sets, Yao [39,43] proposed DTRSs based on the Bayesian decision procedure. DTRSs contain a set of two states and a set of three actions for each state, shown in Table 1. The set of states Ω = {C, ¬C} delivers the meaning that an element belongs to C or does not belong to C. A = {a_P, a_B, a_N} denotes the set of three actions, where a_P, a_B and a_N represent acceptance, indecision and rejection in classifying an object, i.e., deciding x ∈ POS(C), x ∈ BND(C) and x ∈ NEG(C), respectively. The loss functions regarding the risk of the different actions are given by the 3 × 2 matrix shown in Table 1. λ_PP, λ_BP and λ_NP indicate the losses incurred for taking actions a_P, a_B and a_N respectively when the object is in C. Similarly, λ_PQ, λ_BQ and λ_NQ denote the losses incurred for taking actions a_P, a_B and a_N respectively when the object is in


Table 1
The loss functions.

Actions    C (P)      ¬C (Q)
a_P        λ_PP       λ_PQ
a_B        λ_BP       λ_BQ
a_N        λ_NP       λ_NQ

¬C. Meanwhile, the loss functions meet the conditions λ_PP ≤ λ_BP < λ_NP and λ_NQ ≤ λ_BQ < λ_PQ under a reasonable semantic interpretation. For ∀x ∈ U, the expected losses of taking actions a_• (• = P, B, N) for objects in [x] are expressed as follows:













$$R(a_P\,|\,[x]) = \lambda_{PP} \Pr(C\,|\,[x]) + \lambda_{PQ} \Pr(\neg C\,|\,[x]) \qquad (13)$$
$$R(a_B\,|\,[x]) = \lambda_{BP} \Pr(C\,|\,[x]) + \lambda_{BQ} \Pr(\neg C\,|\,[x]) \qquad (14)$$
$$R(a_N\,|\,[x]) = \lambda_{NP} \Pr(C\,|\,[x]) + \lambda_{NQ} \Pr(\neg C\,|\,[x]) \qquad (15)$$

where Pr(C|[x]) expresses the conditional probability of an object x belonging to C, and Pr(¬C|[x]) the conditional probability of x belonging to ¬C. Based on the Bayesian decision procedure, the minimum-cost decision rule is the best, so we get:

(P) If R(a_P|[x]) ≤ R(a_B|[x]) and R(a_P|[x]) ≤ R(a_N|[x]), adopt x ∈ POS(C);
(B) If R(a_B|[x]) ≤ R(a_P|[x]) and R(a_B|[x]) ≤ R(a_N|[x]), adopt x ∈ BND(C);
(N) If R(a_N|[x]) ≤ R(a_P|[x]) and R(a_N|[x]) ≤ R(a_B|[x]), adopt x ∈ NEG(C).

Because Pr(C|[x]) + Pr(¬C|[x]) = 1, we combine this with the above decision rules and the conditions on the loss functions to obtain the following simplification of the decision rules (P), (B) and (N):

 









 









 









(P′) If Pr(C|[x]) ≥ α and Pr(C|[x]) ≥ γ, adopt x ∈ POS(C);
(B′) If Pr(C|[x]) ≤ α and Pr(C|[x]) ≥ β, adopt x ∈ BND(C);
(N′) If Pr(C|[x]) ≤ β and Pr(C|[x]) ≤ γ, adopt x ∈ NEG(C),

where the thresholds α, β and γ are expressed as:

$$\alpha = \frac{\lambda_{PQ} - \lambda_{BQ}}{(\lambda_{PQ} - \lambda_{BQ}) + (\lambda_{BP} - \lambda_{PP})}; \quad \beta = \frac{\lambda_{BQ} - \lambda_{NQ}}{(\lambda_{BQ} - \lambda_{NQ}) + (\lambda_{NP} - \lambda_{BP})}; \quad \gamma = \frac{\lambda_{PQ} - \lambda_{NQ}}{(\lambda_{PQ} - \lambda_{NQ}) + (\lambda_{NP} - \lambda_{PP})}.$$

(i) According to the decision rule (B′), we can see that β ≤ α, from which the inequality 0 ≤ β ≤ γ ≤ α ≤ 1 can be deduced [19]. The decision rules (P′), (B′) and (N′) are then further reduced to the following rules:

(P′) If Pr(C|[x]) ≥ α, adopt x ∈ POS(C);
(B′) If α > Pr(C|[x]) > β, adopt x ∈ BND(C);
(N′) If Pr(C|[x]) ≤ β, adopt x ∈ NEG(C).

(ii) Conversely, if β > α, namely 0 ≤ α ≤ γ ≤ β ≤ 1, then the decision rule (B′) cannot be satisfied, so the decision rules (P′), (B′) and (N′) degenerate into the following two-way decision rules:

(P′) If Pr(C|[x]) ≥ γ, adopt x ∈ POS(C);
(N′) If Pr(C|[x]) ≤ γ, adopt x ∈ NEG(C).
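The thresholds and the reduced rules are easy to check numerically. The sketch below is our illustration: it computes α, β, γ from a crisp loss matrix and applies the reduced rules, falling back to the degenerate two-way case (ii) when β ≥ α.

```python
def thresholds(lpp, lbp, lnp, lpq, lbq, lnq):
    """alpha, beta, gamma from the six loss values of Table 1."""
    alpha = (lpq - lbq) / ((lpq - lbq) + (lbp - lpp))
    beta  = (lbq - lnq) / ((lbq - lnq) + (lnp - lbp))
    gamma = (lpq - lnq) / ((lpq - lnq) + (lnp - lpp))
    return alpha, beta, gamma

def classify(pr, alpha, beta, gamma):
    """Apply rules (P')-(N'); two-way decision when beta >= alpha (case (ii))."""
    if beta >= alpha:
        return "POS" if pr >= gamma else "NEG"
    if pr >= alpha:
        return "POS"
    return "BND" if pr > beta else "NEG"
```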

3. Three-way decisions based on intuitionistic fuzzy evaluating information

In this section, we analyze the correlation between the loss functions and the attribute values expressed by IFNs to derive the relative loss functions from IFNs. Moreover, because the MD and N-MD of an IFN indicate the satisfaction and dissatisfaction of decision makers with alternatives respectively, and decision makers have different preferences, especially in their attitude toward risks or losses, we analyze the classification rules of TWDs from the positive viewpoint, the negative viewpoint, and the comprehensive viewpoint.


Table 2
The relative loss functions.

Actions    C (P)      ¬C (Q)
a_P        0          λ̂_PQ
a_B        λ̂_BP       λ̂_BQ
a_N        λ̂_NP       0

Table 3
The values of loss functions.

Actions    C (P)      ¬C (Q)
a_P        0.2        0.8
a_B        0.5        0.4
a_N        0.9        0.2

3.1. The relative loss functions

In subsection 2.3, we obtained the expressions of α, β and γ from DTRSs, which are the thresholds for dividing the universe U. The relative loss functions were later introduced by Yao [41]; they serve the same function as the actual loss functions. Afterwards, Liu et al. [19] rewrote the thresholds in the following forms:

$$\alpha = \frac{\lambda_{(P-B)Q}}{\lambda_{(P-B)Q} + \lambda_{(B-P)P}}; \quad \beta = \frac{\lambda_{(B-N)Q}}{\lambda_{(B-N)Q} + \lambda_{(N-B)P}}; \quad \gamma = \frac{\lambda_{(P-B)Q} + \lambda_{(B-N)Q}}{(\lambda_{(P-B)Q} + \lambda_{(B-P)P}) + (\lambda_{(B-N)Q} + \lambda_{(N-B)P})},$$

where λ_{(P−B)Q} represents the loss difference of an object in ¬C placed in the positive versus the boundary region. Similarly, λ_{(B−N)Q} represents the loss difference of an object in ¬C placed in the boundary versus the negative region, λ_{(B−P)P} the loss difference of an object in C placed in the boundary versus the positive region, and λ_{(N−B)P} the loss difference of an object in C placed in the negative versus the boundary region. So the thresholds α, β and γ are only related to the values of λ_{(P−B)Q}, λ_{(B−N)Q}, λ_{(B−P)P} and λ_{(N−B)P}, not to the exact values of the loss functions.

Next, we subtract λ_PP from each λ_{•P} (• = P, B, N) and λ_NQ from each λ_{•Q} (• = P, B, N). The loss functions are then translated into the forms shown in Table 2, where λ̂_BP = λ_BP − λ_PP, λ̂_NP = λ_NP − λ_PP, λ̂_PQ = λ_PQ − λ_NQ and λ̂_BQ = λ_BQ − λ_NQ. According to the expressions of the thresholds α, β and γ in subsection 2.3, we can derive the expressions of the thresholds α̂, β̂ and γ̂ for Table 2:

$$\hat{\alpha} = \frac{\hat{\lambda}_{PQ} - \hat{\lambda}_{BQ}}{(\hat{\lambda}_{PQ} - \hat{\lambda}_{BQ}) + (\hat{\lambda}_{BP} - 0)} = \frac{(\lambda_{PQ} - \lambda_{NQ}) - (\lambda_{BQ} - \lambda_{NQ})}{(\lambda_{PQ} - \lambda_{NQ}) - (\lambda_{BQ} - \lambda_{NQ}) + (\lambda_{BP} - \lambda_{PP})} = \frac{\lambda_{PQ} - \lambda_{BQ}}{(\lambda_{PQ} - \lambda_{BQ}) + (\lambda_{BP} - \lambda_{PP})};$$

$$\hat{\beta} = \frac{\hat{\lambda}_{BQ} - 0}{(\hat{\lambda}_{BQ} - 0) + (\hat{\lambda}_{NP} - \hat{\lambda}_{BP})} = \frac{\lambda_{BQ} - \lambda_{NQ}}{(\lambda_{BQ} - \lambda_{NQ}) + (\lambda_{NP} - \lambda_{PP}) - (\lambda_{BP} - \lambda_{PP})} = \frac{\lambda_{BQ} - \lambda_{NQ}}{(\lambda_{BQ} - \lambda_{NQ}) + (\lambda_{NP} - \lambda_{BP})};$$

$$\hat{\gamma} = \frac{\hat{\lambda}_{PQ} - 0}{(\hat{\lambda}_{PQ} - 0) + (\hat{\lambda}_{NP} - 0)} = \frac{\lambda_{PQ} - \lambda_{NQ}}{(\lambda_{PQ} - \lambda_{NQ}) + (\lambda_{NP} - \lambda_{PP})}.$$

Through the above calculation, we can see that α̂ = α, β̂ = β and γ̂ = γ. That is to say, when λ_PP and λ_NQ are fixed at zero, each loss function has a corresponding one, and the thresholds obtained from them are equal. The meaning of the transformed loss functions displayed in Table 2 can be explained as follows: when the object is in C, we use the loss of taking action a_P as the benchmark, so the losses of adopting a_P, a_B and a_N relative to a_P are 0, λ̂_BP and λ̂_NP, respectively; when the object is in ¬C, we use the loss of taking action a_N as the benchmark, so the losses of adopting a_P, a_B and a_N relative to a_N are λ̂_PQ, λ̂_BQ and 0, respectively. Therefore, we call the corresponding functions displayed in Table 2 the relative loss functions.

Example 3.1. To illustrate the property of relativity, we use an example to detail the relative loss functions. Assume that when the object is in C, the losses of adopting a_P, a_B and a_N are 0.2, 0.5 and 0.9, respectively; when the object is in ¬C, the losses of adopting a_P, a_B and a_N are 0.8, 0.4 and 0.2, respectively. Table 3 shows the values of the loss functions. According to the expressions of the thresholds α, β and γ, we get α = 0.57, β = 0.33 and γ = 0.46. Next, we transform them into relative loss functions based on Table 2; the results are displayed in Table 4. We then reckon the thresholds α̂, β̂ and γ̂ of the relative loss functions: α̂ = 0.57, β̂ = 0.33 and γ̂ = 0.46. So α̂ = α, β̂ = β and γ̂ = γ, which demonstrates the relativity of the thresholds. The relativity of the thresholds ensures the consistency of the decision rules. In the following, we use relative loss functions in place of the exact values of the loss functions.


Table 4
The values of relative loss functions.

Actions    C (P)      ¬C (Q)
a_P        0          0.6
a_B        0.3        0.2
a_N        0.7        0
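As a quick numerical check of Example 3.1 (reusing the thresholds helper sketched in Section 2.3), the actual losses of Table 3 and the relative losses of Table 4 produce identical thresholds:

```python
actual   = thresholds(lpp=0.2, lbp=0.5, lnp=0.9, lpq=0.8, lbq=0.4, lnq=0.2)  # Table 3
relative = thresholds(lpp=0.0, lbp=0.3, lnp=0.7, lpq=0.6, lbq=0.2, lnq=0.0)  # Table 4
print(actual)    # (0.571..., 0.333..., 0.461...) -> alpha = 0.57, beta = 0.33, gamma = 0.46
print(relative)  # identical values, illustrating the relativity of the thresholds
```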

Table 5
The IFN's relative loss functions.

X      C       ¬C
a_P    o       x̄
a_B    θx      θx̄
a_N    x       o

3.2. The relative loss functions educed from IFNs (evaluation values of attributes)

In existing research, the loss functions are fixed in a TWD problem. In other words, different objects get the same loss functions when they are in the same class (C or ¬C) and adopt the same decision (a_P, a_B or a_N) [9]. For instance, when selecting investment projects, we use C and ¬C to denote high yield and low yield respectively, and suppose there are two projects A_1 and A_2 belonging to ¬C. If we invest in the two projects, we get the same loss λ_PQ based on the fixed loss functions. However, this contradicts reality: different projects often incur different losses because of their different quality or nature. For instance, suppose experts assign the profitability of A_1 and A_2 as 0.1 and 0.2 using fuzzy numbers. Apparently, the two projects are not worth investing in when we consider profitability, which shows that A_1 and A_2 belong to ¬C, so investing in either should incur the same loss λ_PQ under fixed loss functions. But according to MADM, A_2 is better than A_1 in terms of profitability, so the loss of A_2 should be lower than that of A_1.

To clear up the contradiction mentioned above, Jia and Liu [9] explored the correlation between the loss functions and the attribute values, and mainly analyzed how to get the relative loss functions from fuzzy numbers (FNs). However, an FN contains only the MD; the N-MD cannot be described. Later, Atanassov [1] proposed IFSs, which denote fuzzy information more intuitively and accurately, so they have been widely used to express attribute values. Therefore, in this subsection, we discuss the correlation between the relative loss functions and the attribute values expressed by IFNs, and derive the relative loss functions from IFNs.

Suppose that the evaluation value of the alternative X under the attribute G is an IFN x = (u, v). The set of states Ω = {C, ¬C} represents the two states of the attribute G, meaning that the alternative X has the attribute G or does not have it, and the three actions a_P, a_B and a_N represent acceptance, indecision and rejection, respectively. So, with respect to the attribute G, we can construct a TWD model based on the ideas of Jia and Liu [9]. The relative loss functions inferred from the IFN x are displayed in Table 5.

According to the condition on the loss functions, the loss in the correct domain is the smallest, so we assign the smallest value as the reference value of the relative loss functions. Because the minimum IFN is (0, 1), we assign λ_PP = λ_NQ = o = (0, 1) in Table 5. Secondly, the attribute value x = (u, v) denotes the degree of recognition of alternative X under attribute G. If alternative X has attribute G, i.e., X ∈ C, then the relative loss of alternative X classified into the rejection domain can be expressed by x = (u, v), which means the decision maker's mistake of taking action a_N on alternative X leads to a loss of x = (u, v) compared with taking action a_P. If alternative X does not have attribute G, i.e., X ∈ ¬C, then the relative loss of alternative X classified into the acceptance domain should be the degree of disapproval of X, that is, the complement of x, x̄ = (v, u), which means the decision maker's mistake of taking action a_P on alternative X leads to a loss of x̄ = (v, u) compared with taking action a_N.
Thirdly, because the loss function in the indecisive domain lies between the one in the acceptance domain and the one in the rejection domain, we introduce a parameter θ (θ ∈ [0, 1]) to represent the relative loss function in the indecisive domain. Li et al. [11] introduced such a parameter to calculate the loss in the indecisive domain based on the losses of acceptance and rejection, and Jia and Liu [9] confirmed its effectiveness and regarded the parameter as a risk avoidance coefficient, so the relative loss functions in the indecisive domain are expressed as λ_BP = θx and λ_BQ = θx̄.

The relative loss functions educed from an IFN can be summarized as follows: the losses of X classified into the correct domain are minimal, so we assign the minimum IFN to them, i.e., λ_PP = λ_NQ = o = (0, 1); when the alternative X has the attribute G, i.e., X ∈ C, the loss of accepting X is o, and the loss of rejecting X is x, which means the higher the IFN x is, the greater the loss λ_NP is; when the alternative X does not have the attribute G, i.e., X ∈ ¬C, the loss of rejecting X is o, and the loss of accepting X is x̄, which means the higher the IFN x is, the lower the loss λ_PQ is. Because of the conditions λ_PP ≤ λ_BP < λ_NP and λ_NQ ≤ λ_BQ < λ_PQ, we introduce the risk avoidance coefficient θ ∈ [0, 1] to represent the losses of incorporating the alternative X into the indecisive domain, which are λ_BP = θx and λ_BQ = θx̄.


Table 6
The expansions of the IFN's relative loss functions.

X      C                          ¬C
a_P    (0, 1)                     (v, u)
a_B    (1 − (1 − u)^θ, v^θ)       (1 − (1 − v)^θ, u^θ)
a_N    (u, v)                     (0, 1)
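Table 6 can be generated mechanically from an evaluation value x = (u, v) and the coefficient θ, using the IFN operations of Definition 2.5. The sketch below (function and key names are ours) reproduces Table 7 for x_1:

```python
def relative_losses(x: IFN, theta: float) -> dict:
    """Relative loss functions of Table 6 for evaluation value x and coefficient theta."""
    o = IFN(0.0, 1.0)  # the minimum IFN, assigned to lambda_PP and lambda_NQ
    return {"PP": o,              "PQ": x.complement(),
            "BP": x.scale(theta), "BQ": x.complement().scale(theta),
            "NP": x,              "NQ": o}

print(relative_losses(IFN(0.2, 0.4), theta=0.3))
# BP = (0.0648, 0.7597) and BQ = (0.1421, 0.6170), matching Table 7
```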

Table 7
The relative loss functions of X_1.

X_1    C                     ¬C
a_P    (0, 1)                (0.4, 0.2)
a_B    (0.0648, 0.7597)      (0.1421, 0.6170)
a_N    (0.2, 0.4)            (0, 1)

Table 8
The relative loss functions of X_2.

X_2    C                     ¬C
a_P    (0, 1)                (0.2, 0.3)
a_B    (0.1015, 0.6170)      (0.0648, 0.6968)
a_N    (0.3, 0.2)            (0, 1)

Table 9
The relative loss functions of X_3.

X_3    C                     ¬C
a_P    (0, 1)                (0.3, 0.6)
a_B    (0.2403, 0.6968)      (0.1015, 0.8579)
a_N    (0.6, 0.3)            (0, 1)

Next, we bring the operational rules of IFNs into Table 5 so that the specific relative loss functions can be obtained. The relative loss functions λ_{•∗} (• = P, B, N; ∗ = P, Q), expressed by IFNs, are displayed in Table 6. In accordance with Definitions 2.3 and 2.4, it is easy to verify that they satisfy the conditions λ_PP ≤ λ_BP < λ_NP and λ_NQ ≤ λ_BQ < λ_PQ.

Example 3.2. A company intends to invest its idle money in a project and invites experts to evaluate three projects X_1, X_2 and X_3 according to attribute G (i.e., profit) using IFNs. Suppose the evaluation values of the three projects are x_1 = (0.2, 0.4), x_2 = (0.3, 0.2) and x_3 = (0.6, 0.3) respectively, and θ = 0.3. For the attribute G, the IFN's relative loss functions are computed in Tables 7, 8 and 9.

In the light of Definitions 2.3 and 2.4, we get x_1 < x_2 < x_3, hence X_1 ≺ X_2 ≺ X_3. In addition, we obtain the following inequalities: λ_BP^1 < λ_BP^2 < λ_BP^3, λ_NP^1 < λ_NP^2 < λ_NP^3, λ_PQ^1 > λ_PQ^2 > λ_PQ^3 and λ_BQ^1 > λ_BQ^2 > λ_BQ^3. In conclusion, different evaluation values yield different relative loss functions; that is, even if the three projects all have the attribute G or all lack it, their losses differ when they adopt the same decision.

When X_i ∈ C (i = 1, 2, 3), the losses of incorporating X_1 into the indecisive domain a_B and the negative domain a_N are λ_BP^1 = (0.0648, 0.7597) and λ_NP^1 = (0.2, 0.4) respectively, which are smaller than the losses of incorporating X_2 into the corresponding domains (λ_BP^2 = (0.1015, 0.6170) and λ_NP^2 = (0.3, 0.2)); both in turn are smaller than the losses of incorporating X_3 into the corresponding domains (λ_BP^3 = (0.2403, 0.6968) and λ_NP^3 = (0.6, 0.3)). The interpretation is that the evaluation values meet the condition x_1 < x_2 < x_3, which shows that the profit of X_1 is lower than that of X_2, and both are lower than that of X_3. When the three projects have the attribute G, rejecting X_1 brings a lower loss than rejecting X_2, and both bring lower losses than rejecting X_3; hesitating over X_1 gives the same ordering. Conversely, when X_i ∈ ¬C (i = 1, 2, 3), the losses of incorporating X_1 into the positive domain a_P and the indecisive domain a_B are λ_PQ^1 = (0.4, 0.2) and λ_BQ^1 = (0.1421, 0.6170) respectively, which are bigger than the losses of incorporating X_2 into the corresponding domains (λ_PQ^2 = (0.2, 0.3) and λ_BQ^2 = (0.0648, 0.6968)); both in turn are bigger than the losses of incorporating X_3 into the corresponding domains (λ_PQ^3 = (0.3, 0.6) and λ_BQ^3 = (0.1015, 0.8579)). The condition x_1 < x_2 < x_3 likewise explains the conclusion that accepting or hesitating over X_1 brings a higher loss than for X_2, and both bring higher losses than for X_3.

To further explain the IFN's relative loss functions displayed in Table 5, inverse loss functions [9] are introduced. Assume F is the opposite of G, and let y = x̄ denote the evaluation value of X with respect to F when the evaluation value of X with respect to G is x. For example, if G expresses "energy conservation" in a technical evaluation problem, then F represents the opposite side, "waste of energy". If the evaluation value of X with respect to G is the IFN (0.7, 0.1), then the evaluation value


Table 10
The IFN's inverse loss functions.

X      D       ¬D
b_P    o       ȳ
b_B    θy      θȳ
b_N    y       o

Table 11
The transformation of the IFN's inverse loss functions.

X      ¬C      C
a_N    o       x
a_B    θx̄      θx
a_P    x̄       o

of X with respect to F is calculated as (0.1, 0.7). That is to say, the MD of the performance of X on "energy conservation" is 0.7 and the N-MD is 0.1, while the MD of the performance of X on "waste of energy" is 0.1 and the N-MD is 0.7. So x = (0.7, 0.1) and y = (0.1, 0.7) have an equivalent connotation. Considering the correlation between loss functions and inverse loss functions, and based on the relative loss functions of X displayed in Table 5, the inverse loss functions of X are written as in Table 10, where D and ¬D represent the two states of the attribute F, and the three actions b_P, b_B and b_N represent classifying X into POS(D), BND(D) and NEG(D), respectively.

Based on the above statement, we know that D = ¬C, and that b_P, b_B and b_N are equivalent to a_N, a_B and a_P, respectively. We then deduce the properties POS(D) = NEG(C), BND(D) = BND(C) and NEG(D) = POS(C). According to y = x̄, we get ȳ = (x̄)‾ = x. Therefore, the inverse loss functions are converted to the forms of Table 11, which corresponds with the initial relative loss functions shown in Table 5. In terms of the inverse loss functions described in Tables 10 and 11, we can draw a conclusion: when X ∈ D, the losses of incorporating X into the positive domain b_P, indecisive domain b_B and negative domain b_N are o, θy and y respectively, in one-to-one correspondence with the losses of incorporating X into the negative domain a_N, indecisive domain a_B and positive domain a_P when X ∈ ¬C; when X ∈ ¬D, the losses of incorporating X into the positive domain b_P, indecisive domain b_B and negative domain b_N are ȳ, θȳ and o respectively, in one-to-one correspondence with the losses of incorporating X into the negative domain a_N, indecisive domain a_B and positive domain a_P when X ∈ C. Therefore, the IFN's inverse loss functions can be used to handle the transformation between cost attributes and benefit attributes.

3.3. The classification rules of three-way decisions

According to the procedure of DTRSs, the conditional probability is another necessary ingredient. Suppose Pr(C|[X]) expresses the conditional probability of the alternative X having the attribute G, which is a crisp number. Then the conditional probability of the alternative X not having the attribute G is Pr(¬C|[X]), and Pr(C|[X]) + Pr(¬C|[X]) = 1 by a reasonable semantic interpretation. Based on Table 6, we can calculate the expected loss functions in accordance with formulas (13)–(15). For the alternative X, the expected losses of taking actions a_• (• = P, B, N) are defined as follows:









$$R(a_P\,|\,[X]) = (v, u)\,\Pr(\neg C\,|\,[X]) \qquad (16)$$
$$R(a_B\,|\,[X]) = \left(1 - (1 - u)^\theta, v^\theta\right)\Pr(C\,|\,[X]) \oplus \left(1 - (1 - v)^\theta, u^\theta\right)\Pr(\neg C\,|\,[X]) \qquad (17)$$
$$R(a_N\,|\,[X]) = (u, v)\,\Pr(C\,|\,[X]) \qquad (18)$$

Theorem 1. When Pr(C|[X]) is a fixed value, based on Definition 2.5, the expected losses R(a_•|[X]) (• = P, B, N) can be calculated as follows:







$$R(a_P\,|\,[X]) = \left(1 - (1 - v)^{1 - \Pr(C|[X])},\; u^{1 - \Pr(C|[X])}\right) \qquad (19)$$
$$R(a_B\,|\,[X]) = \left(1 - (1 - u)^{\theta \Pr(C|[X])} (1 - v)^{\theta (1 - \Pr(C|[X]))},\; v^{\theta \Pr(C|[X])} u^{\theta (1 - \Pr(C|[X]))}\right) \qquad (20)$$
$$R(a_N\,|\,[X]) = \left(1 - (1 - u)^{\Pr(C|[X])},\; v^{\Pr(C|[X])}\right) \qquad (21)$$

Proof. Because Pr(C|[X]) + Pr(¬C|[X]) = 1 and Pr(C|[X]) is constant, based on the operational rules of IFNs we get:

$$R(a_P\,|\,[X]) = (v, u)\,\Pr(\neg C\,|\,[X]) = \left(1 - (1 - v)^{1 - \Pr(C|[X])},\; u^{1 - \Pr(C|[X])}\right);$$

$$\begin{aligned} R(a_B\,|\,[X]) &= \left(1 - (1 - u)^\theta, v^\theta\right)\Pr(C\,|\,[X]) \oplus \left(1 - (1 - v)^\theta, u^\theta\right)\Pr(\neg C\,|\,[X]) \\ &= \left(1 - \left((1 - u)^\theta\right)^{\Pr(C|[X])},\; \left(v^\theta\right)^{\Pr(C|[X])}\right) \oplus \left(1 - \left((1 - v)^\theta\right)^{1 - \Pr(C|[X])},\; \left(u^\theta\right)^{1 - \Pr(C|[X])}\right) \\ &= \left(1 - \left((1 - u)^\theta\right)^{\Pr(C|[X])}\left((1 - v)^\theta\right)^{1 - \Pr(C|[X])},\; \left(v^\theta\right)^{\Pr(C|[X])}\left(u^\theta\right)^{1 - \Pr(C|[X])}\right) \\ &= \left(1 - (1 - u)^{\theta \Pr(C|[X])}(1 - v)^{\theta (1 - \Pr(C|[X]))},\; v^{\theta \Pr(C|[X])} u^{\theta (1 - \Pr(C|[X]))}\right); \end{aligned}$$

$$R(a_N\,|\,[X]) = (u, v)\,\Pr(C\,|\,[X]) = \left(1 - (1 - u)^{\Pr(C|[X])},\; v^{\Pr(C|[X])}\right).$$

Thus, the statements of Theorem 1 hold. Q.E.D.
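Theorem 1 gives closed forms that are convenient to implement. The following sketch (ours) evaluates Eqs. (19)–(21) for a given conditional probability pr = Pr(C|[X]), reusing the IFN class from Section 2.1:

```python
def expected_losses(x: IFN, theta: float, pr: float) -> dict:
    """Expected losses R(a_P), R(a_B), R(a_N) of Eqs. (19)-(21), as IFNs."""
    u, v = x.u, x.v
    return {
        "aP": IFN(1 - (1 - v) ** (1 - pr), u ** (1 - pr)),
        "aB": IFN(1 - (1 - u) ** (theta * pr) * (1 - v) ** (theta * (1 - pr)),
                  v ** (theta * pr) * u ** (theta * (1 - pr))),
        "aN": IFN(1 - (1 - u) ** pr, v ** pr),
    }
```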

Based on the Bayesian decision procedure, the minimum-cost decision rule is the best, so we get:

(P̂) If R(a_P|[X]) ≤ R(a_B|[X]) and R(a_P|[X]) ≤ R(a_N|[X]), adopt X ∈ POS(C);
(B̂) If R(a_B|[X]) ≤ R(a_P|[X]) and R(a_B|[X]) ≤ R(a_N|[X]), adopt X ∈ BND(C);
(N̂) If R(a_N|[X]) ≤ R(a_P|[X]) and R(a_N|[X]) ≤ R(a_B|[X]), adopt X ∈ NEG(C).

Because an IFN contains both the MD and the N-MD, optimists tend to pay more attention to the MD, while pessimists are more concerned with the N-MD [3,14]. Therefore, based on the different attitudes of decision makers, we discuss the classification rules of TWDs from the positive viewpoint, the negative viewpoint and the comprehensive viewpoint.

3.3.1. Positive viewpoint

From an optimistic perspective, we focus only on the MDs of the IFNs, because the MDs of the expected losses are consistent with the expected losses [14]. The expected losses R(a_•|[X]) (• = P, B, N) can then be represented by their MDs, and the classification rules are described as below:

(P1) If u_P ≤ u_B and u_P ≤ u_N, adopt X ∈ POS(C);
(B1) If u_B ≤ u_P and u_B ≤ u_N, adopt X ∈ BND(C);
(N1) If u_N ≤ u_P and u_N ≤ u_B, adopt X ∈ NEG(C),

where u_P = 1 − (1 − v)^{1 − Pr(C|[X])}, u_B = 1 − (1 − u)^{θ Pr(C|[X])} (1 − v)^{θ(1 − Pr(C|[X]))} and u_N = 1 − (1 − u)^{Pr(C|[X])}.

According to u_P ≤ u_B, we get 1 − (1 − v)^{1 − Pr(C|[X])} ≤ 1 − (1 − u)^{θ Pr(C|[X])} (1 − v)^{θ(1 − Pr(C|[X]))}, from which we can deduce the following inequalities:

$$\begin{aligned} & (1 - v)^{1 - \Pr(C|[X])} \ge (1 - u)^{\theta \Pr(C|[X])}(1 - v)^{\theta(1 - \Pr(C|[X]))} \\ & \Rightarrow \left(1 - \Pr(C\,|\,[X])\right)\lg(1 - v) \ge \theta \Pr(C\,|\,[X])\lg(1 - u) + \theta\left(1 - \Pr(C\,|\,[X])\right)\lg(1 - v) \\ & \Rightarrow \lg(1 - v) - \Pr(C\,|\,[X])\lg(1 - v) \ge \Pr(C\,|\,[X])\,\theta\lg(1 - u) + \theta\lg(1 - v) - \Pr(C\,|\,[X])\,\theta\lg(1 - v) \\ & \Rightarrow (1 - \theta)\lg(1 - v) \ge \Pr(C\,|\,[X])\left(\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)\right) \\ & \Rightarrow \Pr(C\,|\,[X]) \ge \frac{(1 - \theta)\lg(1 - v)}{\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)}, \end{aligned}$$

where the last inequality reverses direction because θ lg(1 − u) + (1 − θ) lg(1 − v) < 0. Similarly, from u_P ≤ u_N we deduce Pr(C|[X]) ≥ lg(1 − v) / (lg(1 − u) + lg(1 − v)); from u_B ≤ u_N we deduce Pr(C|[X]) ≥ θ lg(1 − v) / ((1 − θ) lg(1 − u) + θ lg(1 − v)). Therefore, the classification rules can be rewritten in the following simplified form:

 









 



















(P1′) If Pr(C|[X]) ≥ α_1 and Pr(C|[X]) ≥ γ_1, adopt X ∈ POS(C);
(B1′) If Pr(C|[X]) ≤ α_1 and Pr(C|[X]) ≥ β_1, adopt X ∈ BND(C);
(N1′) If Pr(C|[X]) ≤ β_1 and Pr(C|[X]) ≤ γ_1, adopt X ∈ NEG(C),

where

$$\alpha_1 = \frac{(1 - \theta)\lg(1 - v)}{\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)}; \qquad (22)$$
$$\beta_1 = \frac{\theta\lg(1 - v)}{(1 - \theta)\lg(1 - u) + \theta\lg(1 - v)}; \qquad (23)$$
$$\gamma_1 = \frac{\lg(1 - v)}{\lg(1 - u) + \lg(1 - v)}. \qquad (24)$$

It is important to note that the above expressions require u ≠ 1, v ≠ 1 and u + v ≠ 0. We treat these particular cases separately. (i) If u = 1, i.e., v = 0, then u_P = 0, u_B = 1 and u_N = 1; the classification result is the single decision of acceptance. (ii) If v = 1, i.e., u = 0, then u_P = 1, u_B = 1 and u_N = 0; the classification result is the single decision of rejection. (iii) If u + v = 0, i.e., u = 0 and v = 0, then u_P = 0, u_B = 0 and u_N = 0; no decision can be obtained, which is equivalent to no evaluation at all. This case is meaningless in realistic evaluation, so we exclude it from the subsequent discussion.

Theorem 2. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then: (i) the threshold α_1 decreases monotonically with the parameter θ; (ii) the threshold β_1 increases monotonically with the parameter θ; (iii) the threshold γ_1 is independent of the parameter θ.

Proof. Because γ_1 = lg(1 − v) / (lg(1 − u) + lg(1 − v)), it has nothing to do with the parameter θ, which shows that γ_1 is independent of θ. We therefore only need to prove (i) and (ii).

Firstly, we regard α_1(θ) as a function of θ and compute its derivative:

$$\alpha_1'(\theta) = \frac{-\left(\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)\right)\lg(1 - v) - (1 - \theta)\left(\lg(1 - u) - \lg(1 - v)\right)\lg(1 - v)}{\left(\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)\right)^2}.$$

Because the denominator (θ lg(1 − u) + (1 − θ) lg(1 − v))² > 0, we only need to discuss the numerator:

$$\begin{aligned} & -\left(\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)\right)\lg(1 - v) - (1 - \theta)\left(\lg(1 - u) - \lg(1 - v)\right)\lg(1 - v) \\ &= -\theta\left(\lg(1 - u) - \lg(1 - v)\right)\lg(1 - v) - \lg(1 - v)\lg(1 - v) - \left(\lg(1 - u) - \lg(1 - v)\right)\lg(1 - v) + \theta\left(\lg(1 - u) - \lg(1 - v)\right)\lg(1 - v) \\ &= -\lg(1 - v)\lg(1 - v) - \lg(1 - u)\lg(1 - v) + \lg(1 - v)\lg(1 - v) \\ &= -\lg(1 - u)\lg(1 - v). \end{aligned}$$

It is easy to see that −lg(1 − u) lg(1 − v) < 0, so α_1′(θ) < 0. By the relation between the derivative and monotonicity, α_1 decreases monotonically with the parameter θ. Similarly, regarding β_1(θ) as a function of θ, it can be proved in the same way that β_1 increases monotonically with θ; the proof procedure is omitted. Therefore, conclusions (i), (ii) and (iii) are all correct. Q.E.D.

Lemma 1. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then: (i) the scope of the positive region x ∈ POS(C) becomes larger as the parameter θ increases; (ii) the scope of the boundary region x ∈ BND(C) becomes smaller as θ increases; (iii) the scope of the negative region x ∈ NEG(C) becomes larger as θ increases.

According to Theorem 2 and Lemma 1, α_1 becomes smaller and β_1 becomes larger as θ increases, which makes the positive and negative domains larger and the boundary domain smaller. In other words, a larger θ broadens the domain of deterministic decisions and narrows the domain of nondeterministic decisions; conversely, a smaller θ enlarges the domain of nondeterministic decisions and shrinks the domain of deterministic decisions. As θ grows, there must be a parameter value that makes α_1 ≤ β_1, at which point the decision process degenerates into a two-way decision.

Theorem 3. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then: (i) when 0 ≤ θ < 0.5, the decision problem is a TWD; (ii) when 0.5 ≤ θ ≤ 1, the decision problem is a two-way decision.

Proof. According to the decision rule (B1′), we must have β_1 < α_1, so a TWD needs to satisfy the condition β_1(θ) − α_1(θ) < 0. First of all, we calculate


Fig. 1. The TWD model with different θ .

Table 12
The relationship between θ and the thresholds α_1, β_1 and γ_1.

θ               The thresholds
θ = 0           α_1 = 1, β_1 = 0
0 < θ < 0.5     0 < β_1 < γ_1 < α_1 < 1
θ = 0.5         α_1 = β_1 = γ_1
0.5 < θ < 1     0 < α_1 < γ_1 < β_1 < 1
θ = 1           α_1 = 0, β_1 = 1

$$\begin{aligned} \beta_1(\theta) - \alpha_1(\theta) &= \frac{\theta\lg(1 - v)}{(1 - \theta)\lg(1 - u) + \theta\lg(1 - v)} - \frac{(1 - \theta)\lg(1 - v)}{\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)} \\ &= \frac{\theta^2\lg(1 - u)\lg(1 - v) + \theta(1 - \theta)\lg^2(1 - v) - (1 - \theta)^2\lg(1 - u)\lg(1 - v) - \theta(1 - \theta)\lg^2(1 - v)}{\left((1 - \theta)\lg(1 - u) + \theta\lg(1 - v)\right)\left(\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)\right)} \\ &= \frac{(2\theta - 1)\lg(1 - u)\lg(1 - v)}{\left((1 - \theta)\lg(1 - u) + \theta\lg(1 - v)\right)\left(\theta\lg(1 - u) + (1 - \theta)\lg(1 - v)\right)}. \end{aligned}$$

Because 0 ≤ θ ≤ 1, lg(1 − u) < 0 and lg(1 − v) < 0, it is easy to see that lg(1 − u) lg(1 − v) > 0 and that the denominator ((1 − θ) lg(1 − u) + θ lg(1 − v))(θ lg(1 − u) + (1 − θ) lg(1 − v)) > 0. For the inequality β_1(θ) − α_1(θ) < 0 to hold, we need 2θ − 1 < 0, i.e., 0 ≤ θ < 0.5. Therefore, when 0 ≤ θ < 0.5, the decision problem is a TWD. Conversely, if β_1(θ) − α_1(θ) ≥ 0, namely 0.5 ≤ θ ≤ 1, then the decision rule (B1′) cannot be satisfied, so the decision rules only include (P1′) and (N1′), which gives the two-way decision rules. Therefore, when 0.5 ≤ θ ≤ 1, the decision problem is a two-way decision. Q.E.D.

Fig. 1 shows the classification rules of TWDs under different values of the parameter θ, and Table 12 explains in detail how the thresholds change with θ for an IFN x = (u, v) other than (0, 1) and (1, 0). In the light of Fig. 1 and Table 12: when θ = 0, we have α_1 = 1 and β_1 = 0, which means there is only the region of indecision, and TWDs reduce to Pawlak rough sets; when 0 < θ < 0.5, the thresholds satisfy the decision rules of TWDs; when θ = 0.5, we have α_1 = β_1 = γ_1 and the TWDs degenerate into two-way decisions, where the domain of indecision is the empty set; when 0.5 < θ < 1, we also get a two-way decision model satisfying 0 < α_1 < γ_1 < β_1 < 1; when θ = 1, we have α_1 = 0 and β_1 = 1, and it is still a two-way decision governed by γ_1. We can further infer that the two-way decisions are decided only by the threshold γ_1 when 0.5 ≤ θ ≤ 1. Moreover, it can be concluded that the bigger the parameter θ, the more risk-taking the decision maker is. This once again confirms the rationality of Jia and Liu [9] calling the parameter θ a risk avoidance coefficient, denoting the degree of the decision maker's aversion to the risk or loss of indecision.

Theorem 4. Let x_1 = (u_1, v_1) and x_2 = (u_2, v_2) be two arbitrary IFNs except (0, 1) and (1, 0), with u_1 ≤ u_2 and v_1 ≥ v_2. Let the thresholds α_1^1, β_1^1 correspond to x_1 and α_1^2, β_1^2 to x_2. Then α_1^1 ≥ α_1^2 and β_1^1 ≥ β_1^2.

Proof. From u_1 ≤ u_2 and v_1 ≥ v_2, we get 0 > lg(1 − u_1) ≥ lg(1 − u_2) and lg(1 − v_1) ≤ lg(1 − v_2) < 0. When 0 < θ < 1, we have 0 > θ lg(1 − u_1) ≥ θ lg(1 − u_2) and (1 − θ) lg(1 − v_1) ≤ (1 − θ) lg(1 − v_2) < 0. Then we can deduce that

$$0 < \frac{\theta\lg(1 - u_1)}{(1 - \theta)\lg(1 - v_1)} \le \frac{\theta\lg(1 - u_2)}{(1 - \theta)\lg(1 - v_2)}.$$


Fig. 2. The three regions divided by the thresholds α_1^i, β_1^i and γ_1^i (i = 1, 2, 3) for each project.

In addition, the threshold α_1 = (1 − θ) lg(1 − v) / (θ lg(1 − u) + (1 − θ) lg(1 − v)) can be rewritten as

$$\alpha_1 = \frac{1}{\dfrac{\theta\lg(1 - u)}{(1 - \theta)\lg(1 - v)} + 1},$$

so, based on the inequality obtained above, we can get

$$\frac{1}{\dfrac{\theta\lg(1 - u_1)}{(1 - \theta)\lg(1 - v_1)} + 1} \ge \frac{1}{\dfrac{\theta\lg(1 - u_2)}{(1 - \theta)\lg(1 - v_2)} + 1},$$

i.e., α_1^1 ≥ α_1^2.

α1 (u ) as a function of u, and then compute the derivative of α1 (u ):

α1 (u ) = −(1 − θ)lg (1 − v )

θ (1−−u )1ln10 (θ lg (1 − u ) + (1 − θ)lg (1 − v ))2

=

θ(1 − θ)lg (1 − v ) . (1 − u )ln10(θ lg (1 − u ) + (1 − θ)lg (1 − v ))2

It is known that 1 − u > 0, ln10 > 0, (θ lg (1 − u ) + (1 − θ)lg (1 − v ))2 > 0, lg (1 − v ) < 0, and 0 ≤ θ ≤ 1, so α1 (u ) ≤ 0. According to the relation between the derivative and the monotonicity, we know the threshold α1 decreases with the MD u. Then, we take α1 ( v ) as a function of v, and then compute the derivative of α1 ( v ): 1−θ)lg (1−u ) α1 ( v ) = (1− v )ln10(θ−θ( ≥ 0. Thus, the threshold α1 increases with the N-MD v. lg (1−u )+(1−θ)lg (1− v ))2 Similarly, we also take β1 (u ) as a function of u, and β1 ( v ) as a function of v. It is proved in the same way as

α1 (u ) and

α1 ( v ) separately that β1 decreases with the MD u and increases with the N-MD v. Therefore, the conclusion (i) and (ii) are all correct.

Q. E. D.

Example 3.3. We still use the data of Example 3.2. The evaluation values of the three projects X_1, X_2 and X_3 are x_1 = (0.2, 0.4), x_2 = (0.3, 0.2) and x_3 = (0.6, 0.3) respectively, and their loss functions are shown in Tables 7, 8 and 9. Assume the risk avoidance coefficient is θ = 0.3; then we can calculate the thresholds: α_1^1 = 0.8423, β_1^1 = 0.4952; α_1^2 = 0.5935, β_1^2 = 0.2114; α_1^3 = 0.4760, β_1^3 = 0.1430. Fig. 2 shows the three regions divided by these thresholds intuitively. The evaluation values x_1 and x_2 meet the conditions u_1 < u_2 and v_1 > v_2, so α_1^1 ≥ α_1^2 and β_1^1 ≥ β_1^2 in accordance with Theorem 4, as illustrated in Fig. 2. It follows that X_1 has a smaller scope of acceptance and a bigger scope of rejection than X_2. From x_1 and x_3, we can draw similar conclusions.

3.3.2. Negative viewpoint

In this part, we concentrate on the N-MDs of the IFNs from a pessimistic perspective, because the N-MDs of the expected losses run opposite to the expected losses [14]: the bigger the N-MD of the IFN is, the smaller the expected loss R(a_•|[X]) (• = P, B, N) is. The classification rules are then described as below:

(P2) If v_P ≥ v_B and v_P ≥ v_N, adopt X ∈ POS(C);
(B2) If v_B ≥ v_P and v_B ≥ v_N, adopt X ∈ BND(C);
(N2) If v_N ≥ v_P and v_N ≥ v_B, adopt X ∈ NEG(C),


where v_P = u^{1 − Pr(C|[X])}, v_B = v^{θ Pr(C|[X])} u^{θ(1 − Pr(C|[X]))} and v_N = v^{Pr(C|[X])}. From the above three decision rules, we can likewise derive three thresholds:

$$\alpha_2 = \frac{(1 - \theta)\lg u}{(1 - \theta)\lg u + \theta\lg v}; \qquad (25)$$
$$\beta_2 = \frac{\theta\lg u}{\theta\lg u + (1 - \theta)\lg v}; \qquad (26)$$
$$\gamma_2 = \frac{\lg u}{\lg u + \lg v}. \qquad (27)$$

Then, the decision rules can be rewritten as the following simplification:

 









 



















(P2′) If Pr(C|[X]) ≥ α_2 and Pr(C|[X]) ≥ γ_2, adopt X ∈ POS(C);
(B2′) If Pr(C|[X]) ≤ α_2 and Pr(C|[X]) ≥ β_2, adopt X ∈ BND(C);
(N2′) If Pr(C|[X]) ≤ β_2 and Pr(C|[X]) ≤ γ_2, adopt X ∈ NEG(C).
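The negative-viewpoint thresholds (25)–(27) admit the same kind of sketch (ours, continuing the code above; again the log base cancels):

```python
def thresholds_negative(x: IFN, theta: float):
    """alpha_2, beta_2, gamma_2 of Eqs. (25)-(27)."""
    a, b = log(x.u), log(x.v)
    return ((1 - theta) * a / ((1 - theta) * a + theta * b),
            theta * a / (theta * a + (1 - theta) * b),
            a / (a + b))

print(thresholds_negative(IFN(0.2, 0.4), theta=0.3))  # ~(0.8039, 0.4295, ...), cf. Example 3.4
```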

In line with α_1, β_1 and γ_1, these expressions also require u ≠ 1, v ≠ 1 and u + v ≠ 0. These particular cases were explained in subsection 3.3.1 and are not reiterated here.

Theorem 6. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then: (i) the threshold α_2 decreases monotonically with the parameter θ; (ii) the threshold β_2 increases monotonically with the parameter θ; (iii) the threshold γ_2 is independent of the parameter θ.

Lemma 2. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then, from a pessimistic perspective, we also have: (i) the scope of the positive region x ∈ POS(C) becomes larger as the parameter θ increases; (ii) the scope of the boundary region x ∈ BND(C) becomes smaller as θ increases; (iii) the scope of the negative region x ∈ NEG(C) becomes larger as θ increases.

Similar to Lemma 1, Lemma 2 also illustrates that the greater the parameter θ, the larger the domain of certain decisions and the smaller the domain of uncertain decisions.

Theorem 7. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then: (i) when 0 ≤ θ < 0.5, the decision problem is a TWD; (ii) when 0.5 ≤ θ ≤ 1, the decision problem is a two-way decision.

Similar to Fig. 1 and Table 12, from the negative viewpoint, different values of the parameter θ also change the decision rules of TWDs and the thresholds α_2, β_2 and γ_2. When θ = 0, we have α_2 = 1 and β_2 = 0, which means there is only the region of indecision, and TWDs reduce to Pawlak rough sets; when 0 < θ < 0.5, the thresholds α_2, β_2 and γ_2 satisfy the decision rules of TWDs; when θ = 0.5, we have α_2 = β_2 = γ_2, and the TWDs degenerate into two-way decisions, where the domain of indecision is the empty set; when 0.5 < θ < 1, it is also a two-way decision satisfying 0 < α_2 < γ_2 < β_2 < 1; when θ = 1, we have α_2 = 0 and β_2 = 1, and it is still a two-way decision governed by γ_2. Therefore, the two-way decisions are decided only by the threshold γ_2 when 0.5 ≤ θ ≤ 1.

Theorem 8. Let x_1 = (u_1, v_1) and x_2 = (u_2, v_2) be two arbitrary IFNs except (0, 1) and (1, 0), with u_1 ≤ u_2 and v_1 ≥ v_2. Let the thresholds α_2^1, β_2^1 correspond to x_1 and α_2^2, β_2^2 to x_2. Then α_2^1 ≥ α_2^2 and β_2^1 ≥ β_2^2.

Theorem 9. Let x = (u, v) be an arbitrary IFN except (0, 1) and (1, 0). Then: (i) if the N-MD v is constant, the thresholds α_2 and β_2 decrease with the MD u; (ii) if the MD u is constant, the thresholds α_2 and β_2 increase with the N-MD v.

The proofs of Theorems 6–9 are similar to those of Theorems 2–5, so we do not repeat them one by one.

Theorem 10. If u + v = 1, then the decision rules (P1)–(N1) are the same as the decision rules (P2)–(N2).

Fig. 3. The three domains partitioned by the thresholds α2^i, β2^i and γ2^i (i = 1, 2, 3) for each project.

Proof. According to u + v = 1, we can get u = 1 − v and v = 1 − u; then the thresholds α2, β2 and γ2 can be converted into the following forms:

$$\alpha_2 = \frac{(1-\theta)\lg u}{(1-\theta)\lg u + \theta\lg v} = \frac{(1-\theta)\lg(1-v)}{\theta\lg(1-u) + (1-\theta)\lg(1-v)} = \alpha_1;$$

$$\beta_2 = \frac{\theta\lg u}{\theta\lg u + (1-\theta)\lg v} = \frac{\theta\lg(1-v)}{(1-\theta)\lg(1-u) + \theta\lg(1-v)} = \beta_1;$$

$$\gamma_2 = \frac{\lg u}{\lg u + \lg v} = \frac{\lg(1-v)}{\lg(1-u) + \lg(1-v)} = \gamma_1.$$

Thus, the decision rules (P1)-(N1) are the same as the decision rules (P2)-(N2). Q.E.D.

Example 3.4. According to Example 3.3, suppose the risk avoidance coefficient is still θ = 0.3; we continue to calculate the thresholds from the negative viewpoint: α2^1 = 0.8039, β2^1 = 0.4295; α2^2 = 0.6358, β2^2 = 0.2428; α2^3 = 0.4975, β2^3 = 0.1539. Fig. 3 visually shows the three domains partitioned by these thresholds from the negative viewpoint. It is easy to illustrate Theorem 8 from Fig. 3. It also shows that a greater IFN leads to a bigger scope of acceptance and a smaller scope of rejection, which is similar to Theorem 4.

From Example 3.3 and Example 3.4, when the sum of the MD and the N-MD of an IFN is less than one, the corresponding thresholds differ, e.g., α1^1 ≠ α2^1 and β1^1 ≠ β2^1. In this case, the project X_i may get inconsistent decisions. Aiming at this condition, the MD and the N-MD should be captured simultaneously. We call this the comprehensive perspective.

3.3.3. Comprehensive viewpoint

To compare the expected losses R(a•|[X]) (• = P, B, N) of the alternative X, based on the comparison method of IFNs, we bring the IFN's ideal positive degree into the decision rules of TWD. According to Theorem 1 and Definition 2.3, the ideal positive degrees of the expected losses R(a•|[X]) (• = P, B, N) can be calculated as follows:

$$I\big(R(a_P|[X])\big) = 1 - \sqrt{\frac{\big(u^{1-\Pr(C|[X])}\big)^2 + \big((1-v)^{1-\Pr(C|[X])}\big)^2}{2}};\qquad(28)$$

$$I\big(R(a_B|[X])\big) = 1 - \sqrt{\frac{\big((1-u)^{\theta\Pr(C|[X])}(1-v)^{\theta(1-\Pr(C|[X]))}\big)^2 + \big(u^{\theta(1-\Pr(C|[X]))}v^{\theta\Pr(C|[X])}\big)^2}{2}};\qquad(29)$$

$$I\big(R(a_N|[X])\big) = 1 - \sqrt{\frac{\big((1-u)^{\Pr(C|[X])}\big)^2 + \big(v^{\Pr(C|[X])}\big)^2}{2}}.\qquad(30)$$

Then, the classification rules can be rewritten in the following simplified form:

(P3) If I(R(a_P|[X])) ≤ I(R(a_B|[X])) and I(R(a_P|[X])) ≤ I(R(a_N|[X])), adopt X ∈ POS(C);
(B3) If I(R(a_B|[X])) ≤ I(R(a_P|[X])) and I(R(a_B|[X])) ≤ I(R(a_N|[X])), adopt X ∈ BND(C);
(N3) If I(R(a_N|[X])) ≤ I(R(a_P|[X])) and I(R(a_N|[X])) ≤ I(R(a_B|[X])), adopt X ∈ NEG(C).
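A minimal sketch of these comprehensive-viewpoint rules for a single attribute is given below; it is our own illustration, the function names are hypothetical, and the formulas are transcribed directly from Eqs. (28)-(30).

```python
import math

def ideal_degrees(u, v, pr, theta):
    """Ideal positive degrees (28)-(30) of the expected losses for a
    single-attribute IFN x = (u, v), with conditional probability
    pr = Pr(C|[X]) and risk avoidance coefficient theta."""
    i_p = 1 - math.sqrt(((u ** (1 - pr)) ** 2 + ((1 - v) ** (1 - pr)) ** 2) / 2)
    i_b = 1 - math.sqrt(
        (((1 - u) ** (theta * pr) * (1 - v) ** (theta * (1 - pr))) ** 2
         + (u ** (theta * (1 - pr)) * v ** (theta * pr)) ** 2) / 2)
    i_n = 1 - math.sqrt((((1 - u) ** pr) ** 2 + (v ** pr) ** 2) / 2)
    return i_p, i_b, i_n

def classify(u, v, pr, theta):
    """Rules (P3)-(N3): the action whose expected loss has the smallest
    ideal positive degree wins (ties break arbitrarily in this sketch)."""
    i_p, i_b, i_n = ideal_degrees(u, v, pr, theta)
    return min((("POS", i_p), ("BND", i_b), ("NEG", i_n)), key=lambda t: t[1])[0]

# For x = (0.6, 0.3) and theta = 0.3, the adopted region moves from
# rejection through hesitation to acceptance as Pr(C|[X]) grows.
for pr in (0.05, 0.2, 0.8):
    print(pr, classify(0.6, 0.3, pr, theta=0.3))
```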


Theorem 11. If u + v = 1, then the decision rules (P3)-(N3) are the same as the decision rules (P1)-(N1) and (P2)-(N2).

Proof. According to u + v = 1, we can get u = 1 − v and v = 1 − u; then we can deduce the following equations based on formulas (28)-(30):

$$I\big(R(a_P|[X])\big) = 1 - \sqrt{\frac{\big(u^{1-\Pr(C|[X])}\big)^2 + \big((1-v)^{1-\Pr(C|[X])}\big)^2}{2}} = 1 - \sqrt{\frac{2\big((1-v)^{1-\Pr(C|[X])}\big)^2}{2}} = 1 - (1-v)^{1-\Pr(C|[X])} = u_P,$$

since $u = 1-v$;

$$I\big(R(a_N|[X])\big) = 1 - \sqrt{\frac{\big((1-u)^{\Pr(C|[X])}\big)^2 + \big(v^{\Pr(C|[X])}\big)^2}{2}} = 1 - \sqrt{\frac{2\big((1-u)^{\Pr(C|[X])}\big)^2}{2}} = 1 - (1-u)^{\Pr(C|[X])} = u_N,$$

since $v = 1-u$;

$$I\big(R(a_B|[X])\big) = 1 - \sqrt{\frac{\big((1-u)^{\theta\Pr(C|[X])}(1-v)^{\theta(1-\Pr(C|[X]))}\big)^2 + \big(u^{\theta(1-\Pr(C|[X]))}v^{\theta\Pr(C|[X])}\big)^2}{2}} = 1 - (1-u)^{\theta\Pr(C|[X])}(1-v)^{\theta(1-\Pr(C|[X]))} = u_B,$$

since $u^{\theta(1-\Pr(C|[X]))}v^{\theta\Pr(C|[X])} = (1-v)^{\theta(1-\Pr(C|[X]))}(1-u)^{\theta\Pr(C|[X])}$ when $u + v = 1$.

Then, the decision rules (P3)-(N3) are converted into the decision rules (P1′)-(N1′) derived from the positive viewpoint:

(P1′) If u_P ≤ u_B and u_P ≤ u_N, adopt X ∈ POS(C);
(B1′) If u_B ≤ u_P and u_B ≤ u_N, adopt X ∈ BND(C);
(N1′) If u_N ≤ u_P and u_N ≤ u_B, adopt X ∈ NEG(C).

Because the decision rules (P1)-(N1) are the simplifications of the decision rules (P1′)-(N1′), the decision rules (P3)-(N3) are the same as the decision rules (P1)-(N1). Similarly, we can get that the decision rules (P3)-(N3) are the same as the decision rules (P2)-(N2). Therefore, the decision rules (P3)-(N3) are the same as the decision rules (P1)-(N1) and (P2)-(N2). Q.E.D.

Theorem 11 shows that the decision rules (P1)-(N1) and (P2)-(N2) are particular cases of the decision rules (P3)-(N3). In other words, the decision rules (P1)-(N1) and (P2)-(N2) can only be used if the sum of the MD and the N-MD is one. Therefore, we use the decision rules (P3)-(N3) to construct the TWD model with IFNs based on MADM.

4. A novel three-way decision method based on multiple attribute decision making with IFNs

In the previous section, we set up the TWD model for only one attribute; we have not yet discussed how to integrate the TWD results of multiple attributes in MADM problems. In this section, we explore the integration of relative loss functions under different attributes and calculate the conditional probability with the grey relational degree. In the end, we give a TWD method based on MADM with IFNs.


Table 13
The expansions of the integrated loss function f(x_i).

        C*                                                                  ¬C*
a_P     (0, 1)                                                              (1 − ∏_{j=1}^n (1 − v_ij)^{δ_ij}, ∏_{j=1}^n u_ij^{δ_ij})
a_B     (1 − ∏_{j=1}^n (1 − u_ij)^{θ_j δ_ij}, ∏_{j=1}^n v_ij^{θ_j δ_ij})    (1 − ∏_{j=1}^n (1 − v_ij)^{θ_j δ_ij}, ∏_{j=1}^n u_ij^{θ_j δ_ij})
a_N     (1 − ∏_{j=1}^n (1 − u_ij)^{δ_ij}, ∏_{j=1}^n v_ij^{δ_ij})            (0, 1)

4.1. The aggregation of relative loss functions based on multiple attributes

Suppose there are a set of alternatives X = {X_1, X_2, ..., X_m} and a set of attributes G = {G_1, G_2, ..., G_n} in MADM, constructing an m × n matrix, where x_ij = (u_ij, v_ij) represents the evaluation value given by the decision makers with respect to alternative X_i under attribute G_j, and ω = (ω_1, ω_2, ..., ω_n) is the weight vector of the attributes, satisfying ω_j ∈ [0, 1] and ∑_{j=1}^n ω_j = 1. Because of the relativity of loss functions, we can get the relative loss functions of each alternative under each attribute based on subsection 3.2, expressed as f(x_ij), a 3 × 2 matrix shown as follows:

$$f(x_{ij}) = \begin{pmatrix} o & \bar{x}_{ij} \\ \theta_j x_{ij} & \theta_j \bar{x}_{ij} \\ x_{ij} & o \end{pmatrix},\qquad(31)$$

where 0 ≤ θ_j < 0.5.
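Under the standard operation rules for IFNs (complement x̄ = (v, u) and scalar multiplication θx = (1 − (1 − u)^θ, v^θ)), the relative loss function of Eq. (31) can be built mechanically. The sketch below is our own illustration; the helper names are not from the paper.

```python
def scalar_mult(theta, x):
    """IFN scalar multiplication: theta * (u, v) = (1 - (1-u)**theta, v**theta)."""
    u, v = x
    return (1 - (1 - u) ** theta, v ** theta)

def relative_loss_matrix(x, theta_j):
    """Relative loss function f(x_ij) of Eq. (31) as a 3x2 nested list;
    rows correspond to actions a_P, a_B, a_N, columns to states C, not-C."""
    u, v = x
    o = (0.0, 1.0)          # the zero loss, itself an IFN
    x_bar = (v, u)          # complement of x
    return [
        [o, x_bar],
        [scalar_mult(theta_j, x), scalar_mult(theta_j, x_bar)],
        [x, o],
    ]

# Example: x_11 = (0.6, 0.3) with theta_1 = 0.3 gives an a_B row of
# approximately ((0.24, 0.70), (0.10, 0.86)), matching Table 15.
print(relative_loss_matrix((0.6, 0.3), 0.3))
```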

Because unreasonable values occur frequently in practical MADM owing to decision makers' bias or limited knowledge, the PA operator [33,36] is often used to eliminate the influence of such values. Moreover, the weight ω_j indicates the importance degree of attribute G_j in MADM, and the relative loss function f(x_ij) denotes the relative losses of taking different actions on alternative X_i under attribute G_j. Consequently, to obtain a fairer loss integration result for alternative X_i, we integrate the relative loss functions under the different attributes with the IFPWA operator [33]. The specific integration method is as follows:

$$f(x_i) = \oplus_{j=1}^n \delta_{ij} f(x_{ij}) = \begin{pmatrix} o & \oplus_{j=1}^n \delta_{ij}\bar{x}_{ij} \\ \oplus_{j=1}^n \theta_j\delta_{ij}x_{ij} & \oplus_{j=1}^n \theta_j\delta_{ij}\bar{x}_{ij} \\ \oplus_{j=1}^n \delta_{ij}x_{ij} & o \end{pmatrix},\qquad(32)$$

where $\delta_{ij} = \dfrac{\omega_j(1+T(x_{ij}))}{\sum_{j=1}^n \omega_j(1+T(x_{ij}))}$, $T(x_{ij}) = \sum_{k=1,k\neq j}^n \omega_k\,\mathrm{Sup}(x_{ij}, x_{ik})$ and $\mathrm{Sup}(x_{ij}, x_{ik}) = 1 - d_E(x_{ij}, x_{ik})$.
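The power weights δ_ij in Eq. (32) depend only on the support function and the attribute weights, so they can be computed independently of the loss aggregation itself. The following sketch (our own, with hypothetical helper names) mirrors the definitions of Sup, T and δ_ij.

```python
import math

def euclidean_distance(x, y):
    """Normalized Euclidean distance d_E between two IFNs."""
    return math.sqrt(((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) / 2)

def power_weights(row, omega):
    """Power weights delta_ij of Eq. (32) for one alternative.

    row   : list of IFN pairs (u_ij, v_ij) over the n attributes,
    omega : attribute weights summing to one."""
    n = len(row)
    # Support of x_ij by every x_ik (k != j), aggregated into T(x_ij).
    t = [sum(omega[k] * (1 - euclidean_distance(row[j], row[k]))
             for k in range(n) if k != j) for j in range(n)]
    total = sum(omega[j] * (1 + t[j]) for j in range(n))
    return [omega[j] * (1 + t[j]) / total for j in range(n)]

# Row of evaluation values for one alternative (X_1 in Table 14).
delta = power_weights([(0.6, 0.3), (0.2, 0.4), (0.3, 0.3), (0.1, 0.5)],
                      [0.22, 0.22, 0.36, 0.2])
print(delta, sum(delta))  # the delta_ij sum to 1
```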

The value θ_j is determined by the property of attribute G_j, which depends on the decision maker's perceptiveness, so the value of θ_j varies across real MADM problems. In particular, if θ_j = θ (j = 1, 2, ..., n), then the relative losses of delaying X_i can be restated as λ_BP = θ ⊕_{j=1}^n δ_ij x_ij and λ_BQ = θ ⊕_{j=1}^n δ_ij x̄_ij. The integrated relative loss function f(x_i) is then the equivalent of the relative loss function educed from the comprehensive evaluation value ⊕_{j=1}^n δ_ij x_ij. That is to say, when the decision makers believe that the risk avoidance coefficients of the different attributes are the same, we can first use the IFPWA operator to integrate the evaluation values of the different attributes, and then compute the relative losses and thresholds based on the integrated result.

Then, we bring the operation rules of IFNs into f(x_i) so that the expansions of the relative loss functions can be obtained, which are displayed in Table 13. Obviously, they still satisfy the conditions λ*_PP ≤ λ*_BP < λ*_NP and λ*_NQ ≤ λ*_BQ < λ*_PQ, where λ*_PP = (0, 1), λ*_BP = (1 − ∏_{j=1}^n (1 − u_ij)^{θ_j δ_ij}, ∏_{j=1}^n v_ij^{θ_j δ_ij}), λ*_NP = (1 − ∏_{j=1}^n (1 − u_ij)^{δ_ij}, ∏_{j=1}^n v_ij^{δ_ij}), λ*_PQ = (1 − ∏_{j=1}^n (1 − v_ij)^{δ_ij}, ∏_{j=1}^n u_ij^{δ_ij}), λ*_BQ = (1 − ∏_{j=1}^n (1 − v_ij)^{θ_j δ_ij}, ∏_{j=1}^n u_ij^{θ_j δ_ij}) and λ*_NQ = (0, 1).

For the alternative X_i, the expected losses of taking the actions a• (• = P, B, N) can be calculated as follows:

$$R(a_P|[X_i]) = \left(1 - \Big(\prod_{j=1}^n (1-v_{ij})^{\delta_{ij}}\Big)^{1-\Pr(C^*|[X_i])},\ \Big(\prod_{j=1}^n u_{ij}^{\delta_{ij}}\Big)^{1-\Pr(C^*|[X_i])}\right)\qquad(33)$$

$$R(a_B|[X_i]) = \left(1 - \Big(\prod_{j=1}^n (1-u_{ij})^{\theta_j\delta_{ij}}\Big)^{\Pr(C^*|[X_i])}\Big(\prod_{j=1}^n (1-v_{ij})^{\theta_j\delta_{ij}}\Big)^{1-\Pr(C^*|[X_i])},\ \Big(\prod_{j=1}^n v_{ij}^{\theta_j\delta_{ij}}\Big)^{\Pr(C^*|[X_i])}\Big(\prod_{j=1}^n u_{ij}^{\theta_j\delta_{ij}}\Big)^{1-\Pr(C^*|[X_i])}\right)\qquad(34)$$

$$R(a_N|[X_i]) = \left(1 - \Big(\prod_{j=1}^n (1-u_{ij})^{\delta_{ij}}\Big)^{\Pr(C^*|[X_i])},\ \Big(\prod_{j=1}^n v_{ij}^{\delta_{ij}}\Big)^{\Pr(C^*|[X_i])}\right)\qquad(35)$$


where C* and ¬C* represent the two overall states over all attributes, i.e., whether or not the alternative X_i possesses the decision attribute, and Pr(C*|[X_i]) expresses the conditional probability of the alternative X_i belonging to the state C*. Following the analysis of subsection 3.3, the decision rules of alternative X_i are constructed based on the classification rules (P3)-(N3), displayed as follows:

(P̂i) If I(R(a_P|[X_i])) ≤ I(R(a_B|[X_i])) and I(R(a_P|[X_i])) ≤ I(R(a_N|[X_i])), adopt X_i ∈ POS(C*);
(B̂i) If I(R(a_B|[X_i])) ≤ I(R(a_P|[X_i])) and I(R(a_B|[X_i])) ≤ I(R(a_N|[X_i])), adopt X_i ∈ BND(C*);
(N̂i) If I(R(a_N|[X_i])) ≤ I(R(a_P|[X_i])) and I(R(a_N|[X_i])) ≤ I(R(a_B|[X_i])), adopt X_i ∈ NEG(C*),

where I(R(a•|[X_i])) (• = P, B, N) denotes the ideal positive degree of the expected loss R(a•|[X_i]) for the alternative X_i.

If u_ij + v_ij = 1 (i = 1, 2, ..., m; j = 1, 2, ..., n), then based on Theorem 11 and the decision rules (P̂i)-(N̂i) we can get two groups of TWD thresholds, one from the positive viewpoint and one from the negative viewpoint.

(1) From an optimistic perspective, the thresholds of the TWDs are obtained based on the decision rules (P1)-(N1) and formulas (22)-(24), displayed as follows:

$$\alpha_1^i = \frac{\sum_{j=1}^n (1-\theta_j)\delta_{ij}\lg(1-v_{ij})}{\sum_{j=1}^n \theta_j\delta_{ij}\lg(1-u_{ij}) + \sum_{j=1}^n (1-\theta_j)\delta_{ij}\lg(1-v_{ij})};$$

$$\beta_1^i = \frac{\sum_{j=1}^n \theta_j\delta_{ij}\lg(1-v_{ij})}{\sum_{j=1}^n (1-\theta_j)\delta_{ij}\lg(1-u_{ij}) + \sum_{j=1}^n \theta_j\delta_{ij}\lg(1-v_{ij})};$$

$$\gamma_1^i = \frac{\sum_{j=1}^n \delta_{ij}\lg(1-v_{ij})}{\sum_{j=1}^n \delta_{ij}\lg(1-u_{ij}) + \sum_{j=1}^n \delta_{ij}\lg(1-v_{ij})}.$$

Then, the decision rules of alternative X_i are simplified as follows:

(P1i) If Pr(C*|[X_i]) ≥ α1^i and Pr(C*|[X_i]) ≥ γ1^i, adopt X_i ∈ POS(C*);
(B1i) If Pr(C*|[X_i]) ≤ α1^i and Pr(C*|[X_i]) ≥ β1^i, adopt X_i ∈ BND(C*);
(N1i) If Pr(C*|[X_i]) ≤ β1^i and Pr(C*|[X_i]) ≤ γ1^i, adopt X_i ∈ NEG(C*).

(2) From a pessimistic perspective, the thresholds of the TWDs are obtained based on the decision rules (P2)-(N2) and formulas (25)-(27), displayed as follows:

$$\alpha_2^i = \frac{\sum_{j=1}^n (1-\theta_j)\delta_{ij}\lg u_{ij}}{\sum_{j=1}^n (1-\theta_j)\delta_{ij}\lg u_{ij} + \sum_{j=1}^n \theta_j\delta_{ij}\lg v_{ij}};$$

$$\beta_2^i = \frac{\sum_{j=1}^n \theta_j\delta_{ij}\lg u_{ij}}{\sum_{j=1}^n \theta_j\delta_{ij}\lg u_{ij} + \sum_{j=1}^n (1-\theta_j)\delta_{ij}\lg v_{ij}};$$

$$\gamma_2^i = \frac{\sum_{j=1}^n \delta_{ij}\lg u_{ij}}{\sum_{j=1}^n \delta_{ij}\lg u_{ij} + \sum_{j=1}^n \delta_{ij}\lg v_{ij}}.$$

Then, the decision rules of alternative X_i are simplified as follows:

(P2i) If Pr(C*|[X_i]) ≥ α2^i and Pr(C*|[X_i]) ≥ γ2^i, adopt X_i ∈ POS(C*);
(B2i) If Pr(C*|[X_i]) ≤ α2^i and Pr(C*|[X_i]) ≥ β2^i, adopt X_i ∈ BND(C*);
(N2i) If Pr(C*|[X_i]) ≤ β2^i and Pr(C*|[X_i]) ≤ γ2^i, adopt X_i ∈ NEG(C*).

Because u_ij + v_ij = 1 (i = 1, 2, ..., m; j = 1, 2, ..., n), it is obvious that α1^i = α2^i, β1^i = β2^i and γ1^i = γ2^i; hence the decision rules (P1i)-(N1i) are the same as the decision rules (P2i)-(N2i), as well as the decision rules (P̂i)-(N̂i). In fact, when u_ij + v_ij = 1, the IFN becomes a fuzzy number [32], and then these decision rules are also suitable for the TWD model with fuzzy numbers proposed by Jia and Liu [9].
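The two groups of thresholds share one algebraic skeleton, differing only in whether lg u_ij and lg v_ij or lg(1 − v_ij) and lg(1 − u_ij) are plugged in, which makes the coincidence under u_ij + v_ij = 1 easy to verify numerically. The sketch below is our own illustration with hypothetical helper names; it assumes all u_ij and v_ij lie strictly between 0 and 1 so the logarithms are finite.

```python
import math

def thresholds(row, theta, delta, pessimistic=False):
    """Per-alternative thresholds (alpha^i, beta^i, gamma^i).

    pessimistic=False gives alpha_1^i, beta_1^i, gamma_1^i (positive
    viewpoint); pessimistic=True gives alpha_2^i, beta_2^i, gamma_2^i."""
    if pessimistic:
        p = [math.log10(u) for u, v in row]       # lg u_ij
        q = [math.log10(v) for u, v in row]       # lg v_ij
    else:
        p = [math.log10(1 - v) for u, v in row]   # lg(1 - v_ij)
        q = [math.log10(1 - u) for u, v in row]   # lg(1 - u_ij)
    n = len(row)
    sp = lambda w: sum(w[j] * delta[j] * p[j] for j in range(n))
    sq = lambda w: sum(w[j] * delta[j] * q[j] for j in range(n))
    one_minus = [1 - t for t in theta]
    ones = [1.0] * n
    alpha = sp(one_minus) / (sp(one_minus) + sq(theta))
    beta = sp(theta) / (sp(theta) + sq(one_minus))
    gamma = sp(ones) / (sp(ones) + sq(ones))
    return alpha, beta, gamma

# For fuzzy-number data (u + v = 1) the two viewpoints coincide:
row = [(0.6, 0.4), (0.3, 0.7)]
theta, delta = [0.3, 0.2], [0.5, 0.5]
print(thresholds(row, theta, delta, pessimistic=False))
print(thresholds(row, theta, delta, pessimistic=True))   # same numbers
```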


4.2. The calculation of the conditional probability with grey relational analysis

From the above analysis, the conditional probability is also an important component of TWDs. We often assume the conditional probability is fixed, but in real MADM each alternative often has a different conditional probability. In addition, in MADM it is not possible to directly determine or assign the conditional probability, because one of the prerequisites for calculating it is a decision attribute, which does not exist in MADM problems. So we need to find a new way to determine the conditional probability.

Facing this challenge, we are enlightened by Liang et al.'s viewpoint [16] of calculating the conditional probability of each alternative with the aid of TOPSIS, and use the PIS and the NIS to describe the set of states {C*, ¬C*}, which indicate whether or not the alternative X has the decision attribute. The final relative closeness degree implies the conditional probability of the alternative X. However, TOPSIS, taking distance as its scale, can only reflect the positional relationship between data curves [26]. In the decision-making process, input data often fluctuate greatly, with no typical distribution rule, because of environmental impact, human factors or limited time. It is better to embody the importance of an alternative through the trend difference of the data sequences, for which TOPSIS is not appropriate [29].

Fortunately, the advantage of the grey relational degree [7,29] is that it analyzes the trend difference of data series, and it is a good measure of the similarity between curve shapes. The closer the curve shapes, the greater the correlation degree between the corresponding data series; thus a greater grey relational degree between an alternative and the ideal alternative indicates that the alternative is closer to the ideal one. Besides, Wei [30] used the grey relational degree to determine the attribute weights in MADM, which further shows that we can use the grey relational degree to predict the conditional probability. The specific process is as follows.

(1) Determine the PIS and the NIS.

Typically, we choose the largest attribute value as the PIS and the smallest attribute value as the NIS [51]. According to Definition 2.4, we compare the ideal positive degrees I(x_ij) of the attribute values x_ij to pick out the maximum and minimum attribute values, so as to decide the PIS and the NIS:

$$x_j^+ = (u_j^+, v_j^+) = \max_i(x_{ij}),\qquad(36)$$

$$x_j^- = (u_j^-, v_j^-) = \min_i(x_{ij}).\qquad(37)$$

(2) Calculate the grey relational coefficient η_ij^+ between the alternative X_i (i = 1, 2, ..., m) and the PIS with respect to the attribute G_j (j = 1, 2, ..., n):

$$\eta_{ij}^+ = \frac{m^+ + \xi M^+}{d_{ij}^+ + \xi M^+},\qquad(38)$$

where $d_{ij}^+ = d_E(x_{ij}, x_j^+) = \sqrt{\dfrac{(u_{ij}-u_j^+)^2 + (v_{ij}-v_j^+)^2}{2}}$, $m^+ = \min_i\min_j d_{ij}^+$, $M^+ = \max_i\max_j d_{ij}^+$, and ξ denotes the identification coefficient with 0 < ξ < 1, its general value being 0.5.

Then we calculate the grey relational coefficient η_ij^- between the alternative X_i (i = 1, 2, ..., m) and the NIS with respect to the attribute G_j (j = 1, 2, ..., n):

$$\eta_{ij}^- = \frac{m^- + \xi M^-}{d_{ij}^- + \xi M^-},\qquad(39)$$

where $d_{ij}^- = d_E(x_{ij}, x_j^-) = \sqrt{\dfrac{(u_{ij}-u_j^-)^2 + (v_{ij}-v_j^-)^2}{2}}$, $m^- = \min_i\min_j d_{ij}^-$ and $M^- = \max_i\max_j d_{ij}^-$.

(3) Calculate the grey relation degrees of the alternative X_i (i = 1, 2, ..., m) from the PIS and the NIS, respectively:

the grey relation degree from the PIS: $\eta_i^+ = \sum_{j=1}^n \omega_{ij}\eta_{ij}^+$;

the grey relation degree from the NIS: $\eta_i^- = \sum_{j=1}^n \omega_{ij}\eta_{ij}^-$.

(4) Compute the relative closeness of the grey relation of each alternative:

$$R_i = \frac{\eta_i^+}{\eta_i^+ + \eta_i^-}.\qquad(40)$$

Therefore, we can use R_i to represent the probability of the alternative X_i (i = 1, 2, ..., m) belonging to the state C*, which means that the conditional probability of the alternative X_i is Pr(C*|[X_i]) = R_i. It should be noted that if there is only one alternative, the above method is no longer applicable; in that case, the maximum and minimum IFNs are selected as the PIS and the NIS respectively [51], i.e., x_j^+ = (1, 0) and x_j^- = (0, 1). However, in real life we rarely encounter MADM problems involving only one alternative, so we usually have several alternatives among which to decide.
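The whole of this subsection condenses into a few lines of code. In the sketch below (our own, with hypothetical names), the PIS and the NIS are selected column-wise by the ideal positive degree, which we take to be the closeness 1 − d_E(x, (1, 0)) to the ideal IFN, in line with the degrees used in Eqs. (28)-(30); Definition 2.4 itself is not reproduced in this section, so this reading is an assumption.

```python
import math

def euclidean_distance(x, y):
    """Normalized Euclidean distance d_E between two IFNs."""
    return math.sqrt(((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) / 2)

def ideal_positive_degree(x):
    """Ideal positive degree of an IFN: closeness to the ideal IFN (1, 0)."""
    return 1 - euclidean_distance(x, (1.0, 0.0))

def conditional_probabilities(matrix, omega, xi=0.5):
    """Grey-relational conditional probabilities Pr(C*|[X_i]), Eqs. (36)-(40).

    matrix : m x n list of IFN pairs (u_ij, v_ij)
    omega  : attribute weight vector
    xi     : identification coefficient, usually 0.5"""
    m, n = len(matrix), len(matrix[0])
    # (1) Column-wise PIS and NIS, ranked by the ideal positive degree.
    pis = [max((matrix[i][j] for i in range(m)), key=ideal_positive_degree)
           for j in range(n)]
    nis = [min((matrix[i][j] for i in range(m)), key=ideal_positive_degree)
           for j in range(n)]

    def relation_degree(ideal):
        # (2) Grey relational coefficients against the given ideal solution.
        d = [[euclidean_distance(matrix[i][j], ideal[j]) for j in range(n)]
             for i in range(m)]
        lo = min(min(row) for row in d)          # m+ (resp. m-)
        hi = max(max(row) for row in d)          # M+ (resp. M-)
        # (3) Weighted grey relation degree of each alternative.
        return [sum(omega[j] * (lo + xi * hi) / (d[i][j] + xi * hi)
                    for j in range(n)) for i in range(m)]

    eta_pos, eta_neg = relation_degree(pis), relation_degree(nis)
    # (4) Relative closeness R_i, taken as Pr(C*|[X_i]).
    return [ep / (ep + en) for ep, en in zip(eta_pos, eta_neg)]
```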


4.3. The process of the three-way decision model with IFNs under MADM

In order to use the TWD model to solve MADM problems, in this part we propose a new TWD model with IFNs. First, we briefly summarize the MADM problem; then we present the process of the new TWD model in detail.

Suppose there are m alternatives forming a collection X = {X_1, X_2, ..., X_m} and n attributes forming an attribute set G = {G_1, G_2, ..., G_n}. The weight vector of the attributes is ω = (ω_1, ω_2, ..., ω_n), satisfying ω_j ∈ [0, 1] and ∑_{j=1}^n ω_j = 1. x_ij = (u_ij, v_ij) represents the evaluation value of alternative X_i under attribute G_j, which is an IFN satisfying u_ij ∈ [0, 1], v_ij ∈ [0, 1] and 0 ≤ u_ij + v_ij ≤ 1. Based on the features of the attributes, the decision makers determine the risk avoidance coefficients θ_j (j = 1, 2, ..., n). Thus, we can get a decision matrix A = [x_ij]_{m×n}. Then, we decide the action for each alternative according to the above information. The key steps of the new TWD model are summarized as follows.

Step 1. Based on formula (31), calculate the relative loss functions f(x_ij) educed from the evaluation values x_ij (i = 1, 2, ..., m; j = 1, 2, ..., n).

Step 2. Based on formula (32), integrate the relative loss functions f(x_ij) of alternative X_i under the different attributes to get an integrated loss result f(x_i) (i = 1, 2, ..., m). The expansions of the integrated loss function f(x_i) are displayed in Table 13.

Step 3. According to subsection 4.2, calculate the conditional probability Pr(C*|[X_i]) of the alternative X_i (i = 1, 2, ..., m).

Step 4. Based on formulas (33)-(35) and Definition 2.3, compute the expected losses R(a•|[X_i]) (• = P, B, N) and their ideal positive degrees I(R(a•|[X_i])) (• = P, B, N) for each alternative X_i.

Step 5. According to the decision rules (P̂i)-(N̂i), determine the decision of each alternative X_i (i = 1, 2, ..., m).

Furthermore, we use an algorithm to describe the process of the proposed TWD model with IFNs under MADM, detailed in Algorithm 1.

5. A practical example on supplier selection

Supplier selection is a critical MADM problem, and many traditional MADM methods have been applied to it [23,27,45,47]. However, these existing methods often ignore the need for further evaluation in the actual decision-making process, which easily leads to overly harsh decision results and may not maximize the benefits of enterprises. In order to reduce the potential risk or loss, enterprises often further inspect or evaluate suppliers, so it is necessary to add another option to the decision results: further investigation. This makes the decision-making process more prudent and rigorous, and also allows a better and more comprehensive inspection of suppliers. In the following, we use a practical example on selecting suppliers to demonstrate the new TWD model proposed in this paper.

A core manufacturing enterprise wants to select suppliers of product components, and there are ten component suppliers, which can be seen as a set of alternatives X = {X_1, X_2, ..., X_10}. Based on past experience and existing research, the manufacturer has formulated a relevant index evaluation system, which mainly includes four aspects: technical level (G_1), service level (G_2), business ability (G_3) and enterprise environment (G_4). Their weights are 0.22, 0.22, 0.36 and 0.2 respectively, constituting a weight vector ω = (0.22, 0.22, 0.36, 0.2), and their risk avoidance coefficient vector, given by experts, is θ = (0.3, 0.1, 0.4, 0.2)^T. The technical level mainly refers to product development ability, product quality and reliability; the service level mainly includes price, reputation and after-sales service satisfaction; the business ability mainly includes financial ability, supply ability, cooperation ability, development ability and economic benefits; the enterprise environment mainly refers to the economic and technological environment, the geographic environment and the compatibility of corporate cultures.

The manufacturer has called on the heads of departments to evaluate the ten suppliers according to the formulated index evaluation system. These evaluation values have been aggregated and analyzed, and four comprehensive satisfactions and dissatisfactions of each supplier X_i under the above four aspects G_j have been obtained. So, we can use the IFN x_ij = (u_ij, v_ij) to represent the comprehensive evaluation value of each supplier under each aspect, where the comprehensive satisfaction denotes the MD u_ij and the comprehensive dissatisfaction denotes the N-MD v_ij (i = 1, 2, ..., 10; j = 1, 2, 3, 4). They are displayed in Table 14. Then, we use the proposed TWD model to solve this problem.

5.1. The evaluation procedures

[Step 1] Based on formula (31) and Definition 2.5, we calculate the relative loss function values f(x_ij) educed from the evaluation values x_ij (i = 1, 2, ..., 10; j = 1, 2, 3, 4), shown in Table 15, where we use {C_j, ¬C_j} to indicate the two states of the attribute G_j (j = 1, 2, 3, 4).

[Step 2] Based on formula (32), we get the loss integration results f(x_i) of alternative X_i by integrating the relative loss functions f(x_ij) (i = 1, 2, ..., 10; j = 1, 2, 3, 4).
The loss integration results are shown in Table 16, where C* and ¬C* represent the two overall states over all attributes, i.e., whether the supplier is worth choosing or not.


Algorithm 1 The algorithm for the proposed TWD model with IFNs.

Input: (1) The collection of alternatives X = {X_1, X_2, ..., X_m} and the attribute set G = {G_1, G_2, ..., G_n}; (2) the decision matrix A = [x_ij]_{m×n}, the risk avoidance coefficient vector θ = (θ_1, θ_2, ..., θ_n), the weight vector of attributes ω = (ω_1, ω_2, ..., ω_n) and the identification coefficient ξ; (3) the range set of IFNs x_ij = (u_ij, v_ij), u_ij ∈ [0, 1], v_ij ∈ [0, 1] and 0 ≤ u_ij + v_ij ≤ 1.
Output: The decision of each alternative X_i (i = 1, 2, ..., m).

begin
  for i = 1 to m do
    for j = 1 to n, k = 1 to n and k ≠ j do
      Compute Sup(x_ij, x_ik) = 1 − d_E(x_ij, x_ik) and T(x_ij) = Σ_{k=1,k≠j}^n ω_k Sup(x_ij, x_ik).
    end
    for j = 1 to n do
      Compute δ_ij = ω_j(1 + T(x_ij)) / Σ_{j=1}^n ω_j(1 + T(x_ij)).
    end
  end
  for j = 1 to n do
    Calculate x_j^+ = (u_j^+, v_j^+) = max_i(x_ij) and x_j^- = (u_j^-, v_j^-) = min_i(x_ij).
  end
  for i = 1 to m and j = 1 to n do
    Calculate d_ij^+ = d_E(x_ij, x_j^+) = sqrt(((u_ij − u_j^+)^2 + (v_ij − v_j^+)^2)/2) and d_ij^- = d_E(x_ij, x_j^-) = sqrt(((u_ij − u_j^-)^2 + (v_ij − v_j^-)^2)/2).
  end
  Calculate m^+ = min_i min_j d_ij^+, M^+ = max_i max_j d_ij^+, m^- = min_i min_j d_ij^- and M^- = max_i max_j d_ij^-.
  for i = 1 to m and j = 1 to n do
    Calculate η_ij^+ = (m^+ + ξM^+)/(d_ij^+ + ξM^+) and η_ij^- = (m^- + ξM^-)/(d_ij^- + ξM^-).
  end
  for i = 1 to m do
    Calculate η_i^+ = Σ_{j=1}^n ω_ij η_ij^+ and η_i^- = Σ_{j=1}^n ω_ij η_ij^-.
    Calculate Pr(C*|[X_i]) = η_i^+ / (η_i^+ + η_i^-).
  end
  for i = 1 to m do
    Calculate u_P^i = 1 − (Π_{j=1}^n (1 − v_ij)^{δ_ij})^{1−Pr(C*|[X_i])} and v_P^i = (Π_{j=1}^n u_ij^{δ_ij})^{1−Pr(C*|[X_i])};
    u_B^i = 1 − (Π_{j=1}^n (1 − u_ij)^{θ_j δ_ij})^{Pr(C*|[X_i])} (Π_{j=1}^n (1 − v_ij)^{θ_j δ_ij})^{1−Pr(C*|[X_i])} and v_B^i = (Π_{j=1}^n v_ij^{θ_j δ_ij})^{Pr(C*|[X_i])} (Π_{j=1}^n u_ij^{θ_j δ_ij})^{1−Pr(C*|[X_i])};
    u_N^i = 1 − (Π_{j=1}^n (1 − u_ij)^{δ_ij})^{Pr(C*|[X_i])} and v_N^i = (Π_{j=1}^n v_ij^{δ_ij})^{Pr(C*|[X_i])}.
    Calculate I(R(a_P|[X_i])) = 1 − sqrt(((1 − u_P^i)^2 + (v_P^i)^2)/2), I(R(a_B|[X_i])) = 1 − sqrt(((1 − u_B^i)^2 + (v_B^i)^2)/2) and I(R(a_N|[X_i])) = 1 − sqrt(((1 − u_N^i)^2 + (v_N^i)^2)/2).
    if I(R(a_P|[X_i])) ≤ I(R(a_B|[X_i])) and I(R(a_P|[X_i])) ≤ I(R(a_N|[X_i])) then adopt X_i ∈ POS(C*);
    else if I(R(a_B|[X_i])) ≤ I(R(a_P|[X_i])) and I(R(a_B|[X_i])) ≤ I(R(a_N|[X_i])) then adopt X_i ∈ BND(C*);
    else adopt X_i ∈ NEG(C*).
    end if
  end
end
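Steps 4 and 5 of Algorithm 1, namely the expected-loss degrees of Eqs. (33)-(35) and the final classification, can be rendered as follows. This is our sketch (Python 3.8+ for math.prod), not the authors' code; it takes the power weights δ_ij and the conditional probability Pr(C*|[X_i]) produced by the earlier sketches as inputs.

```python
import math

def expected_loss_degrees(row, theta, delta, pr):
    """Ideal positive degrees of the expected losses of Eqs. (33)-(35)
    for one alternative.

    row   : IFN pairs (u_ij, v_ij) over the n attributes
    theta : risk avoidance coefficients theta_j
    delta : power weights delta_ij from Eq. (32)
    pr    : conditional probability Pr(C*|[X_i])"""
    n = len(row)
    u, v = [x[0] for x in row], [x[1] for x in row]
    # Expected loss of acceptance, R(a_P|[X_i]).
    u_p = 1 - math.prod((1 - v[j]) ** delta[j] for j in range(n)) ** (1 - pr)
    v_p = math.prod(u[j] ** delta[j] for j in range(n)) ** (1 - pr)
    # Expected loss of delay, R(a_B|[X_i]).
    u_b = 1 - (math.prod((1 - u[j]) ** (theta[j] * delta[j]) for j in range(n)) ** pr
               * math.prod((1 - v[j]) ** (theta[j] * delta[j]) for j in range(n)) ** (1 - pr))
    v_b = (math.prod(v[j] ** (theta[j] * delta[j]) for j in range(n)) ** pr
           * math.prod(u[j] ** (theta[j] * delta[j]) for j in range(n)) ** (1 - pr))
    # Expected loss of rejection, R(a_N|[X_i]).
    u_n = 1 - math.prod((1 - u[j]) ** delta[j] for j in range(n)) ** pr
    v_n = math.prod(v[j] ** delta[j] for j in range(n)) ** pr
    ideal = lambda uu, vv: 1 - math.sqrt(((1 - uu) ** 2 + vv ** 2) / 2)
    return ideal(u_p, v_p), ideal(u_b, v_b), ideal(u_n, v_n)

def decide(row, theta, delta, pr):
    """Decision rules (P^i)-(N^i): the action whose expected loss has
    the smallest ideal positive degree determines the region."""
    i_p, i_b, i_n = expected_loss_degrees(row, theta, delta, pr)
    return min((("POS", i_p), ("BND", i_b), ("NEG", i_n)), key=lambda t: t[1])[0]
```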

Table 14
The aggregated comprehensive values.

        G1            G2            G3            G4
X1      (0.6, 0.3)    (0.2, 0.4)    (0.3, 0.3)    (0.1, 0.5)
X2      (0.4, 0.5)    (0.7, 0.2)    (0.6, 0.4)    (0.5, 0.3)
X3      (0.5, 0.3)    (0.1, 0.6)    (0.4, 0.2)    (0.3, 0.4)
X4      (0.7, 0.2)    (0.9, 0.1)    (0.3, 0.5)    (0.8, 0.2)
X5      (0.6, 0.1)    (0.6, 0.2)    (0.3, 0.3)    (0.5, 0.2)
X6      (0.1, 0.8)    (0.3, 0.5)    (0.2, 0.4)    (0.5, 0.1)
X7      (0.4, 0.5)    (0.3, 0.6)    (0.4, 0.2)    (0.3, 0.5)
X8      (0.2, 0.6)    (0.6, 0.3)    (0.8, 0.1)    (0.4, 0.2)
X9      (0.3, 0.4)    (0.8, 0.1)    (0.5, 0.4)    (0.6, 0.3)
X10     (0.4, 0.1)    (0.1, 0.3)    (0.3, 0.4)    (0.5, 0.5)

Table 15
The relative loss function values educed from the evaluation values (columns C_j and ¬C_j give the losses under the two states of attribute G_j).

          C1            ¬C1           C2            ¬C2           C3            ¬C3           C4            ¬C4
X1   a_P  (0, 1)        (0.3, 0.6)    (0, 1)        (0.4, 0.2)    (0, 1)        (0.3, 0.3)    (0, 1)        (0.5, 0.1)
     a_B  (0.24, 0.70)  (0.10, 0.86)  (0.02, 0.91)  (0.05, 0.85)  (0.13, 0.62)  (0.13, 0.62)  (0.02, 0.87)  (0.13, 0.63)
     a_N  (0.6, 0.3)    (0, 1)        (0.2, 0.4)    (0, 1)        (0.3, 0.3)    (0, 1)        (0.1, 0.5)    (0, 1)
X2   a_P  (0, 1)        (0.5, 0.4)    (0, 1)        (0.2, 0.7)    (0, 1)        (0.4, 0.6)    (0, 1)        (0.3, 0.5)
     a_B  (0.14, 0.81)  (0.19, 0.76)  (0.11, 0.85)  (0.02, 0.96)  (0.31, 0.69)  (0.18, 0.82)  (0.13, 0.79)  (0.07, 0.87)
     a_N  (0.4, 0.5)    (0, 1)        (0.7, 0.2)    (0, 1)        (0.6, 0.4)    (0, 1)        (0.5, 0.3)    (0, 1)
X3   a_P  (0, 1)        (0.3, 0.5)    (0, 1)        (0.6, 0.1)    (0, 1)        (0.2, 0.4)    (0, 1)        (0.4, 0.3)
     a_B  (0.10, 0.81)  (0.24, 0.70)  (0.01, 0.95)  (0.09, 0.79)  (0.18, 0.53)  (0.09, 0.69)  (0.07, 0.83)  (0.10, 0.79)
     a_N  (0.5, 0.3)    (0, 1)        (0.1, 0.6)    (0, 1)        (0.4, 0.2)    (0, 1)        (0.3, 0.4)    (0, 1)
X4   a_P  (0, 1)        (0.2, 0.7)    (0, 1)        (0.1, 0.9)    (0, 1)        (0.5, 0.3)    (0, 1)        (0.2, 0.8)
     a_B  (0.30, 0.62)  (0.06, 0.90)  (0.21, 0.79)  (0.01, 0.99)  (0.13, 0.76)  (0.24, 0.62)  (0.28, 0.72)  (0.04, 0.96)
     a_N  (0.7, 0.2)    (0, 1)        (0.9, 0.1)    (0, 1)        (0.3, 0.5)    (0, 1)        (0.8, 0.2)    (0, 1)
X5   a_P  (0, 1)        (0.1, 0.6)    (0, 1)        (0.2, 0.6)    (0, 1)        (0.3, 0.3)    (0, 1)        (0.2, 0.5)
     a_B  (0.24, 0.50)  (0.03, 0.86)  (0.09, 0.85)  (0.02, 0.95)  (0.13, 0.62)  (0.13, 0.62)  (0.13, 0.72)  (0.04, 0.87)
     a_N  (0.6, 0.1)    (0, 1)        (0.6, 0.2)    (0, 1)        (0.3, 0.3)    (0, 1)        (0.5, 0.2)    (0, 1)
X6   a_P  (0, 1)        (0.8, 0.1)    (0, 1)        (0.5, 0.3)    (0, 1)        (0.4, 0.2)    (0, 1)        (0.1, 0.5)
     a_B  (0.03, 0.93)  (0.38, 0.50)  (0.04, 0.93)  (0.07, 0.89)  (0.09, 0.69)  (0.18, 0.53)  (0.13, 0.63)  (0.02, 0.87)
     a_N  (0.1, 0.8)    (0, 1)        (0.3, 0.5)    (0, 1)        (0.2, 0.4)    (0, 1)        (0.5, 0.1)    (0, 1)
X7   a_P  (0, 1)        (0.5, 0.4)    (0, 1)        (0.6, 0.3)    (0, 1)        (0.2, 0.4)    (0, 1)        (0.5, 0.3)
     a_B  (0.14, 0.81)  (0.19, 0.76)  (0.04, 0.95)  (0.09, 0.87)  (0.18, 0.53)  (0.09, 0.69)  (0.07, 0.87)  (0.13, 0.79)
     a_N  (0.4, 0.5)    (0, 1)        (0.3, 0.6)    (0, 1)        (0.4, 0.2)    (0, 1)        (0.3, 0.5)    (0, 1)
X8   a_P  (0, 1)        (0.6, 0.2)    (0, 1)        (0.3, 0.6)    (0, 1)        (0.1, 0.8)    (0, 1)        (0.2, 0.4)
     a_B  (0.06, 0.86)  (0.24, 0.62)  (0.09, 0.87)  (0.04, 0.95)  (0.47, 0.40)  (0.04, 0.91)  (0.10, 0.72)  (0.04, 0.83)
     a_N  (0.2, 0.6)    (0, 1)        (0.6, 0.3)    (0, 1)        (0.8, 0.1)    (0, 1)        (0.4, 0.2)    (0, 1)
X9   a_P  (0, 1)        (0.4, 0.3)    (0, 1)        (0.1, 0.8)    (0, 1)        (0.4, 0.5)    (0, 1)        (0.3, 0.6)
     a_B  (0.10, 0.76)  (0.14, 0.70)  (0.15, 0.79)  (0.01, 0.98)  (0.24, 0.69)  (0.18, 0.76)  (0.17, 0.79)  (0.07, 0.90)
     a_N  (0.3, 0.4)    (0, 1)        (0.8, 0.1)    (0, 1)        (0.5, 0.4)    (0, 1)        (0.6, 0.3)    (0, 1)
X10  a_P  (0, 1)        (0.1, 0.4)    (0, 1)        (0.3, 0.1)    (0, 1)        (0.4, 0.3)    (0, 1)        (0.5, 0.5)
     a_B  (0.14, 0.50)  (0.03, 0.76)  (0.01, 0.89)  (0.04, 0.79)  (0.13, 0.69)  (0.18, 0.62)  (0.13, 0.87)  (0.07, 0.90)
     a_N  (0.4, 0.1)    (0, 1)        (0.1, 0.3)    (0, 1)        (0.3, 0.4)    (0, 1)        (0.5, 0.5)    (0, 1)

[Step 3] According to the description of the practical example, we can regard the qualification of a supplier as the decision attribute. Then, based on subsection 4.2, we can get the conditional probability Pr(C*|[X_i]) of each alternative X_i (i = 1, 2, ..., 10), respectively (suppose the identification coefficient is ξ = 0.5):

Pr(C*|[X_1]) = 0.41, Pr(C*|[X_2]) = 0.54, Pr(C*|[X_3]) = 0.43, Pr(C*|[X_4]) = 0.61, Pr(C*|[X_5]) = 0.52,
Pr(C*|[X_6]) = 0.35, Pr(C*|[X_7]) = 0.43, Pr(C*|[X_8]) = 0.58, Pr(C*|[X_9]) = 0.55, Pr(C*|[X_10]) = 0.43.

[Step 4] Based on formulas (33)-(35) and Definition 2.3, the ideal positive degrees I(R(a•|[X_i])) (• = P, B, N) of the expected losses R(a•|[X_i]) (• = P, B, N) for each alternative X_i (i = 1, 2, ..., 10) are computed and depicted in Fig. 4, where I(•) abbreviates the ideal positive degree I(R(a•|[X_i])) (• = P, B, N).

[Step 5] According to the decision rules (P̂i)-(N̂i), the smallest ideal positive degree determines the decision of each alternative X_i (i = 1, 2, ..., 10). Fig. 4 depicts the comparisons between the ideal positive degrees. From Fig. 4, we can find that POS(C*) = {X_4, X_8}, BND(C*) = {X_1, X_2, X_3, X_5, X_7, X_9, X_10} and NEG(C*) = {X_6}. So, X_4 and X_8 are recommended as component suppliers, X_6 is not recommended, and the others require more information before further decisions can be made.
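To replay Steps 2-5 on the supplier data programmatically, the driver below can be used. It is our sketch: it assumes the helper functions power_weights, conditional_probabilities and decide from the earlier sketches are in scope, and, up to rounding and our reading of Definition 2.4, it should reproduce the conditional probabilities and the three-way partition reported above.

```python
# Evaluation matrix from Table 14 (rows X_1..X_10, columns G_1..G_4).
A = [
    [(0.6, 0.3), (0.2, 0.4), (0.3, 0.3), (0.1, 0.5)],
    [(0.4, 0.5), (0.7, 0.2), (0.6, 0.4), (0.5, 0.3)],
    [(0.5, 0.3), (0.1, 0.6), (0.4, 0.2), (0.3, 0.4)],
    [(0.7, 0.2), (0.9, 0.1), (0.3, 0.5), (0.8, 0.2)],
    [(0.6, 0.1), (0.6, 0.2), (0.3, 0.3), (0.5, 0.2)],
    [(0.1, 0.8), (0.3, 0.5), (0.2, 0.4), (0.5, 0.1)],
    [(0.4, 0.5), (0.3, 0.6), (0.4, 0.2), (0.3, 0.5)],
    [(0.2, 0.6), (0.6, 0.3), (0.8, 0.1), (0.4, 0.2)],
    [(0.3, 0.4), (0.8, 0.1), (0.5, 0.4), (0.6, 0.3)],
    [(0.4, 0.1), (0.1, 0.3), (0.3, 0.4), (0.5, 0.5)],
]
omega = [0.22, 0.22, 0.36, 0.2]      # attribute weights
theta = [0.3, 0.1, 0.4, 0.2]         # risk avoidance coefficients

pr = conditional_probabilities(A, omega, xi=0.5)        # Step 3
for i, row in enumerate(A):
    delta = power_weights(row, omega)                   # Step 2
    region = decide(row, theta, delta, pr[i])           # Steps 4-5
    print(f"X_{i + 1}: Pr = {pr[i]:.2f} -> {region}")
```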


Table 16
The loss integration results f(x_i).

              C*              ¬C*
X1    a_P     (0, 1)          (0.37, 0.25)
      a_B     (0.11, 0.74)    (0.11, 0.72)
      a_N     (0.33, 0.36)    (0, 1)
X2    a_P     (0, 1)          (0.37, 0.55)
      a_B     (0.19, 0.77)    (0.13, 0.84)
      a_N     (0.57, 0.34)    (0, 1)
X3    a_P     (0, 1)          (0.37, 0.29)
      a_B     (0.13, 0.70)    (0.09, 0.76)
      a_N     (0.35, 0.32)    (0, 1)
X4    a_P     (0, 1)          (0.30, 0.58)
      a_B     (0.22, 0.72)    (0.11, 0.82)
      a_N     (0.72, 0.23)    (0, 1)
X5    a_P     (0, 1)          (0.21, 0.46)
      a_B     (0.15, 0.66)    (0.07, 0.79)
      a_N     (0.49, 0.20)    (0, 1)
X6    a_P     (0, 1)          (0.51, 0.23)
      a_B     (0.07, 0.78)    (0.18, 0.65)
      a_N     (0.27, 0.37)    (0, 1)
X7    a_P     (0, 1)          (0.44, 0.35)
      a_B     (0.12, 0.74)    (0.12, 0.77)
      a_N     (0.36, 0.38)    (0, 1)
X8    a_P     (0, 1)          (0.30, 0.48)
      a_B     (0.24, 0.64)    (0.09, 0.83)
      a_N     (0.60, 0.22)    (0, 1)
X9    a_P     (0, 1)          (0.32, 0.51)
      a_B     (0.18, 0.75)    (0.12, 0.82)
      a_N     (0.58, 0.28)    (0, 1)
X10   a_P     (0, 1)          (0.35, 0.28)
      a_B     (0.11, 0.71)    (0.11, 0.73)
      a_N     (0.33, 0.29)    (0, 1)

Fig. 4. The ideal positive degrees of the expected losses for each alternative.


Table 17
The influence of the risk avoidance coefficients on the decision-making results.

θ_j (j = 1, 2, 3, 4)   POS(C*)                    BND(C*)                                        NEG(C*)
0                      ∅                          {X1, X2, X3, X4, X5, X6, X7, X8, X9, X10}      ∅
0.05                   ∅                          {X1, X2, X3, X4, X5, X6, X7, X8, X9, X10}      ∅
0.1                    ∅                          {X1, X2, X3, X4, X5, X6, X7, X8, X9, X10}      ∅
0.15                   ∅                          {X1, X2, X3, X4, X5, X6, X7, X8, X9, X10}      ∅
0.2                    {X4}                       {X1, X2, X3, X5, X6, X7, X8, X9, X10}          ∅
0.25                   {X4, X8}                   {X1, X2, X3, X5, X7, X9, X10}                  {X6}
0.3                    {X4, X5, X8, X9}           {X1, X2, X3, X7, X10}                          {X6}
0.35                   {X2, X4, X5, X8, X9}       {X1, X3, X7, X10}                              {X6}
0.4                    {X2, X4, X5, X8, X9}       {X3, X10}                                      {X1, X6, X7}
0.45                   {X2, X4, X5, X8, X9}       ∅                                              {X1, X3, X6, X7, X10}
0.5 ∼ 1.0              {X2, X4, X5, X8, X9}       ∅                                              {X1, X3, X6, X7, X10}

Table 18
The aggregated results of the alternatives by the IFPWA operator [33] and their ideal positive degrees.

        Aggregated result   Ideal positive degree   Ranking order
X1      (0.33, 0.36)        0.4616                  9
X2      (0.57, 0.34)        0.6125                  5
X3      (0.35, 0.32)        0.4880                  6
X4      (0.72, 0.23)        0.7416                  1
X5      (0.49, 0.20)        0.6164                  4
X6      (0.27, 0.37)        0.4232                  10
X7      (0.36, 0.38)        0.4713                  8
X8      (0.60, 0.22)        0.6783                  2
X9      (0.58, 0.28)        0.6424                  3
X10     (0.33, 0.29)        0.4854                  7

Table 19
The decision-making results of the different methods.

Method                                       Decision result
The MADM method by the IFPWA operator [33]   X4 ≻ X8 ≻ X9 ≻ X5 ≻ X2 ≻ X3 ≻ X10 ≻ X7 ≻ X1 ≻ X6
The proposed TWD model                       POS(C*) = {X4, X8}, BND(C*) = {X9, X5, X2, X3, X10, X7, X1}, NEG(C*) = {X6}

Assuming that the risk avoidance coefficients θ_j (j = 1, 2, 3, 4) are unknown, we now discuss their impact on the decision-making results. For ease of analysis, we set them all to the same value. The decision-making results are displayed in Table 17.

According to Table 17, when the risk avoidance coefficient satisfies 0 ≤ θ_j ≤ 0.15 (j = 1, 2, 3, 4), the decision result is BND(C*) = {X1, X2, X3, X4, X5, X6, X7, X8, X9, X10}, which indicates that the manufacturer hopes to further investigate all alternative suppliers. When 0.45 ≤ θ_j ≤ 1 (j = 1, 2, 3, 4), the decision result is POS(C*) = {X2, X4, X5, X8, X9} and NEG(C*) = {X1, X3, X6, X7, X10}, which indicates that the manufacturer is unwilling to take the risks or losses of a hesitant decision; in other words, the manufacturer does not want to invest more cost in further investigation. When 0.2 ≤ θ_j ≤ 0.4 (j = 1, 2, 3, 4), the decision-making results are distributed over all three domains. Moreover, we can draw the following conclusion: the larger the value of θ_j (j = 1, 2, 3, 4), the more alternatives are classified into the positive and negative domains and the fewer alternatives are classified into the boundary domain. This again confirms Theorem 3 and Theorem 7, and shows the validity of the parameter θ as a risk avoidance coefficient.

5.2. Comparison with other existing methods

To illustrate the effectiveness and practicability of the proposed TWD model with IFNs for MADM, we first compare it with a conventional MADM method. Because the loss functions under different attributes are integrated by the IFPWA operator [33], we use the IFPWA operator to aggregate the evaluation values displayed in Table 14. The aggregated results of the alternatives X_i (i = 1, 2, ..., 10) and their ideal positive degrees are shown in Table 18. Meanwhile, Table 19 shows the decision-making results of the MADM method by the IFPWA operator [33] and of the proposed TWD model.

From Table 19, it is easy to observe that the ranking order of the alternatives is X4 ≻ X8 ≻ X9 ≻ X5 ≻ X2 ≻ X3 ≻ X10 ≻ X7 ≻ X1 ≻ X6, which is consistent with the decision-making results of the proposed TWD model: the recommended suppliers X4 and X8 occupy the first and second places in the ranking order, the non-recommended supplier X6 occupies the last position, and the others lie in the middle. This shows the rationality of our proposed TWD model.

Then, we compare the characteristics of these two methods. (1) The conventional MADM method by the IFPWA operator [33] gives a ranking result, and the alternative in first place is often chosen as the best, while the proposed TWD model gives a


Table 20
The decision-making results of the proposed TWD model and Jia and Liu's method used in project investment [9] (each cell lists POS(C*) / BND(C*) / NEG(C*)).

Pr(C*|[X_i])   The proposed TWD model                                  Jia and Liu's method [9]
Unknown        {X1, X2, X4, X7} / {X5} / {X3, X6, X8}                  Not available
0.6            {X1, X2, X4, X5, X7} / {X3, X6} / {X8}                  {X1, X2, X4, X5, X7} / {X3, X6, X8} / ∅
0.5            {X1, X2, X4, X7} / {X5} / {X3, X6, X8}                  {X1, X2, X4, X7} / {X5, X6} / {X3, X8}
0.45           {X1, X2, X4, X7} / {X5} / {X3, X6, X8}                  {X1, X2, X4, X7} / {X5} / {X3, X6, X8}
0.25           {X1, X7} / {X2, X4} / {X3, X5, X6, X8}                  {X7} / {X1, X2, X4} / {X3, X5, X6, X8}
0.15           {X7} / {X1} / {X2, X3, X4, X5, X6, X8}                  ∅ / {X7} / {X1, X2, X3, X4, X5, X6, X8}

classification result, and the alternatives in the positive domain are considered acceptable. But the best alternative obtained by the IFPWA operator [33] is not necessarily an acceptable alternative; for example, when the values θ_j (j = 1, 2, 3, 4) are equal to 0.15, all the alternatives are classified into the boundary domain, which means that even the supplier X4 needs to be investigated further. (2) The proposed TWD model gives the action for each alternative and its corresponding semantic explanation, while the conventional MADM method by the IFPWA operator [33] cannot accomplish this task. For example, according to the minimum overall cost, the proposed TWD model recommends that we choose suppliers X4 and X8, but the conventional MADM method by the IFPWA operator [33] may directly ignore supplier X8 and only emphasize supplier X4. So, the proposed TWD model can prevent unknown causes from producing erroneous results in the evaluation process. (3) The proposed TWD model can adjust the relative loss functions of the alternatives in the light of the risk preference of the decision makers, so different decision environments may produce different decision-making results; the conventional MADM method by the IFPWA operator [33] does not consider the risk preference of the decision makers. Therefore, in terms of risk aversion, the proposed TWD model is better than the conventional MADM method by the IFPWA operator [33].

In order to further illustrate the advantages of the proposed TWD model, we apply it to the example of project investment in [9]. Because the evaluation values there are fuzzy numbers, a special case of IFNs, we use the fuzzy number to denote the MD and one minus the fuzzy number to denote the corresponding N-MD. Table 20 displays the comparison of the decision-making results obtained by the proposed TWD model and Jia and Liu's method [9].

It is easy to see from Table 20 that the decision-making results of Jia and Liu's method [9] are not available when the conditional probabilities are unknown, whereas those of the proposed TWD model can still be obtained: POS(C*) = {X1, X2, X4, X7}, BND(C*) = {X5} and NEG(C*) = {X3, X6, X8}. This demonstrates that the proposed TWD model can deal with MADM problems more reasonably, because there is no decision attribute in a MADM problem, which is one of the prerequisites for calculating the conditional probability; Jia and Liu's method [9] simply ignores the conditional probability, which is an important factor in TWD.

Assuming the conditional probabilities are known, we adopt the conditional probabilities given in the example of the project investment [9], i.e., Pr(C*|[X_i]) = 0.3 (i = 1, 2, 3), Pr(C*|[X_i]) = 0.45 (i = 4, 5, 6) and Pr(C*|[X_i]) = 0.5 (i = 7, 8). Then, we use the two methods to address this example again and get the corresponding decision-making results. The decision-making results of the proposed TWD model are POS(C*) = {X1, X4, X7}, BND(C*) = {X2, X5} and NEG(C*) = {X3, X6, X8}, while those of Jia and Liu's method [9] are POS(C*) = {X4, X7}, BND(C*) = {X1, X2, X5} and NEG(C*) = {X3, X6, X8}. So, we can find that the decision-making results of the two methods are not quite the same. If we suppose the conditional probabilities are the same constant, we get a similar conclusion, as is easy to see from Table 20. This is mainly due to the different integration methods used in integrating the relative loss functions.
Jia and Liu’s method [9] used the weighted averaging operator to integrate relative loss functions, while the proposed TWD model used the IFPWA operator which has the ability to eliminate the impact of unreasonable data on decision-making results [23,33]. Considering decision maker’s bias or limited knowledge, the proposed TWD model is superior to Jia and Liu’s method [9]. On the other hand, IFNs express fuzzy information by MDs and N-MDs, and have stronger ability to express complex and uncertain information, so IFNs are more powerful than fuzzy numbers and more widely used in practical decision-making. Therefore, the proposed TWD model has a wider application range than Jia and Liu’s method [9]. To sum up, the proposed TWD model is more reasonable and effective than the conventional MADM method by the IFPWA operator [33] and Jia and Liu’s method [9]. In addition, the proposed TWD model gives a new method for computing conditional probability, and also provides a new perspective for solving MADM without decision attributes. 6. Conclusion In this article, we have explored the relationship between TWDs and attribute values denoted by IFNs in MADM, and have proposed a new TWD model with IFNs to address MADM problems. First of all, we have introduced the concept and nature of relative loss functions, which are the basis of transforming IFNs into loss functions. Then, based on the features of IFNs [3], we have analyzed the decision rules of TWDs from positive viewpoint, negative viewpoint and comprehensive viewpoint, including the thresholds and their properties. Aiming at multiple attributes in MADM with unreasonable values, we have established a new integrated method of relative loss functions by using the IFPWA operator [33]. The method can deduce comprehensive loss functions so that MADM is transformed into TWD. Then, we have given the classification rules


of alternatives. Moreover, considering that the decision attribute is the core of calculating the conditional probability but does not exist in MADM, a new computing method for the conditional probability has been developed. Subsequently, a TWD model with IFNs has been established, and an algorithm has been given to describe the process of the proposed TWD model. Finally, we have used a practical example on selecting suppliers to demonstrate the proposed TWD model. Meanwhile, compared with the conventional MADM method by the IFPWA operator [33] and Jia and Liu's method [9], the results have shown that the proposed TWD model is feasible and effective, and can better give semantic explanations and capture decision makers' preferences.

Our research has established the connection between TWD and IFNs, which provides a new idea for solving MADM problems with unreasonable attribute values. In future research, we will continue to concentrate on the relationship between TWD and MADM with other information representations such as linguistic variables [25,50], Pythagorean fuzzy numbers [16] and simplified neutrosophic numbers [46]. We will also apply the proposed TWD model with IFNs to other practical MADM problems such as investment options [21], information filtering [54], medical diagnosis [4], machine learning [24,44,49,52,53], etc.

Declaration of competing interest

The authors declare that they have no commercial or associative interests that represent a conflict of interest in connection with this manuscript. There are no professional or other personal interests that could inappropriately influence the submitted work.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 71771140, 71471172, and 71801142), the Special Funds of Taishan Scholar Project of Shandong Province (No. ts201511045), and the Project of Cultural Masters and "the Four Kinds of a Batch" Talents.

References

[1] K.T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets Syst. 20 (1) (1986) 87–96.
[2] K.T. Atanassov, More on intuitionistic fuzzy sets, Fuzzy Sets Syst. 33 (1989) 37–46.
[3] T.Y. Chen, Bivariate models of optimism and pessimism in multi-criteria decision-making based on intuitionistic fuzzy sets, Inf. Sci. 181 (11) (2011) 2139–2165.
[4] Y. Chen, X. Yue, H. Fujita, S. Fu, Three-way decision support for diagnosis on focal liver lesions, Knowl.-Based Syst. 127 (2017) 85–99.
[5] S.K. De, R. Biswas, A. Roy, Some operations on intuitionistic fuzzy sets, Fuzzy Sets Syst. 114 (3) (2000) 477–484.
[6] S.K. De, R. Biswas, A. Roy, Some operations on intuitionistic fuzzy sets in terms of evidence theory: decision making aspect, Knowl.-Based Syst. 23 (8) (2000) 772–782.
[7] J.L. Deng, Introduction to grey system, J. Grey Syst. 1 (1) (1989) 1–24.
[8] M. Guo, L. Shang, Color image segmentation based on decision-theoretic rough set model and fuzzy C-means algorithm, in: IEEE International Conference on Fuzzy Systems, 2014, pp. 229–236.
[9] F. Jia, P. Liu, A novel three-way decision model under multiple-criteria environment, Inf. Sci. 471 (2019) 29–51.
[10] G. Lang, D. Miao, M. Cai, Three-way decision approaches to conflict analysis using decision-theoretic rough set theory, Inf. Sci. 406 (2017) 185–207.
[11] H. Li, X. Zhou, B. Huang, D. Liu, Cost-sensitive three-way decision: a sequential strategy, in: International Conference on Rough Sets and Knowledge Technology, Springer, Berlin, Heidelberg, 2013, pp. 325–337.
[12] D. Liang, D. Liu, Systematic studies on three-way decisions with interval-valued decision-theoretic rough sets, Inf. Sci. 276 (2014) 186–203.
[13] D. Liang, D. Liu, A novel risk decision making based on decision-theoretic rough sets under hesitant fuzzy information, IEEE Trans. Fuzzy Syst. 23 (2) (2015) 237–247.
[14] D. Liang, D. Liu, Deriving three-way decisions from intuitionistic fuzzy decision-theoretic rough sets, Inf. Sci. 300 (2015) 28–48.
[15] D. Liang, Z. Xu, D. Liu, Three-way decisions with intuitionistic fuzzy decision-theoretic rough sets based on point operators, Inf. Sci. 375 (2017) 183–201.
[16] D. Liang, Z. Xu, D. Liu, Y. Wu, Method for three-way decisions using ideal TOPSIS solutions at Pythagorean fuzzy information, Inf. Sci. 435 (2018) 282–295.
[17] D. Liu, T. Li, D. Liang, A new discriminant analysis approach under decision-theoretic rough sets, in: Proceedings of the 6th International Conference on Rough Sets and Knowledge Technology, in: LNAI, 2011, pp. 476–485.
[18] D. Liu, T. Li, D. Liang, Three-way government decision analysis with decision-theoretic rough sets, Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 20 (supp01) (2012) 119–132.
[19] D. Liu, T. Li, D. Ruan, Probabilistic model criteria with decision-theoretic rough sets, Inf. Sci. 181 (17) (2011) 3709–3722.
[20] D. Liu, D. Liang, Generalized three-way decisions and special three-way decisions, J. Frontiers Comput. Sci. Technol. 11 (3) (2017) 502–510.
[21] D. Liu, Y. Yao, T. Li, Three-way investment decisions with decision-theoretic rough sets, Int. J. Comput. Intell. Syst. 4 (1) (2011) 66–74.
[22] J. Liu, X. Zhou, B. Huang, H. Li, A three-way decision model based on intuitionistic fuzzy decision systems, in: International Joint Conference on Rough Sets, Springer, Cham, 2017, pp. 249–263.
[23] P. Liu, P. Wang, Some interval-valued intuitionistic fuzzy Schweizer-Sklar power aggregation operators and their application to supplier selection, Int. J. Syst. Sci. 49 (6) (2018) 1188–1211.
[24] C. Luo, T. Li, Y. Huang, H. Fujita, Updating three-way decisions in incomplete multi-scale information systems, Inf. Sci. 476 (2019) 274–289.
[25] F. Meng, J. Tang, H. Fujita, Linguistic intuitionistic fuzzy preference relations and their application to multi-criteria decision making, Inf. Fusion 46 (2019) 77–90.
[26] S. Opricovic, G.H. Tzeng, Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS, Eur. J. Oper. Res. 156 (2) (2004) 445–455.
[27] J. Qin, X. Liu, W. Pedrycz, An extended TODIM multi-criteria group decision making method for green supplier selection in interval type-2 fuzzy environment, Eur. J. Oper. Res. 258 (2) (2017) 626–638.
[28] L. Sun, L.Y. Wang, W.P. Ding, Y.H. Qian, J.C. Xu, Neighborhood multi-granulation rough sets-based attribute reduction using Lebesgue and entropy measures in incomplete neighborhood decision systems, Knowl.-Based Syst. (2019) 105373, https://doi.org/10.1016/j.knosys.2019.105373.


[29] X.D. Sun, Y. Jiao, J.S. Hu, Research on decision-making method based on gray correlation degree and TOPSIS, Chin. J. Manag. Sci. 13 (04) (2005) 63–68 (in Chinese).
[30] G.W. Wei, Gray relational analysis method for intuitionistic fuzzy multiple attribute decision making, Expert Syst. Appl. 38 (9) (2011) 11671–11677.
[31] Z. Xing, W. Xiong, H. Liu, A Euclidean approach for ranking intuitionistic fuzzy values, IEEE Trans. Fuzzy Syst. 26 (1) (2018) 353–365.
[32] Z. Xu, Intuitionistic fuzzy aggregation operators, IEEE Trans. Fuzzy Syst. 15 (6) (2007) 1179–1187.
[33] Z. Xu, Approaches to multiple attribute group decision making based on intuitionistic fuzzy power aggregation operators, Knowl.-Based Syst. 24 (6) (2011) 749–760.
[34] Z. Xu, M. Xia, Induced generalized intuitionistic fuzzy operators, Knowl.-Based Syst. 24 (2011) 197–209.
[35] Z. Xu, R.R. Yager, Some geometric aggregation operators based on intuitionistic fuzzy sets, Int. J. Gen. Syst. 35 (4) (2006) 417–433.
[36] R.R. Yager, The power average operator, IEEE Trans. Syst. Man Cybern., Part A, Syst. Hum. 31 (6) (2001) 724–731.
[37] J.T. Yao, N. Azam, Web-based medical decision support systems for three-way medical decision making with game-theoretic rough sets, IEEE Trans. Fuzzy Syst. 23 (1) (2015) 3–15.
[38] Y.Y. Yao, Probabilistic approaches to rough sets, Expert Syst. 20 (5) (2003) 287–297.
[39] Y.Y. Yao, Decision-theoretic rough set models, in: International Conference on Rough Sets and Knowledge Technology, Springer, Berlin, Heidelberg, 2007, pp. 1–12.
[40] Y.Y. Yao, Three-way decisions and cognitive computing, Cogn. Comput. 8 (4) (2016) 543–554.
[41] Y.Y. Yao, Three-way decisions with probabilistic rough sets, Inf. Sci. 180 (3) (2010) 341–353.
[42] Y.Y. Yao, Three-way decision and granular computing, Int. J. Approx. Reason. 103 (2018) 107–123.
[43] Y.Y. Yao, S.K.M. Wong, A decision theoretic framework for approximating concepts, Int. J. Man-Mach. Stud. 37 (6) (1992) 793–809.
[44] X. Yang, T. Li, H. Fujita, D. Liu, Y. Yao, A unified model of sequential three-way decisions and multilevel incremental processing, Knowl.-Based Syst. 134 (2017) 172–188.
[45] M. Yazdani, P. Chatterjee, E.K. Zavadskas, S.H. Zolfani, Integrated QFD-MCDM framework for green supplier selection, J. Clean. Prod. 142 (2017) 3728–3740.
[46] J. Ye, A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets, J. Intell. Fuzzy Syst. 26 (5) (2014) 2459–2466.
[47] C. Yu, Y. Shao, K. Wang, L. Zhang, A group decision making sustainable supplier selection approach using extended TOPSIS under interval-valued Pythagorean fuzzy environment, Expert Syst. Appl. 121 (2019) 1–17.
[48] H. Yu, Z. Liu, G. Wang, An automatic method to determine the number of clusters using decision-theoretic rough set, Int. J. Approx. Reason. 55 (1) (2014) 101–115.
[49] X.D. Yue, Y.F. Chen, D.Q. Miao, H. Fujita, Fuzzy neighborhood covering for three-way classification, Inf. Sci. 507 (2020) 795–808.
[50] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning, Part I, Inf. Sci. 8 (3) (1975) 199–249.
[51] X. Zhang, F. Jin, P. Liu, A grey relational projection method for multi-attribute decision making based on intuitionistic trapezoidal fuzzy number, Appl. Math. Model. 37 (5) (2013) 3467–3477.
[52] Y. Zhang, D. Miao, J. Wang, Z. Zhang, A cost-sensitive three-way combination technique for ensemble learning in sentiment classification, Int. J. Approx. Reason. 105 (2019) 85–97.
[53] H.Y. Zhang, S.Y. Yang, Three-way group decisions with interval-valued decision-theoretic rough sets based on aggregating inclusion measures, Int. J. Approx. Reason. 110 (2019) 31–45.
[54] B. Zhou, Y. Yao, J. Luo, Cost-sensitive three-way email spam filtering, J. Intell. Inf. Syst. 42 (1) (2014) 19–45.