Accepted Manuscript
Managing Incomplete Preferences and Consistency Improvement in Hesitant Fuzzy Linguistic Preference Relations with Applications in Group Decision Making
Hongbin Liu, Yue Ma, Le Jiang
PII: S1566-2535(18)30375-0
DOI: https://doi.org/10.1016/j.inffus.2018.10.011
Reference: INFFUS 1036
To appear in: Information Fusion
Received date: 27 May 2018
Revised date: 18 October 2018
Accepted date: 31 October 2018
Please cite this article as: Hongbin Liu, Yue Ma, Le Jiang, Managing Incomplete Preferences and Consistency Improvement in Hesitant Fuzzy Linguistic Preference Relations with Applications in Group Decision Making, Information Fusion (2018), doi: https://doi.org/10.1016/j.inffus.2018.10.011
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights
• A method to compute missing elements of IHFLPRs is introduced.
• An additive consistency improvement method of IHFLPRs is proposed.
• A GDM model based on IHFLPRs is introduced.
Managing Incomplete Preferences and Consistency Improvement in Hesitant Fuzzy Linguistic Preference Relations with Applications in Group Decision Making
Hongbin Liu^a, Yue Ma^b, Le Jiang^{c,∗}
a School of Mathematics and Information Science, Henan University of Economics and Law, Zhengzhou, Henan 450046, China.
b School of Mathematics and Statistics, North China University of Water Resources and Electric Power, Zhengzhou, Henan 450046, China.
c School of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, Henan 450000, China.
Abstract
Incomplete hesitant fuzzy linguistic preference relations (IHFLPRs) are useful in decision making, as they combine the advantages of hesitant fuzzy linguistic term sets and incomplete fuzzy preference relations. Existing research on IHFLPRs pays little attention to missing elements, and existing consistency improvement processes change the original information greatly. Inspired by the worst and best consistency indexes of hesitant fuzzy linguistic preference relations, this paper constructs several optimization models to calculate the missing elements of IHFLPRs, yielding a complete hesitant fuzzy linguistic preference relation. Furthermore, an algorithm is introduced to improve the additive consistency of the hesitant fuzzy linguistic preference relation to an acceptable level. Finally, a group decision making model based on IHFLPRs is introduced and an example is presented.
Keywords: Incomplete hesitant fuzzy linguistic preference relation, hesitant fuzzy linguistic term set, worst consistency index, best consistency index.
∗ Corresponding author. Address: School of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, Henan 450000, China.
Email addresses: [email protected] (Hongbin Liu), [email protected] (Yue Ma), [email protected] (Le Jiang)
Preprint submitted to Information Fusion
November 1, 2018
1. Introduction
The hesitant fuzzy set (HFS) [24, 25] is an important extension of the fuzzy set [40], in which several values may serve as the membership degrees of an object belonging to a given set. Based on HFSs, many extended forms have been introduced, among which the hesitant fuzzy linguistic term set (HFLTS) [20] is a popular one, and many research results on it have been developed. It is noted that HFLTSs can also consist of unbalanced linguistic terms, although in most cases they are composed of balanced linguistic terms [6]. Comprehensive reviews on HFSs and HFLTSs are provided in [19, 22]. Fuzzy preference relations are commonly used in group decision making (GDM), since they permit decision makers to provide one pairwise comparison at a time without having to consider the other alternatives simultaneously. HFLTSs facilitate decision makers in expressing their opinions when they are hesitant under a linguistic environment. Combining both advantages, hesitant fuzzy linguistic preference relations (HFLPRs) are a convenient tool for decision makers to express their preferences in the form of HFLTSs when they think that the preference of one alternative over another is better expressed by several linguistic terms than by a single one. Rodríguez et al. [21] were the first to utilize HFLTSs in fuzzy preference relations, and they proposed a GDM model. Constructing a HFLPR requires decision makers to provide n(n − 1)/2 pairwise comparisons for n alternatives. But in some situations, some elements of the HFLPR are not provided due to lack of necessary knowledge, avoidance of eliciting sensitive information, time pressure, etc. In these situations, an incomplete hesitant fuzzy linguistic preference relation (IHFLPR) is utilized, which is similar to a traditional incomplete fuzzy preference relation (IFPR). There are mainly two ways to deal with IFPRs.
One way is deriving priority weights to rank alternatives by using methods such as the least-squares method [7], the chi-square method [31], the logarithmic least squares method [34], and the eigenvector method [35]. The other way is estimating the missing elements of the IFPR and then conducting calculations on the complete FPR [1–3, 11]. Based on IFPRs, research on incomplete hesitant fuzzy preference relations (IHFPRs) and IHFLPRs mainly focuses on the following issues: 1) Deriving priority weights directly from the known elements of an IHFPR or an IHFLPR. This method is utilized in [32, 45, 47]. 2) Constructing a consistent FPR or a fuzzy linguistic preference relation
(FLPR) by using known elements. The obtained FPR or FLPR is generally totally different from the original IHFPR or IHFLPR. This method is utilized in [23, 48].
Consistency is an important issue in using HFLPRs or HFPRs. There are mainly three types of consistency discussed in the literature: weak consistency, additive consistency and multiplicative consistency. They represent different human understandings of consistency. Considering that additive consistency is stricter than weak consistency and computationally simpler than multiplicative consistency, this paper considers the additive consistency of HFLPRs. Regarding consistency improvement methods for HFLPRs or HFPRs, most existing research can be categorized as follows:
1) Constructing a consistent HFLPR or HFPR which is totally different from the original one. Generally the adjusted HFLPR or HFPR is obtained by using an optimization model. Sometimes the obtained HFLPR or HFPR might not be accepted by the decision makers, since all of their original opinions are adjusted. Studies using this method include [23, 45, 50].
2) Adjusting the HFLPR or HFPR iteratively until an acceptable consistency is reached, so that only some elements of the original HFLPR or HFPR are modified. In such methods, the consistency is calculated in an average sense. Studies using this method include [15, 27, 36, 49].
Recently, Li et al. [13] introduced the interval consistency index (ICI), average consistency index (ACI), worst consistency index (WCI), and best consistency index (BCI) of HFLPRs. The WCI and BCI are obtained by computing the worst and the best additive consistency of the FLPRs associated with a HFLPR, respectively. The ACI is the arithmetic mean of the consistency indexes of all associated FLPRs, and the ICI is an interval whose lower and upper bounds are the WCI and BCI, respectively. By considering these consistency indexes, one can understand the overall consistency state of a HFLPR. A similar method is utilized in [12] to compute the optimistic and pessimistic consistency indexes of FLPRs, and in [5] to calculate the classical, boundary and average consistency indexes of interval fuzzy preference relations. In [14], an optimization model maximizing the average consistency index is introduced to obtain personalized numerical values for linguistic terms.
Considering that IHFLPRs are a convenient tool for decision makers, in this paper we focus on methods to manage the incomplete elements of an IHFLPR and afterwards improve the additive consistency of the obtained HFLPR. Existing research dealing with IHFLPRs or IHFPRs pays little attention to missing elements. Thus we introduce a method to calculate each missing element based on the WCI and BCI of a HFLPR. We then propose a direct additive consistency improvement approach that raises the worst additive consistency to an acceptable level, rather than the average additive consistency as in most existing research. This approach follows the second type of consistency improvement method, improving consistency in an iterative way. It seems more acceptable to decision makers, since some of their original information is preserved. The novelty of this paper is as follows: 1) An algorithm to calculate the missing elements of an IHFLPR is proposed. Motivated by the WCI and BCI in [13], this algorithm considers the most hesitant situation and obtains the lower and upper bounds of each missing element.
2) A direct way to improve additive consistency of the obtained HFLPR is introduced. This method improves the WCI and the BCI of the HFLPR in an iterative way until an acceptable consistency is reached.
3) A GDM model based on IHFLPRs is introduced. In this model, closeness of each expert to other experts is reflected in decision makers’ weights in the aggregation process.
The remainder of this paper is structured as follows. Section 2 reviews the linguistic 2-tuple model, HFLTSs, HFLPRs and IHFLPRs. Section 3 presents a method to compute the missing elements of an IHFLPR. Section 4 introduces an additive consistency improvement process, together with some numerical examples. Section 5 presents a GDM model and an example. Finally, Section 6 concludes the paper.
2. Preliminaries
In this section we briefly review the linguistic 2-tuple model, HFLTSs, HFLPRs and IHFLPRs.
2.1. Linguistic 2-tuple model
For decision making problems under a linguistic environment, a linguistic term set S = {s_0, s_1, ..., s_g} is utilized, in which g + 1 is called the cardinality of S. Sometimes decision makers think it suitable to express an assessment between two linguistic terms. For this case, the linguistic 2-tuple model [9] and the symbolic linguistic model [37] have been introduced. In this paper we use the linguistic 2-tuple model.
Definition 1. [9] A linguistic 2-tuple (s_i, α) is defined by a mapping ∆ : [0, g] → S × [−0.5, 0.5) such that
\[ \Delta(\beta) = (s_i, \alpha), \quad \text{with } i = \mathrm{round}(\beta),\; \alpha = \beta - i, \]
where α is called the symbolic translation.
Similarly, there exists a mapping: ∆−1 : S × [−0.5, 0.5) → [0, g], such that ∆−1 (si , α) = i + α. For simplicity, the set of linguistic 2-tuples can be denoted as S. To compare two linguistic 2-tuples, the following rules are introduced [9]: For (si1 , α1 ), (si2 , α2 ) ∈ S, 1) If i1 < i2 , then (si1 , α1 ) < (si2 , α2 );
2) If i1 = i2 , then (si1 , α1 ) < (si2 , α2 ) for α1 < α2 , and (si1 , α1 ) = (si2 , α2 ) for α1 = α2 .
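The 2-tuple operations above can be sketched in a few lines of Python; this is a minimal illustration under our own naming (`delta`, `delta_inv`, `compare` are not the paper's notation), with a 2-tuple (s_i, α) represented by the pair (i, α).

```python
import math

# Sketch of the linguistic 2-tuple model (Definition 1). A 2-tuple (s_i, alpha)
# is represented as the pair (i, alpha); helper names are illustrative only.

def delta(beta):
    """Delta: map beta in [0, g] to (i, alpha) with i = round(beta), alpha = beta - i."""
    i = math.floor(beta + 0.5)  # round half up, so alpha stays in [-0.5, 0.5)
    return (i, beta - i)

def delta_inv(two_tuple):
    """Delta^{-1}: map (i, alpha) back to the numeric value i + alpha."""
    i, alpha = two_tuple
    return i + alpha

def compare(t1, t2):
    """Return -1, 0 or 1 following the 2-tuple comparison rules,
    which reduce to comparing the numeric values i + alpha."""
    v1, v2 = delta_inv(t1), delta_inv(t2)
    return (v1 > v2) - (v1 < v2)
```

Note that `math.floor(beta + 0.5)` is used instead of Python's built-in `round`, since the latter rounds halves to even and would violate α ∈ [−0.5, 0.5).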
2.2. HFLTSs
HFLTSs can deal with situations in which decision makers are hesitant in providing assessments. In the following we use the linguistic term set S = {s_0, s_1, ..., s_g}.
Definition 2. [20] A HFLTS is a finite set of consecutive linguistic terms of S.
The comparison rules of HFLTSs are provided in [28]. Let H_S = {h_1, ..., h_{#H_S}} be a HFLTS based on S, where h_i ∈ S, i = 1, ..., #H_S, and #H_S is the cardinality of H_S. The expected value of H_S is defined as
\[ E(H_S) = \Delta\bigg(\frac{1}{\#H_S}\sum_{i=1}^{\#H_S}\Delta^{-1}(h_i)\bigg), \tag{1} \]
where ∆ and ∆^{-1} are defined in Definition 1. The variance of H_S is defined as
\[ Var(H_S) = \frac{1}{\#H_S}\sum_{i=1}^{\#H_S}\Big(\Delta^{-1}(h_i) - \Delta^{-1}\big(E(H_S)\big)\Big)^2. \tag{2} \]
For two HFLTSs HS1 , HS2 , the comparison rules are as follows: 1) If E(HS1 ) < E(HS2 ), then HS1 ≺ HS2 ; 2) If E(HS1 ) = E(HS2 ), and
a) if Var(H_{S_1}) < Var(H_{S_2}), then H_{S_1} ≻ H_{S_2},
b) if V ar(HS1 ) = V ar(HS2 ), then HS1 = HS2 .
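Eqs. (1)–(2) and the comparison rules can be illustrated with a short Python sketch (our own function names, not the paper's); a HFLTS is represented by the list of its term indices, so ∆^{-1}(h_i) is simply the index.

```python
# Sketch of the expected value (Eq. (1)), variance (Eq. (2)) and comparison
# rules of a HFLTS. A HFLTS is a list of term indices; names are illustrative.

def expected_value(H):
    """Numeric expected value: the mean of the term indices."""
    return sum(H) / len(H)

def variance(H):
    """Mean squared deviation of the term indices from the expected value."""
    e = expected_value(H)
    return sum((h - e) ** 2 for h in H) / len(H)

def precedes(H1, H2):
    """True iff H1 comes strictly before H2: smaller expected value first;
    ties are broken by larger variance (smaller variance is preferred)."""
    e1, e2 = expected_value(H1), expected_value(H2)
    if e1 != e2:
        return e1 < e2
    return variance(H1) > variance(H2)
```

For instance, {s_1, s_3} and {s_2, s_2} have the same expected value, but the latter wins the tie because its variance is smaller.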
2.3. HFLPRs and IHFLPRs
We review FLPRs, HFLPRs, IHFLPRs and consistency measures.
Definition 3. [1] An FLPR is defined as A = (aij )n×n , where aij ∈ S denotes the preference degree of alternative xi to xj , which satisfies: ∆−1 (aij ) + ∆−1 (aji ) = g.
The additive consistent FLPR is defined as follows:
Definition 4. [1] An FLPR A = (a_{ij})_{n×n} is additive consistent if
\[ \Delta^{-1}(a_{ij}) + \Delta^{-1}(a_{jk}) = \Delta^{-1}(a_{ik}) + \frac{g}{2}, \quad i, j, k = 1, \ldots, n. \tag{3} \]
In some situations some elements of A are missing; as a result, an incomplete FLPR (IFLPR) can be defined as A = (a_{ij})_{n×n}, where a_{ij} = x represents a missing element, and the known elements satisfy ∆^{-1}(a_{ij}) + ∆^{-1}(a_{ji}) = g. If there is at least one known element in each row and column of an IFLPR, then it can become a complete FLPR after the missing values are computed [1].
The consistency index (CI) of an FLPR A = (a_{ij})_{n×n} is defined as follows [13, 46]:
\[ CI(A) = 1 - \frac{4}{gn(n-1)(n-2)} \sum_{i<j<k} \left| \Delta^{-1}(a_{ij}) + \Delta^{-1}(a_{jk}) - \Delta^{-1}(a_{ik}) - \frac{g}{2} \right|. \tag{4} \]
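For small relations, Eq. (4) can be evaluated directly. A minimal Python sketch (our naming; the FLPR is an n × n matrix of term indices, so ∆^{-1}(a_ij) is just A[i][j]):

```python
from itertools import combinations

# Sketch of Eq. (4): additive consistency index of an FLPR over S = {s_0,...,s_g}.
# A is an n x n matrix of term indices; only triples i < j < k contribute.

def consistency_index(A, g):
    n = len(A)
    dev = sum(abs(A[i][j] + A[j][k] - A[i][k] - g / 2)
              for i, j, k in combinations(range(n), 3))
    return 1 - 4 * dev / (g * n * (n - 1) * (n - 2))
```

For a perfectly additive consistent FLPR all deviations vanish and the index equals 1.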
A HFLPR is defined as follows:
Definition 5. [50] A HFLPR is denoted as H = (H_{ij})_{n×n}, where H_{ij} = \{h^1_{ij}, \ldots, h^{\#H_{ij}}_{ij}\} are HFLTSs,
\[ H_{ji} = Neg(H_{ij}) = \Big\{\Delta\big(g - \Delta^{-1}(h^1_{ij})\big), \ldots, \Delta\big(g - \Delta^{-1}(h^{\#H_{ij}}_{ij})\big)\Big\}, \]
and #H_{ij} is the cardinality of H_{ij}, i, j = 1, ..., n.
It is noted that the above definition of a HFLPR is slightly different from the definition of a HFPR in [30, 33]. Since we consider the elements in a HFLTS to be equally possible, we can arrange them arbitrarily. For simplicity, the elements of H_{ij} are arranged in ascending order, h^1_{ij} < h^2_{ij} < ... < h^{#H_{ij}}_{ij}, for i < j. Different FLPRs can be obtained from a HFLPR; they are called the FLPRs associated with the HFLPR.
The lower bound and upper bound of the consistency index of a HFLPR, i.e., WCI and BCI, are calculated by using the optimization models (M-1) and (M-2), respectively.
\[ (M\text{-}1)\qquad \min\; 1 - \frac{4}{gn(n-1)(n-2)} \sum_{i<j<k} \left|\Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{jk}) - \Delta^{-1}(l_{ik}) - \frac{g}{2}\right|, \quad \text{s.t. } L = (l_{ij})_{n\times n} \text{ is an FLPR associated with } H; \]
\[ (M\text{-}2)\qquad \max\; 1 - \frac{4}{gn(n-1)(n-2)} \sum_{i<j<k} \left|\Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{jk}) - \Delta^{-1}(l_{ik}) - \frac{g}{2}\right|, \quad \text{s.t. } L = (l_{ij})_{n\times n} \text{ is an FLPR associated with } H. \]
By introducing 0-1 variables, the above models can be transformed into 0-1 programming models; for more details, we refer to [13]. If some elements of a HFLPR are missing, then an IHFLPR can be defined.
Definition 7. [23] An IHFLPR is represented by H = (H_{ij})_{n×n}, where H_{ij} = x represents a missing element, and the known elements satisfy the condition in Definition 5.
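Instead of the 0-1 programming formulation, for small HFLPRs the WCI and BCI can also be obtained by brute force, enumerating every FLPR associated with H (Definition 6). A Python sketch under our own conventions (upper-triangular entries as lists of term indices, reciprocals implied), feasible only for small n and small HFLTSs:

```python
from itertools import combinations, product

# Brute-force sketch of (M-1)/(M-2): WCI and BCI of a complete HFLPR by
# enumerating all associated FLPRs. Names and data layout are illustrative.

def associated_ci(choice, pairs, n, g):
    """CI (Eq. (4)) of the FLPR that picks value choice[p] for pair pairs[p]."""
    a = dict(zip(pairs, choice))
    dev = sum(abs(a[(i, j)] + a[(j, k)] - a[(i, k)] - g / 2)
              for i, j, k in combinations(range(n), 3))
    return 1 - 4 * dev / (g * n * (n - 1) * (n - 2))

def wci_bci(H, g):
    """H[i][j] (i < j) is the list of term indices of the HFLTS H_ij."""
    n = len(H)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cis = [associated_ci(choice, pairs, n, g)
           for choice in product(*(H[i][j] for i, j in pairs))]
    return min(cis), max(cis)
```

The number of associated FLPRs grows as the product of the cardinalities #H_ij, so the 0-1 programming models of [13] remain the practical route for larger problems.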
3. Computing missing elements of IHFLPRs
In this section, we develop a method to calculate the missing elements of an IHFLPR. Assume that H = (H_{ij})_{n×n} is an IHFLPR. If H_{ij} = x is missing, then H_{ji} is also missing. For simplicity, we only give the known elements H_{ij} for i < j; the H_{ij} with i > j can be obtained by reciprocity. The models to compute the WCI and BCI in [13] can be applied here. First we denote the set of known elements as Ω = {(i, j) | H_{ij} is known}, and the set of missing elements as Ω̄ = {(i, j) | H_{ij} is missing}. We then introduce a 0-1 variable vector x_{ij} = (x^1_{ij}, ..., x^{#H_{ij}}_{ij}) for H_{ij}, (i, j) ∈ Ω, as follows:
\[ x^r_{ij} = \begin{cases} 0, & l_{ij} \neq h^r_{ij}, \\ 1, & l_{ij} = h^r_{ij}, \end{cases} \qquad r = 1, \ldots, \#H_{ij}, \]
where l_{ij}, i, j = 1, 2, ..., n, are defined in Definition 6. Then the following model (M-3) to compute the WCI of H can be constructed:
\[ (M\text{-}3)\qquad \min\; 1 - \frac{4}{gn(n-1)(n-2)} \sum_{i<j<k} \left|\Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{jk}) - \Delta^{-1}(l_{ik}) - \frac{g}{2}\right| \]
\[ \text{s.t.}\quad \Delta^{-1}(l_{ij}) = \sum_{r=1}^{\#H_{ij}} x^r_{ij}\,\Delta^{-1}(h^r_{ij}),\; \sum_{r=1}^{\#H_{ij}} x^r_{ij} = 1,\; x^r_{ij} \in \{0,1\},\; (i,j) \in \Omega;\quad 0 \le \Delta^{-1}(l_{ij}) \le g,\; (i,j) \in \bar{\Omega};\quad \Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{ji}) = g,\; i, j = 1, \ldots, n. \]
The BCI of H is obtained from the following model:
\[ (M\text{-}4)\qquad \max\; 1 - \frac{4}{gn(n-1)(n-2)} \sum_{i<j<k} \left|\Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{jk}) - \Delta^{-1}(l_{ik}) - \frac{g}{2}\right| \]
subject to the same constraints as (M-3).
By solving the above two models, we can obtain an FLPR A = (a_{ij})_{n×n} with CI(A) = WCI(H), and an FLPR B = (b_{ij})_{n×n} with CI(B) = BCI(H). For any missing element H_{ij}, let h^l_{ij} and h^u_{ij} be the lower and upper bounds of H_{ij}, respectively, obtained as follows:
\[ h^l_{ij} = \min\{a_{ij}, b_{ij}\}, \qquad h^u_{ij} = \max\{a_{ij}, b_{ij}\}. \tag{5} \]
Since h^l_{ij}, h^u_{ij} might not be linguistic terms in S, we assume that s_l, s_u ∈ S are the linguistic terms closest to h^l_{ij}, h^u_{ij}, i.e.,
\[ \big|\Delta^{-1}(s_l) - \Delta^{-1}(h^l_{ij})\big| = \min_{s_t \in S} \big|\Delta^{-1}(s_t) - \Delta^{-1}(h^l_{ij})\big|, \qquad \big|\Delta^{-1}(s_u) - \Delta^{-1}(h^u_{ij})\big| = \min_{s_t \in S} \big|\Delta^{-1}(s_t) - \Delta^{-1}(h^u_{ij})\big|. \tag{6} \]
Then the missing element H_{ij} can be obtained as follows:
\[ H_{ij} = \bigcup_{m=0}^{u-l} \{s_{l+m}\}. \tag{7} \]
It is easy to see that H_{ij} is a HFLTS. To summarize the above process, in the following we provide Algorithm 1 to compute the missing elements of an IHFLPR.
Algorithm 1.
Input: An IHFLPR H.
Output: A complete HFLPR H.
Step 1 Utilize models (M-3) and (M-4) to compute W CI, BCI, FLPRs A, B, hlij and huij .
Step 2 Obtain each missing element H_{ij} by using Eqs. (5), (6), (7). As a result, a complete HFLPR H is obtained.
Step 3 Output H.
Step 4 End.
The algorithm only utilizes the known elements of the IHFLPR, and considers the most hesitant situation for each missing element. The obtained missing elements are HFLTSs, which fits the hesitancy that motivates the use of HFLPRs. In the following we provide a numerical example to clarify the algorithm.
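Numerically, Eqs. (5)–(7) reduce to rounding the two bounds to their closest terms of S and taking every term in between. A small Python sketch (our naming; terms are indices in {0, ..., g}):

```python
# Sketch of Eqs. (5)-(7): build a missing HFLTS from the entries a_ij, b_ij of
# the WCI- and BCI-optimal FLPRs (possibly fractional numeric values).

def missing_hflts(a_ij, b_ij, g):
    lo, hi = min(a_ij, b_ij), max(a_ij, b_ij)    # Eq. (5): lower/upper bounds
    terms = range(g + 1)
    l = min(terms, key=lambda t: abs(t - lo))    # Eq. (6): closest terms in S
    u = min(terms, key=lambda t: abs(t - hi))
    return list(range(l, u + 1))                 # Eq. (7): {s_l, ..., s_u}
```

With the bounds of Example 1 below (h^l_13 = s_0, h^u_13 = s_2 and h^l_23 = s_0, h^u_23 = s_6) this reproduces H_13 = {s_0, s_1, s_2} and H_23 = {s_0, ..., s_6}.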
Example 1. Consider the example in [23]. Let S = {s_0, ..., s_6} be a linguistic term set, and let an IHFLPR be given as
\[ H = \begin{pmatrix} \{s_3\} & \{s_5\} & x & \{s_2, s_3\} \\ & \{s_3\} & x & \{s_1, s_2\} \\ & & \{s_3\} & \{s_4\} \\ & & & \{s_3\} \end{pmatrix}. \]
In the following, Algorithm 1 is utilized to obtain a complete HFLPR.
Step 1. Based on model (M-3), the following model (M-5) can be constructed to compute the WCI:
\[ (M\text{-}5)\qquad \min\; 1 - \frac{1}{36} \sum_{i<j<k} \left|\Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{jk}) - \Delta^{-1}(l_{ik}) - 3\right|, \]
subject to the constraints of (M-3) instantiated for this IHFLPR (g = 6, n = 4).
Similarly, the following model (M-6) can be constructed to compute BCI:
\[ (M\text{-}6)\qquad \max\; 1 - \frac{1}{36} \sum_{i<j<k} \left|\Delta^{-1}(l_{ij}) + \Delta^{-1}(l_{jk}) - \Delta^{-1}(l_{ik}) - 3\right|, \]
subject to the same constraints.
Solving the above models with LINGO, we obtain
\[ A = \begin{pmatrix} s_3 & s_5 & s_0 & s_2 \\ & s_3 & s_6 & s_1 \\ & & s_3 & s_4 \\ & & & s_3 \end{pmatrix}, \qquad B = \begin{pmatrix} s_3 & s_5 & s_2 & s_3 \\ & s_3 & s_0 & s_1 \\ & & s_3 & s_4 \\ & & & s_3 \end{pmatrix}, \]
with CI(A) = WCI(H) = 0.5556 and CI(B) = BCI(H) = 1.
Step 2. By using Eq. (5), we have h^l_13 = s_0, h^u_13 = s_2, h^l_23 = s_0, h^u_23 = s_6. Therefore, H_13 = {s_0, s_1, s_2} and H_23 = {s_0, ..., s_6}.
Step 3. The complete HFLPR H is obtained as
\[ H = \begin{pmatrix} \{s_3\} & \{s_5\} & \{s_0, s_1, s_2\} & \{s_2, s_3\} \\ & \{s_3\} & \{s_0, \ldots, s_6\} & \{s_1, s_2\} \\ & & \{s_3\} & \{s_4\} \\ & & & \{s_3\} \end{pmatrix}. \]
Remark 1. From this example, we can see that the obtained missing elements are HFLTSs with high hesitancy. In a situation without additional information, calculating the missing elements based on the WCI and BCI may be a reasonable way, since it considers the most hesitant case for each missing element.
4. Additive consistency improvement process for HFLPRs
In this section, we introduce a method to improve the additive consistency of the obtained HFLPRs, and present several numerical examples.
4.1. Improving the additive consistency of HFLPRs
After the missing elements of an IHFLPR are computed, a complete HFLPR is obtained. We know that the number of FLPRs associated with the complete HFLPR H = (H_{ij})_{n×n} is:
\[ N_H = \prod_{i<j} \#H_{ij}. \tag{8} \]
In all of these FLPRs, our concern is the FLPR A with CI(A) = WCI(H). It is noted that each linguistic term in a HFLTS is treated with equal importance. Therefore, we think that the consistency of a HFLPR is
acceptable if its WCI reaches a predefined consistency threshold. Under this assumption, the consistency of all associated FLPRs is acceptable. In the following we give the definition of acceptable consistency.
Definition 8. Given a consistency threshold α ∈ [0, 1], we say the additive consistency of a HFLPR H = (Hij )n×n is acceptable if W CI(H) ≥ α.
Note that the concern in [5, 13] is the average consistency index, which is an expectation of the consistency indexes of all FPRs distributed in the original interval FPR, or an average of the consistency indexes of all FLPRs associated with the HFLPR. In some sense, the additive consistency improvement methods in [15, 46] can be viewed as the same type, considering the average consistency but not the WCI. In the following we consider the process of improving the consistency of a HFLPR H = (H_{ij})_{n×n}. Since BCI(H) ≥ WCI(H), we consider the following cases:
Case 1. If WCI(H) ≥ α, then the consistency of H is acceptable.
Case 2. If WCI(H) < α ≤ BCI(H), then H needs adjustment. In this case we focus on the method to improve WCI(H).
Case 3. If WCI(H) ≤ BCI(H) < α, then H needs adjustment. In this case we focus on the method to improve BCI(H) until BCI(H) ≥ α ≥ WCI(H), which becomes Case 2. If the improvement of BCI(H) results in BCI(H) ≥ WCI(H) ≥ α, then it becomes Case 1.
In the following we introduce an improvement method for BCI(H) in Case 3. If BCI(H) < α, then we need to find an FLPR that serves as the improvement goal for H. An FLPR H̄ with CI(H̄) = 1 is the best one for our purpose. Meanwhile, there are numerous FLPRs with consistency 1, and our concern is the one closest to H. Under this assumption and motivated by [5], we construct the following optimization model (M-7) to obtain H̄ = (h̄_{ij})_{n×n}:
\[ (M\text{-}7)\qquad \min\; \frac{2}{gn(n-1)} \sum_{i<j} \left| \Delta^{-1}(\bar{h}_{ij}) - x^1_{ij}\,\Delta^{-1}(h^1_{ij}) - x^2_{ij}\,\Delta^{-1}\big(h^{\#H_{ij}}_{ij}\big) \right| \]
\[ \text{s.t.}\quad 1 - \frac{4}{gn(n-1)(n-2)} \sum_{i<j<k} \left|\Delta^{-1}(\bar{h}_{ij}) + \Delta^{-1}(\bar{h}_{jk}) - \Delta^{-1}(\bar{h}_{ik}) - \frac{g}{2}\right| = 1, \quad x^1_{ij} + x^2_{ij} = 1,\; x^1_{ij}, x^2_{ij} \in \{0,1\},\; i < j. \]
In the above model, the objective function makes H̄ close to H. Since H_{ij} is a HFLTS that may include more than one term, while h̄_{ij} is a single linguistic term, we make them close to each other by minimizing the distance between h̄_{ij} and the lower or upper bound of H_{ij}, that is, h^1_{ij} or h^{#H_{ij}}_{ij}. As a result h̄_{ij} is close to H_{ij}, and H̄ is close to H. In order to increase the WCI and BCI of a HFLPR H, we first give a useful observation: if we add a linguistic term to any element of a HFLPR H, then WCI(H) may decrease, while BCI(H) may increase. The reason is that adding one linguistic term increases the number of FLPRs associated with H, as can be seen from Eq. (8). Meanwhile, recall that WCI(H) and BCI(H) are obtained over all of these FLPRs; thus more FLPRs may cause WCI(H) to decrease and BCI(H) to increase. Similarly, removing one linguistic term from any element of H may cause WCI(H) to increase and BCI(H) to decrease. It is straightforward to see that a high cardinality of a HFLTS means high hesitancy, which makes it more difficult to reach a decision from a HFLPR. Therefore, in order to increase WCI(H), it is better to remove some linguistic terms from the HFLTSs in a HFLPR rather than to add terms to them. In this process, one or more terms can be removed in each round; to make the process more efficient, in each round the number of removed terms can be set to ⌊#H_{ij}/2⌋, where ⌊·⌋ is the floor function. On the other hand, in order to increase BCI(H), it is not good to increase the cardinality of the HFLTSs, since this would contradict the step increasing WCI(H). A better way is to modify the HFLTSs with low cardinality gradually until BCI(H) ≥ α. The consistency improvement process for IHFLPRs is shown in Figure 1. It mainly has two phases: computing the missing elements and the consistency improvement process. Algorithm 1 has provided the method to compute the missing elements.
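One round of the removal heuristic just described can be sketched in Python (a simplification under our own naming: drop the ⌊#H_ij/2⌋ terms farthest from the target value b_ij, keeping the side containing the target):

```python
# Sketch of one round of the WCI-improvement heuristic: from a HFLTS (a list of
# consecutive term indices), remove floor(#H/2) terms on the side far from the
# target term, keeping the side containing the target. Illustrative only.

def trim_towards(H, target):
    k = len(H) // 2                  # number of terms removed in one round
    if k == 0:
        return list(H)               # singleton HFLTSs are left untouched
    # keep the len(H) - k terms closest to the target, then restore order
    kept = sorted(H, key=lambda t: abs(t - target))[:len(H) - k]
    return sorted(kept)
```

For instance, trimming {s_0, s_1, s_2} towards s_2 drops s_0 and keeps {s_1, s_2}.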
The consistency improvement process includes BCI improvement (Step 5 of Algorithm 2) and WCI improvement (Step 7 of Algorithm 2). In the following we present Algorithm 2, showing the consistency improvement process for IHFLPRs in detail.
Algorithm 2.
Input: An IHFLPR H and a consistency threshold α.
Output: A HFLPR H̄^t with acceptable consistency.
Step 1 Utilize Algorithm 1 to obtain a complete HFLPR H.
Figure 1: Consistency improvement process for IHFLPRs
Step 2 Utilize models (M-3) and (M-4) to compute FLPRs A, B, and W CI(H), BCI(H). Turn to the next step.
Step 3 If WCI(H) ≥ α, then turn to Step 9.
If WCI(H) < α ≤ BCI(H), then turn to Step 7.
If BCI(H) < α, then turn to the next step.
Step 4 Set t = 0 and H̄^t = H.
Step 5 Utilize model (M-7) to compute an FLPR H̄ with CI(H̄) = 1. For the elements with #H̄^t_{ij} = 1, modify H̄^t according to the following cases:
1) If h̄_{ij} > h^{t,u}_{ij}, where h^{t,u}_{ij} = max_{h^t_{ij} ∈ H̄^t_{ij}} {h^t_{ij}}, then H̄^t_{ij} should increase to a linguistic term close to h̄_{ij}. One possible way is to let H̄^t_{ij} increase to the adjacent linguistic term of h^{t,u}_{ij} at each round.
2) If h̄_{ij} < h^{t,l}_{ij}, where h^{t,l}_{ij} = min_{h^t_{ij} ∈ H̄^t_{ij}} {h^t_{ij}}, then H̄^t_{ij} should decrease to a linguistic term close to h̄_{ij}. One possible way is to let H̄^t_{ij} decrease to the adjacent linguistic term of h^{t,l}_{ij} at each round.
Turn to the next step.
Step 6 Calculate B̄^t with CI(B̄^t) = BCI(H̄^t).
If BCI(H̄^t) ≥ α > WCI(H̄^t), then turn to the next step.
If WCI(H̄^t) ≥ α, then turn to Step 9.
If BCI(H̄^t) < α, then let t = t + 1, consider the elements H̄^t_{ij} whose cardinality #H̄^t_{ij} is one larger than that considered in the previous round, and turn to 1).
Step 7 Modify H̄^t by considering the following cases:
3) If h^{t,1}_{ij} ≥ b̄^t_{ij}, then remove ⌊#H̄^t_{ij}/2⌋ linguistic terms of H̄^t_{ij} which are far from h^{t,1}_{ij}, where h^{t,1}_{ij} = min_{h^t_{ij} ∈ H̄^t_{ij}} {h^t_{ij}}.
4) If h^{t,#H̄^t_{ij}}_{ij} ≤ b̄^t_{ij}, then remove ⌊#H̄^t_{ij}/2⌋ linguistic terms of H̄^t_{ij} which are far from h^{t,#H̄^t_{ij}}_{ij}, where h^{t,#H̄^t_{ij}}_{ij} = max_{h^t_{ij} ∈ H̄^t_{ij}} {h^t_{ij}}.
Turn to the next step.
Step 8 Calculate WCI(H̄^t).
If WCI(H̄^t) ≥ α, then turn to the next step.
If WCI(H̄^t) < α, then let t = t + 1 and turn to Step 7.
Step 9 Output H̄^t.
Step 10 End.
Remark 2. In Algorithm 2, the aim of Step 5 is to increase BCI(H̄^t) by setting H̄ as the improvement goal. The aim of Step 7 is to keep BCI(H̄^t) ≥ α, and to increase WCI(H̄^t) by setting B̄^t as the improvement goal.
The convergence of Algorithm 2 is established as follows.
Proposition 1. Algorithm 2 is convergent.
Proof. We prove this property in two parts. We first prove that Step 5 can make H̄^t close to H̄. This is easy to see from the adjustment rules in Step 5. As a result, H̄^t is close to H̄, and BCI(H̄^t) is close to CI(H̄) = 1. From α ≤ 1, we know that after finitely many rounds we have BCI(H̄^t) ≥ α. Note that at this stage we do not pay attention to WCI(H̄^t), since it will be considered in the next stage.
We then prove that Step 7 ensures that WCI(H̄^t) ≥ α while BCI(H̄^t) remains unchanged. Actually, the adjustment rules in Step 7 remove linguistic terms from one side of the elements of B̄^t, while the elements of B̄^t are still kept in H̄^t. Therefore, the process will result in WCI(H̄^t) ≥ α since BCI(H̄^t) ≥ α. Otherwise the process will continue until H̄^t = B̄^t, and then WCI(H̄^t) = CI(B̄^t) = BCI(H̄^t) ≥ α. This completes the proof.
To measure the adjustment efficiency, we define the adjustment rate. Let Λ = {H̄^t_{ij} | H̄^t_{ij} ≠ H_{ij}, (i, j) ∈ Ω}, which denotes the set of elements adjusted compared with the decision maker's original opinions.
Definition 9. The adjustment rate of Algorithm 2 is defined as follows:
\[ R\big(\bar{H}^t\big) = \frac{1}{2}\bigg( \frac{\#\Lambda}{\#\Omega} + \frac{1}{g \cdot \#\Lambda} \sum_{k=1}^{\#\Lambda} \big| E(\bar{H}^t_{ij}) - E(H_{ij}) \big| \bigg), \quad \bar{H}^t_{ij} \in \Lambda, \tag{9} \]
where # denotes the number of elements in a set.
In the above definition, the first part is the ratio of the number of adjusted elements to the number of originally known elements, and the second part is the deviation between the original opinions and the adjusted elements. The average of the two parts measures the adjustment efficiency in a precise way.
4.2. Numerical examples
We present some numerical examples to illustrate the method to improve the consistency of hesitant fuzzy linguistic preference relations.
Example 2. Continuing Example 1, assume that the consistency threshold is α = 0.9. Since WCI(H) = 0.5556 < α and BCI(H) = 1 > α, H needs adjustment. We apply Algorithm 2 to this example.
Set H̄^0 = H and B̄^0 = B. Since b̄^0_{13} = s_2 and H̄^0_{13} = {s_0, s_1, s_2}, we remove ⌊3/2⌋ = 1 linguistic term from H̄^0_{13}. Thus s_0 is removed from H̄^0_{13} because it is far from s_2. Similarly, we modify H̄^0_{14} to H̄^1_{14} = {s_3}, H̄^0_{23} to H̄^1_{23} = {s_0, ..., s_3}, and H̄^0_{24} to H̄^1_{24} = {s_1}. Then we obtain
\[ \bar{H}^1 = \begin{pmatrix} \{s_3\} & \{s_5\} & \{s_1, s_2\} & \{s_3\} \\ & \{s_3\} & \{s_0, \ldots, s_3\} & \{s_1\} \\ & & \{s_3\} & \{s_4\} \\ & & & \{s_3\} \end{pmatrix}. \]
By using model (M-3) we obtain WCI(H̄^1) = 0.7778 < α. We further adjust H̄^1_{13} → {s_2} and H̄^1_{23} → {s_0, s_1}, and H̄^1 is adjusted to the following form:
\[ \bar{H}^2 = \begin{pmatrix} \{s_3\} & \{s_5\} & \{s_2\} & \{s_3\} \\ & \{s_3\} & \{s_0, s_1\} & \{s_1\} \\ & & \{s_3\} & \{s_4\} \\ & & & \{s_3\} \end{pmatrix}. \]
By using model (M-3), we obtain WCI(H̄^2) = 0.9444 > α. Thus the consistency of H̄^2 is acceptable.
In the above example, the BCI of the HFLPR is high enough that only improvement of the WCI is needed. In the following we present an example showing the improvement process of both BCI and WCI.
Example 3. We consider the example in [23]. An IHFLPR is given as follows:
\[ H = \begin{pmatrix} \{s_3\} & \{s_4\} & \{s_4, s_5\} & x \\ & \{s_3\} & x & \{s_5, s_6\} \\ & & \{s_3\} & \{s_1\} \\ & & & \{s_3\} \end{pmatrix}. \]
Assume that the consistency threshold is again α = 0.9. In the following, Algorithm 2 is utilized to improve the consistency.
Step 1. By using models (M-3) and (M-4), we can obtain the lower and upper bounds of the missing elements as follows:
h^l_14 = s_0, h^u_14 = s_3, h^l_23 = s_0, h^u_23 = s_4. Therefore, we can obtain the HFLTSs
H14 = {s0 , . . . , s3 }, H23 = {s0 , . . . , s4 },
and the complete HFLPR
\[ H = \begin{pmatrix} \{s_3\} & \{s_4\} & \{s_4, s_5\} & \{s_0, \ldots, s_3\} \\ & \{s_3\} & \{s_0, \ldots, s_4\} & \{s_5, s_6\} \\ & & \{s_3\} & \{s_1\} \\ & & & \{s_3\} \end{pmatrix}. \]
Step 2. Compute WCI(H) and BCI(H): WCI(H) = 0.3889, BCI(H) = 0.8333.
It can be seen that BCI(H) < α. Turn to the next step.
Step 3. Set t = 0 and H̄^0 = H.
Step 4. Based on model (M-7), the following model (M-8) is constructed to calculate the FLPR H with CI(H) = 1:
\[ (M\text{-}8)\qquad \min\; \tfrac{1}{6}\Big[ \big|\Delta^{-1}(\bar{h}_{12}) - 4\big| + \big|\Delta^{-1}(\bar{h}_{13}) - 4x^1_{13} - 5x^2_{13}\big| + \big|\Delta^{-1}(\bar{h}_{14}) - 0\cdot x^1_{14} - 3x^2_{14}\big| + \big|\Delta^{-1}(\bar{h}_{23}) - 0\cdot x^1_{23} - 4x^2_{23}\big| + \big|\Delta^{-1}(\bar{h}_{24}) - 5x^1_{24} - 6x^2_{24}\big| + \big|\Delta^{-1}(\bar{h}_{34}) - 1\big| \Big] \]
\[ \text{s.t.}\quad 1 - \tfrac{1}{36} \sum_{i<j<k} \left|\Delta^{-1}(\bar{h}_{ij}) + \Delta^{-1}(\bar{h}_{jk}) - \Delta^{-1}(\bar{h}_{ik}) - 3\right| = 1, \quad x^1_{ij} + x^2_{ij} = 1,\; x^1_{ij}, x^2_{ij} \in \{0,1\}. \]
By solving this model, we obtain
\[ \bar{H} = \begin{pmatrix} s_3 & s_4 & s_5 & s_3 \\ & s_3 & s_4 & s_2 \\ & & s_3 & s_1 \\ & & & s_3 \end{pmatrix}. \]
In the following we summarize the adjustment process of H̄^t in Table 1.
Table 1: Adjustment process of H̄^t from t = 0.
t = 1: H̄^0_{13} = {s_4, s_5} → H̄^1_{13} = {s_5}, H̄^0_{24} = {s_5, s_6} → H̄^1_{24} = {s_4}; BCI = 0.8889.
t = 2: H̄^1_{24} = {s_4} → H̄^2_{24} = {s_3}; BCI = 0.9444.
When t = 2, BCI(H̄^2) = CI(B̄^2) = 0.9444 > α, where
\[ \bar{B}^2 = \begin{pmatrix} s_3 & s_4 & s_5 & s_3 \\ & s_3 & s_4 & s_3 \\ & & s_3 & s_1 \\ & & & s_3 \end{pmatrix}, \]
and WCI(H̄^2) = CI(Ā^2) = 0.5556 < α, where
\[ \bar{A}^2 = \begin{pmatrix} s_3 & s_4 & s_5 & s_0 \\ & s_3 & s_0 & s_3 \\ & & s_3 & s_1 \\ & & & s_3 \end{pmatrix}. \]
Step 5. Modify H̄^2 to increase WCI(H̄^2). The adjustment process is summarized in Table 2.
Table 2: Adjustment process of H̄^t from t = 2.
t = 3: H̄^2_{14} = {s_0, ..., s_3} → H̄^3_{14} = {s_2, s_3}, H̄^2_{23} = {s_0, ..., s_4} → H̄^3_{23} = {s_2, s_3, s_4}; WCI = 0.7778, BCI = 0.9444.
t = 4: H̄^3_{23} = {s_2, s_3, s_4} → H̄^4_{23} = {s_3, s_4}; WCI = 0.8333, BCI = 0.9444.
t = 5: H̄^4_{23} = {s_3, s_4} → H̄^5_{23} = {s_4}; WCI = 0.9444, BCI = 0.9444.
Since WCI(H̄^5) = BCI(H̄^5) = 0.9444 > α, the consistency of H̄^5 is acceptable. Therefore,
\[ \bar{H}^5 = \begin{pmatrix} \{s_3\} & \{s_4\} & \{s_5\} & \{s_3\} \\ & \{s_3\} & \{s_4\} & \{s_3\} \\ & & \{s_3\} & \{s_1\} \\ & & & \{s_3\} \end{pmatrix}, \]
and the adjustment rate of H̄^5 is
\[ R\big(\bar{H}^5\big) = \frac{1}{2}\Big(\frac{2}{4} + \frac{3}{12}\Big) = \frac{3}{8}. \]
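This value can be checked with a short Python sketch of Eq. (9) (our data layout: the originally known elements and their adjusted versions as dicts of term-index lists keyed by position):

```python
# Sketch of Eq. (9): adjustment rate. `original` and `adjusted` map positions
# (i, j) in Omega to HFLTSs given as lists of term indices.

def adjustment_rate(original, adjusted, g):
    def expect(H):                     # expected value of Eq. (1), as an index
        return sum(H) / len(H)
    changed = [k for k in original if adjusted[k] != original[k]]   # Lambda
    if not changed:
        return 0.0
    ratio = len(changed) / len(original)
    deviation = sum(abs(expect(adjusted[k]) - expect(original[k]))
                    for k in changed) / (g * len(changed))
    return 0.5 * (ratio + deviation)
```

Feeding in the four known elements of Example 3 and their values in H̄^5 (two of the four are changed, with expected-value deviations 0.5 and 2.5) reproduces R = 3/8.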
In order to compare the ranking order of our method with other methods, we compute the score of each alternative in a simple way, that is,
\[ Score(a_i) = \frac{1}{4} \sum_{j=1}^{4} E\big(\bar{H}^t_{ij}\big), \tag{10} \]
where H̄^t = (H̄^t_{ij})_{4×4} is the adjusted HFLPR with acceptable consistency. In this way, the scores of the alternatives are obtained as
Score(a1 , a2 , a3 , a4 ) = ((s4 , −0.25), (s2 , −0.5), (s4 , −0.5), (s3 , 0)).
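The 2-tuple values above use the standard symbolic-translation representation: a numerical score β in [0, g] is written as the closest term s_round(β) plus a residual α = β − round(β) in [−0.5, 0.5). A minimal sketch:

```python
def to_two_tuple(beta):
    """Represent a numerical score beta as a 2-tuple (term index i, translation alpha),
    i.e. (s_i, alpha) with alpha = beta - i in [-0.5, 0.5)."""
    i = round(beta)  # index of the closest linguistic term s_i
    return i, round(beta - i, 4)

# e.g. the score 3.75 of a1 above:
print(to_two_tuple(3.75))  # prints (4, -0.25), i.e. (s4, -0.25)
```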
Therefore, we obtain the ranking order a1 ≻ a3 ≻ a4 ≻ a2.

There are also some models dealing with IHFLPRs and IHFPRs. We will compare them with our model.

1) In [23], a method is introduced to derive a priority weighting vector from an IHFLPR, and this vector is used to construct an additive consistent FLPR; the subsequent computation is based on this FLPR. By using this method, an additive consistent FLPR based on H can be obtained as

$$
P_1^{*}=\begin{pmatrix}
s_3 & (s_3,0.25) & (s_6,-0.25) & (s_4,0.5)\\
 & s_3 & (s_5,0.5) & (s_4,0.25)\\
 & & s_3 & (s_2,-0.25)\\
 & & & s_3
\end{pmatrix}.
$$

It is easy to obtain the adjustment rate R(P_1^*) = 7/12 and the consistency WCI(P_1^*) = BCI(P_1^*) = 1. This method maximizes consistency from the given elements of the IHFLPR and computes a new additive consistent FLPR; it needs neither to compute missing elements nor to improve consistency. However, the original information is not kept in the obtained FLPR. By using the same score computation, we obtain
Score(a1 , a2 , a3 , a4 ) = ((s4 , 0.125), (s4 , −0.125), (s1 , 0.4375), (s3 , −0.375)).
Therefore, the ranking order is a1 ≻ a2 ≻ a4 ≻ a3.

2) In [47], two GDM methods for IHFPRs based on multiplicative consistency are introduced. The first derives the priority weights directly, using a method similar to that in [32]. The second estimates the missing elements of the IHFPRs and then computes the alternatives' weights to obtain a ranking order. Setting ζ = 1, the complete HFLPR is obtained as

$$
H=\begin{pmatrix}
\{s_3\} & \{s_4,s_4\} & \{s_4,s_5\} & \{(s_4,0.23),(s_5,-0.5)\}\\
 & \{s_3\} & \{(s_4,0.38),(s_5,0.14)\} & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_1,s_1\}\\
 & & & \{s_3\}
\end{pmatrix}.
$$
But the steps in [47] cannot be applied, since 0 appears in one element of the HFPR. We therefore use our method, which yields the ranking a1 ≻ a2 ≻ a4 ≻ a3. The adjustment rate is R(H) = 0, and the consistency indexes are BCI(H) = 0.8333 and WCI(H) = 0.7222. If the consistency threshold is α = 0.9, then the consistency of H is unacceptable. This method uses a normalization process to estimate the missing elements, in which the estimated elements all have the same cardinality. No consistency improvement method is provided in this model. We summarize the main comparison results in Table 3.
Table 3: Main comparison results of the methods dealing with IHFPRs or IHFLPRs.

  Method             Scheme   Ranking order        WCI      Adjustment rate
  Song and Hu [23]   IHFLPR   a1 ≻ a2 ≻ a4 ≻ a3    1        7/12
  Zhang [47]         IHFPR    a1 ≻ a2 ≻ a4 ≻ a3    0.7222   0
  Our method         IHFLPR   a1 ≻ a4 ≻ a2 ≻ a3    0.9444   3/8
From Table 3, we can see that the methods obtain the same best alternative, but the ranking orders may differ. The advantage of our model is that the consistency of the obtained HFLPR is acceptable and most of the original information is kept. In the adjustment process, although some linguistic terms in an HFLTS are removed, the remaining linguistic terms still lie within the scope of the original HFLTS; in this sense, part of the decision makers' information is kept. Additionally, some elements are modified to another linguistic term, but we try to make the adjusted linguistic term close to the original ones. Decision makers are more likely to accept this than to modify their opinions to linguistic terms far from the original ones. Some models, such as [23], can also ensure that the additive consistency is acceptable, but their adjustment rate is higher than that of our model, so less of the original information is kept. Other models can calculate the missing elements while completely keeping the original information, so the adjustment rate is very low and no adjustment process is needed; however, the additive consistency may then be unacceptable for a given consistency threshold. For example, in [47] we have BCI(H) = 0.8333 and WCI(H) = 0.7222, which are lower than a consistency threshold α = 0.9.

Example 4. Let S = {s0, . . . , s6} be a linguistic term set. Consider the
following IHFLPR

$$
H=\begin{pmatrix}
\{s_3\} & \{s_5,s_6\} & x & \{s_2\} & \{s_2\}\\
 & \{s_3\} & \{s_3,s_4\} & x & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_1,s_2\} & \{s_4\}\\
 & & & \{s_3\} & \{s_1\}\\
 & & & & \{s_3\}
\end{pmatrix}.
$$
To calculate the missing elements, we apply Algorithm 1 and obtain the following HFLPR:

$$
H=\begin{pmatrix}
\{s_3\} & \{s_5,s_6\} & \{s_0,\ldots,s_3\} & \{s_2\} & \{s_2\}\\
 & \{s_3\} & \{s_3,s_4\} & \{s_0,s_1,s_2\} & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_1,s_2\} & \{s_4\}\\
 & & & \{s_3\} & \{s_1\}\\
 & & & & \{s_3\}
\end{pmatrix},
$$
and WCI(H) = 0.5778, BCI(H) = 0.7444. In the following, we consider the consistency of H for several different values of α.

Case 1: α = 0.5. Since WCI(H) > α, the consistency of H is acceptable. We also obtain R(H) = 0 and the number of associated FLPRs N_H = 192.

Case 2: α = 0.6. Since WCI(H) < α < BCI(H), WCI(H) needs improvement. By modifying H_13 → H_13^1 = {s2, s3}, we obtain

$$
H^1=\begin{pmatrix}
\{s_3\} & \{s_5,s_6\} & \{s_2,s_3\} & \{s_2\} & \{s_2\}\\
 & \{s_3\} & \{s_3,s_4\} & \{s_0,s_1,s_2\} & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_1,s_2\} & \{s_4\}\\
 & & & \{s_3\} & \{s_1\}\\
 & & & & \{s_3\}
\end{pmatrix},
$$
and WCI(H^1) = 0.6222 > α. We also obtain R(H^1) = 0 and the number of associated FLPRs N_{H^1} = 96.

Case 3: α = 0.7.
Since WCI(H) < α < BCI(H), WCI(H) needs improvement. By applying Algorithm 2, we obtain

$$
H^2=\begin{pmatrix}
\{s_3\} & \{s_5\} & \{s_3\} & \{s_2\} & \{s_2\}\\
 & \{s_3\} & \{s_3\} & \{s_2\} & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_2\} & \{s_4\}\\
 & & & \{s_3\} & \{s_1\}\\
 & & & & \{s_3\}
\end{pmatrix},
$$
and WCI(H^2) = 0.7111 > α. We also obtain R(H^2) = 23/120 and the number of associated FLPRs N_{H^2} = 2.

Case 4: α = 0.8. Since BCI(H) < α, both WCI(H) and BCI(H) need improvement. By applying Algorithm 2, we obtain

$$
H^3=\begin{pmatrix}
\{s_3\} & \{s_5\} & \{s_3\} & \{s_2\} & \{s_2\}\\
 & \{s_3\} & \{s_3\} & \{s_2\} & \{s_5\}\\
 & & \{s_3\} & \{s_2\} & \{s_4\}\\
 & & & \{s_3\} & \{s_3\}\\
 & & & & \{s_3\}
\end{pmatrix},
$$
and WCI(H^3) = 0.8111 > α. We also obtain R(H^3) = 1/3 and the number of associated FLPRs N_{H^3} = 1.

Case 5: α = 0.9. Since BCI(H) < α, both WCI(H) and BCI(H) need improvement. By applying Algorithm 2, we obtain

$$
H^4=\begin{pmatrix}
\{s_3\} & \{s_5\} & \{s_3\} & \{s_2\} & \{s_2\}\\
 & \{s_3\} & \{s_3\} & \{s_2\} & \{s_3\}\\
 & & \{s_3\} & \{s_2\} & \{s_2\}\\
 & & & \{s_3\} & \{s_3\}\\
 & & & & \{s_3\}
\end{pmatrix},
$$
and WCI(H^4) = 0.9 = α. We also obtain R(H^4) = 13/30 and the number of associated FLPRs N_{H^4} = 1.

To observe the adjustment rate and the number of associated FLPRs for different values of α, we provide the results in Figures 2 and 3. Note that R(H) denotes the adjustment rate of the adjusted HFLPRs, and N_H denotes the number of FLPRs associated with the adjusted HFLPRs.
Figure 2: Values of R(H) for different α.
Figure 3: Values of N_H for different α.
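The values WCI(H) = 0.5778, BCI(H) = 0.7444 and N_H = 192 in Example 4 can be checked by brute-force enumeration of all FLPRs associated with the completed HFLPR. A minimal sketch (the normalization CI(B) = 1 − Σ_{i<j<k}|b_ij + b_jk − b_ik − g/2| / (1.5·g·C(n,3)) is inferred from the numbers of the example for g = 6):

```python
from itertools import combinations, product
from math import comb

G = 6  # granularity: S = {s_0, ..., s_6}

def ci(b):
    """Additive-consistency index of an FLPR b[i][j] = Delta^{-1}(h_ij)."""
    n = len(b)
    dev = sum(abs(b[i][j] + b[j][k] - b[i][k] - G / 2)
              for i, j, k in combinations(range(n), 3))
    return 1 - dev / (1.5 * G * comb(n, 3))  # maximum deviation per triple is 1.5*G

# upper triangle of the completed HFLPR of Example 4 (term subscripts)
upper = {(0, 1): [5, 6], (0, 2): [0, 1, 2, 3], (0, 3): [2], (0, 4): [2],
         (1, 2): [3, 4], (1, 3): [0, 1, 2], (1, 4): [5, 6],
         (2, 3): [1, 2], (2, 4): [4], (3, 4): [1]}

keys = sorted(upper)
cis = []
for choice in product(*(upper[k] for k in keys)):  # one associated FLPR per choice
    b = [[G / 2] * 5 for _ in range(5)]
    for (i, j), v in zip(keys, choice):
        b[i][j], b[j][i] = v, G - v                # reciprocity
    cis.append(ci(b))

print(len(cis), round(min(cis), 4), round(max(cis), 4))  # prints 192 0.5778 0.7444
```

The minimum and maximum over the 192 associated FLPRs reproduce WCI(H) and BCI(H) exactly.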
From Figures 2 and 3, we can see that the consistency improvement method decreases the hesitancy of the HFLPR. For a HFLPR, larger values of α require more adjustment rounds; therefore, larger adjustment rates and smaller numbers of associated FLPRs are obtained from the adjusted HFLPR. The reason is that high hesitancy of a HFLPR produces a low WCI, since high hesitancy produces more associated FLPRs than low hesitancy, and as a result more inconsistent cases occur.

5. A GDM model based on IHFLPRs

In GDM with uncertainty and hesitation, decision makers sometimes provide their preferences in the form of IHFLPRs. In [23], a GDM method is introduced to select the best solution, considering the situation in which the weights of the decision makers are known. In this paper we consider a different situation, in which the weights of the decision makers are unknown and are determined by each decision maker's proximity to the other decision makers.
5.1. GDM model based on IHFLPRs
We first present the scheme of the GDM. Suppose that a group of decision makers E = {e1, . . . , em} provide their preferences over a set of alternatives A = {a1, . . . , an} in the form of IHFLPRs H^(k) = (H_ij^(k))_{n×n}, k = 1, 2, . . . , m. Since this problem involves incomplete information, we solve it in three steps: completion of the IHFLPRs and consistency improvement, aggregation, and selection.

Step 1 Completion of the IHFLPRs and consistency improvement process.
In this process, the missing elements of the IHFLPRs are computed and the additive consistency is improved, as described in Sections 3 and 4. The other two processes are then conducted on the obtained HFLPRs H̄^(k) = (H̄_ij^(k))_{n×n}, k = 1, 2, . . . , m.
Step 2 Aggregation process.

This process aims to aggregate the individual HFLPRs into a collective HFLPR. To compute the weights of the decision makers, we take the view that the closer one decision maker is to the other decision makers, the bigger his/her weight should be. The proximity, measuring the closeness of decision maker e_k to the other decision makers, can be defined as

$$
P_k = 1-\frac{2}{gn(n-1)(m-1)}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{l=1,\,l\neq k}^{m}\Big|E\big(\overline{H}^{(k)}_{ij}\big)-E\big(\overline{H}^{(l)}_{ij}\big)\Big|. \qquad (11)
$$

The weight of decision maker e_k can then be calculated in a very simple way [10]:

$$
w_k = P_k\Big/\sum_{l=1}^{m}P_l. \qquad (12)
$$
It is noted that there are also other ways to integrate the decision makers' consistency into the calculation of the weights, such as the Regular Increasing Monotone (RIM) quantifier [10, 16–18, 38, 39]. Finally, the collective assessment of each alternative is computed as

$$
\mathrm{Score}(a_i)=\frac{1}{n}\sum_{j=1}^{n}\sum_{k=1}^{m}w_k\,E\big(\overline{H}^{(k)}_{ij}\big),\quad i=1,2,\ldots,n. \qquad (13)
$$

Step 3 Selection process.

In the selection process, the Score(a_i), i = 1, . . . , n, are ranked and the best alternative is selected.
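The aggregation and selection steps, Eqs. (11)–(13), can be sketched as follows (a minimal sketch; the expectation E(·) of an HFLTS is assumed here to be given as a precomputed numerical matrix per expert):

```python
def proximity(mats, k, g):
    """Eq. (11): proximity of expert k, computed from the upper triangles of
    the expectation matrices mats[l][i][j]."""
    n, m = len(mats[k]), len(mats)
    dev = sum(abs(mats[k][i][j] - mats[l][i][j])
              for i in range(n - 1) for j in range(i + 1, n)
              for l in range(m) if l != k)
    return 1 - 2 * dev / (g * n * (n - 1) * (m - 1))

def weights(mats, g=6):
    """Eq. (12): weights as normalized proximities."""
    p = [proximity(mats, k, g) for k in range(len(mats))]
    return [pk / sum(p) for pk in p]

def scores(mats, w):
    """Eq. (13): weighted collective score of each alternative."""
    n = len(mats[0])
    return [sum(w[k] * mats[k][i][j] for j in range(n) for k in range(len(mats))) / n
            for i in range(n)]
```

On a toy instance, an expert who agrees with the others receives a larger weight than an outlier, and the weights sum to 1 by construction.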
5.2. Illustrative example
In the following we provide an example to illustrate the computational process of the proposed method.

Example 5. We use the example in [23]. Suppose that four experts from an investment company assess four projects A = {a1, a2, a3, a4} to determine the best project to invest in. They use HFLPRs but give only the comparisons they are most confident about, as HFLTSs; the remaining elements are missing. The assessments are as follows:

$$
H^{(1)}=\begin{pmatrix}
\{s_3\} & \{s_4\} & \{s_4,s_5\} & x\\
 & \{s_3\} & x & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};
$$
$$
H^{(2)}=\begin{pmatrix}
\{s_3\} & x & \{s_3\} & x\\
 & \{s_3\} & \{s_5\} & \{s_2\}\\
 & & \{s_3\} & \{s_1,s_2\}\\
 & & & \{s_3\}
\end{pmatrix};\qquad
H^{(3)}=\begin{pmatrix}
\{s_3\} & \{s_2\} & x & \{s_3,s_4,s_5\}\\
 & \{s_3\} & x & \{s_2\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};
$$

$$
H^{(4)}=\begin{pmatrix}
\{s_3\} & x & \{s_5\} & \{s_3,s_4\}\\
 & \{s_3\} & \{s_2\} & x\\
 & & \{s_3\} & \{s_4,s_5\}\\
 & & & \{s_3\}
\end{pmatrix}.
$$
Step 1. The IHFLPRs are completed as follows:

$$
\overline{H}^{(1)}=\begin{pmatrix}
\{s_3\} & \{s_4\} & \{s_4,s_5\} & \{s_0,\ldots,s_3\}\\
 & \{s_3\} & \{s_0,\ldots,s_4\} & \{s_5,s_6\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};\qquad
\overline{H}^{(2)}=\begin{pmatrix}
\{s_3\} & \{s_2,\ldots,s_6\} & \{s_3\} & \{s_0,s_1\}\\
 & \{s_3\} & \{s_5\} & \{s_2\}\\
 & & \{s_3\} & \{s_1,s_2\}\\
 & & & \{s_3\}
\end{pmatrix};
$$

$$
\overline{H}^{(3)}=\begin{pmatrix}
\{s_3\} & \{s_2\} & \{s_0,\ldots,s_5\} & \{s_3,s_4,s_5\}\\
 & \{s_3\} & \{s_4,s_5,s_6\} & \{s_2\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};\qquad
\overline{H}^{(4)}=\begin{pmatrix}
\{s_3\} & \{s_0,\ldots,s_6\} & \{s_5\} & \{s_3,s_4\}\\
 & \{s_3\} & \{s_2\} & \{s_0,\ldots,s_3\}\\
 & & \{s_3\} & \{s_4,s_5\}\\
 & & & \{s_3\}
\end{pmatrix}.
$$
Here we also set the consistency threshold α = 0.9. In Example 3 we considered the consistency improvement process of H^(1). In a similar way we can improve the additive consistency of the other IHFLPRs to make them acceptable. The improved HFLPRs are as follows:

$$
\widetilde{H}^{(1)}=\begin{pmatrix}
\{s_3\} & \{s_4\} & \{s_5\} & \{s_3\}\\
 & \{s_3\} & \{s_4\} & \{s_3\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};\qquad
\widetilde{H}^{(2)}=\begin{pmatrix}
\{s_3\} & \{s_2\} & \{s_3\} & \{s_1\}\\
 & \{s_3\} & \{s_5\} & \{s_2\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};
$$

$$
\widetilde{H}^{(3)}=\begin{pmatrix}
\{s_3\} & \{s_3\} & \{s_5\} & \{s_3\}\\
 & \{s_3\} & \{s_5\} & \{s_2\}\\
 & & \{s_3\} & \{s_1\}\\
 & & & \{s_3\}
\end{pmatrix};\qquad
\widetilde{H}^{(4)}=\begin{pmatrix}
\{s_3\} & \{s_5,s_6\} & \{s_5\} & \{s_5\}\\
 & \{s_3\} & \{s_2\} & \{s_3\}\\
 & & \{s_3\} & \{s_4\}\\
 & & & \{s_3\}
\end{pmatrix}.
$$
The WCI of the improved HFLPRs are: WCI(H̃^(i)) = 0.9444 > α, i = 1, 2, 3, 4.
Step 2. We can obtain that
(P1 , P2 , P3 , P4 ) = (0.82, 0.78, 0.82, 0.69),
and thus
(w1 , w2 , w3 , w4 ) = (0.2637, 0.2508, 0.2637, 0.2219).
The collective FLPR is then obtained as

$$
\widetilde{H}^{c}=\begin{pmatrix}
s_3 & (s_4,-0.43) & (s_5,-0.50) & (s_3,-0.06)\\
 & s_3 & (s_4,0.07) & (s_2,0.49)\\
 & & s_3 & (s_2,-0.33)\\
 & & & s_3
\end{pmatrix}.
$$
Step 3. The score vector of alternatives is Score(a1 , a2 , a3 , a4 ) = ((s4 , −0.5), (s3 , 0), (s2 , 0.03), (s3 , 0.48)) .
Therefore, the ranking of alternatives is a1 ≻ a4 ≻ a2 ≻ a3 and the best project is a1.
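As a numerical check, the collective scores can be recomputed from the improved HFLPRs and the weights above (a sketch; E(·) is taken as the mean of the term subscripts, reciprocal lower-triangle entries are filled as 6 − e_ij, and small deviations from the printed 2-tuples arise from rounding of the weights):

```python
# upper triangles (h12, h13, h14, h23, h24, h34) of the improved HFLPRs,
# as expectation values; H~(4) has h12 = {s5, s6}, hence E = 5.5
uppers = [
    [4, 5, 3, 4, 3, 1],    # H~(1)
    [2, 3, 1, 5, 2, 1],    # H~(2)
    [3, 5, 3, 5, 2, 1],    # H~(3)
    [5.5, 5, 5, 2, 3, 4],  # H~(4)
]
w = [0.2637, 0.2508, 0.2637, 0.2219]  # expert weights from Eq. (12)

def full(upper, g=6, n=4):
    """Full reciprocal expectation matrix from an upper triangle."""
    e = [[g / 2] * n for _ in range(n)]
    it = iter(upper)
    for i in range(n):
        for j in range(i + 1, n):
            e[i][j] = next(it)
            e[j][i] = g - e[i][j]  # reciprocity
    return e

mats = [full(u) for u in uppers]
# Eq. (13): collective score of each alternative
scores = [sum(w[k] * mats[k][i][j] for j in range(4) for k in range(4)) / 4
          for i in range(4)]
# the resulting order is a1 > a4 > a2 > a3, matching the ranking above
```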
In [23] the result is a1 ≻ a2 ≻ a4 ≻ a3 and the best project is also a1. The different ranking order is due to the different methods used in the two models. In [23], each IHFLPR is transformed into an additive consistent FLPR without an adjustment process, whereas in our method the HFLPRs are adjusted until their additive consistency is acceptable. Additionally, the weights of the decision makers in [23] are given as (w1, . . . , w4) = (0.4, 0.3, 0.2, 0.1), while our method calculates the weights from the proximity measures of the experts. If we also use the weights of [23], then the ranking order is a4 ≻ a1 ≻ a2 ≻ a3, which differs from that of [23]. Acceptable consistency appears to be more flexible, since different consistency thresholds can be selected, and more acceptable to decision makers, since total consistency seems more difficult to achieve.
5.3. Discussion
It is noteworthy that our model does not consider consensus. Here we explain the reasons. Since decision makers with higher weights play a more important role in consensus, we can use the following equation to calculate the consensus degree:

$$
CD = 1-\frac{2}{gn(n-1)m(m-1)}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{k=1}^{m-1}\sum_{l=k+1}^{m}(w_k+w_l)\Big|\phi^{(k)}_{ij}-\phi^{(l)}_{ij}\Big|, \qquad (14)
$$

where φ_ij^(k) = ∆⁻¹(E(H̃_ij^(k))), k = 1, . . . , m, i, j = 1, . . . , n. The consensus degree of H̃^(1), . . . , H̃^(4) is then obtained as

$$
CD\big(\widetilde{H}^{(1)},\ldots,\widetilde{H}^{(4)}\big)=0.8806.
$$
If we set a consensus threshold of 0.9, then the complete HFLPRs need adjustment. What we want is to make each HFLPR close to the other HFLPRs. Motivated by [43], we adjust a HFLPR using the weighted average of all HFLPRs. For example, to adjust H̃_12^(2) = {s2}, we try to make it close to the arithmetic mean of the H̃_12^(k), k = 1, 3, 4, that is, (s4, 0.17). Note that here we set the weight of H̃_12^(2) to 0 and the weights of the H̃_12^(k), k = 1, 3, 4, to 1/3, so as to make H̃_12^(2) close to the other HFLPRs. For example, let H̃_12^(2) → {s3}; then the consensus becomes CD = 0.8876 > 0.8806, which means that the consensus degree increases after this step. But this makes WCI(H̃^(2)) = 0.8889 < 0.9, which means that its consistency becomes unacceptable. Similar results are obtained if we first adjust consensus and then adjust consistency. From this example we know that adjusting consistency (or consensus) first and then adjusting the other is infeasible for improving both. In [27], a method is proposed that improves the additive consistency of each HFLPR in a first step and then improves the consensus of the HFLPRs in a second step; however, their additive consistency is based on the expectation values of the HFLTSs, while our method is based on the WCI of the HFLPR. Additionally, as they state, their method cannot ensure that the consistency remains acceptable after the improvement of consensus. In [36], a similar consistency and consensus improvement method for HFLPRs is introduced, following similar steps to [27] but with different details. Whether the consistency remains acceptable after the consensus improvement is not discussed there; although in their illustrative example the consistency is still acceptable after the consensus improvement, this might not hold in general. How to apply the existing methods [4, 8, 10, 26, 29, 41–44] to improve both consistency and consensus such that both are acceptable is a difficult but interesting topic that deserves future study.

6. Conclusions
The HFLTSs facilitate decision makers' elicitation of hesitant and fuzzy assessments. Combining HFLTSs with FLPRs provides a useful tool for decision makers to compare alternatives under hesitation. Due to knowledge insufficiency or other reasons, IHFLPRs are sometimes used. Building on existing research results on IHFLPRs, we propose a new approach to estimate missing elements and improve additive consistency, and we further apply this method in GDM. The main results of this paper are summarized as follows:
1) A method to compute missing elements is proposed by computing the lower and upper bounds of the missing elements. The obtained elements are in the form of HFLTSs, which fits well with the situation using HFLPRs.
2) An algorithm to improve the additive consistency of an IHFLPR is introduced. The method tries to keep the decision makers' original opinions during the adjustment process.
3) A GDM method based on IHFLPRs is proposed. In this GDM, the proximity of each decision maker to the other decision makers is utilized to compute the decision maker's weight.
There are still many problems to be investigated, including the aforementioned joint consistency and consensus improvement in GDM [43], and the application of the proposed method to FLPRs based on probabilistic linguistic term sets and linguistic distributions.

Acknowledgements

This work is supported by the Key Scientific Research Funds of Henan Provincial Department of Education (16A630038); the National Natural Science Foundation of China (11872175); and the Doctoral Research Start-up Funding Projects of Zhengzhou University of Light Industry and Henan University of Economics and Law (BSJJ2013053, 800234).
References

[1] S. Alonso, F. Cabrerizo, F. Chiclana, F. Herrera, E. Herrera-Viedma, Group decision making with incomplete fuzzy linguistic preference relations, International Journal of Intelligent Systems 24 (2009) 201–222.
[2] S. Alonso, F. Chiclana, F. Herrera, E. Herrera-Viedma, J. Alcalá-Fdez, C. Porcel, A consistency-based procedure to estimate missing pairwise preference values, International Journal of Intelligent Systems 23 (2008) 155–175.
[3] S. Alonso, E. Herrera-Viedma, F. Chiclana, F. Herrera, A web based consensus support system for group decision making problems and incomplete preferences, Information Sciences 180 (2010) 4477–4495.
[4] F. Chiclana, F. Mata, L. Martínez, E. Herrera-Viedma, S. Alonso, Integration of a consistency control module within a consensus model, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 16 (2008) 35–53.
[5] Y.C. Dong, C.C. Li, F. Chiclana, E. Herrera-Viedma, Average-case consistency measurement and analysis of interval-valued reciprocal preference relations, Knowledge-Based Systems 114 (2016) 108–117.
[6] Y.C. Dong, C.C. Li, F. Herrera, Connecting the linguistic hierarchy and the numerical scale for the 2-tuple linguistic model and its use to deal with hesitant unbalanced linguistic information, Information Sciences 367-368 (2016) 259–278.
[7] Z.W. Gong, Least-square method to priority of the fuzzy preference relations with incomplete information, International Journal of Approximate Reasoning 47 (2008) 258–264.
[8] F. Herrera, E. Herrera-Viedma, J. Verdegay, A rational consensus model in group decision making using linguistic assessments, Fuzzy Sets and Systems 88 (1997) 31–49.
[9] F. Herrera, L. Martínez, A 2-tuple fuzzy linguistic representation model for computing with words, IEEE Transactions on Fuzzy Systems 8 (2000) 746–752.
[10] E. Herrera-Viedma, S. Alonso, F. Chiclana, F. Herrera, A consensus model for group decision making with incomplete fuzzy preference relations, IEEE Transactions on Fuzzy Systems 15 (2007) 863–877.
[11] E. Herrera-Viedma, F. Chiclana, F. Herrera, S. Alonso, Group decision-making model with incomplete fuzzy preference relations based on additive consistency, IEEE Transactions on Systems, Man & Cybernetics, Part B: Cybernetics 37 (2007) 176–189.
[12] C.C. Li, Y.C. Dong, F. Herrera, E. Herrera-Viedma, L. Martínez, Personalized individual semantics in computing with words for supporting linguistic group decision making. An application on consensus reaching, Information Fusion 33 (2017) 29–40.
[13] C.C. Li, R.M. Rodríguez, F. Herrera, L. Martínez, Y.C. Dong, Consistency of hesitant fuzzy linguistic preference relations: an interval consistency index, Information Sciences 432 (2018) 347–361.
[14] C.C. Li, R.M. Rodríguez, L. Martínez, Y.C. Dong, F. Herrera, Personalized individual semantics based on consistency in hesitant linguistic group decision making with comparative linguistic expressions, Knowledge-Based Systems 145 (2018) 156–165.
[15] H.B. Liu, J.F. Cai, L. Jiang, On improving the additive consistency of the fuzzy preference relations based on comparative linguistic expressions, International Journal of Intelligent Systems 29 (2014) 544–559.
[16] X.W. Liu, S.L. Han, Orness and parameterized RIM quantifier aggregation with OWA operators: a summary, International Journal of Approximate Reasoning 48 (2008) 77–97.
[17] I. Palomares, J. Liu, Y. Xu, Modelling experts' attitudes in group decision making, Soft Computing 16 (2012) 1755–1766.
[18] I. Palomares, R.M. Rodríguez, L. Martínez, An attitude-driven web consensus support system for heterogeneous group decision making, Expert Systems with Applications 40 (2013) 139–149.
[19] R.M. Rodríguez, B. Bedregal, H. Bustince, Y.C. Dong, B. Farhadinia, C. Kahraman, L. Martínez, V. Torra, Y.J. Xu, Z.S. Xu, F. Herrera, A position and perspective analysis of hesitant fuzzy sets on information fusion in decision making. Towards high quality progress, Information Fusion 29 (2016) 89–97.
[20] R.M. Rodríguez, L. Martínez, F. Herrera, Hesitant fuzzy linguistic term sets for decision making, IEEE Transactions on Fuzzy Systems 20 (2012) 109–119.
[21] R.M. Rodríguez, L. Martínez, F. Herrera, A group decision making model dealing with comparative linguistic expressions based on hesitant fuzzy linguistic term sets, Information Sciences 241 (2014) 28–42.
[22] R.M. Rodríguez, L. Martínez, V. Torra, Z.S. Xu, F. Herrera, Hesitant fuzzy sets: state of the art and future directions, International Journal of Intelligent Systems 29 (2014) 495–524.
[23] Y.M. Song, J. Hu, A group decision-making model based on incomplete comparative expressions with hesitant linguistic terms, Applied Soft Computing 59 (2017) 174–181.
[24] V. Torra, Hesitant fuzzy sets, International Journal of Intelligent Systems 25 (2010) 529–539.
[25] V. Torra, Y. Narukawa, On hesitant fuzzy sets and decision, in: Proc. FUZZ-IEEE 2009, IEEE, 2009, pp. 1378–1382.
[26] Y.Z. Wu, C.C. Li, X. Chen, Y.C. Dong, Group decision making based on linguistic distributions and hesitant assessments: Maximizing the support degree with an accuracy constraint, Information Fusion 41 (2018) 151–160.
[27] Z.B. Wu, J.P. Xu, Managing consistency and consensus in group decision making with hesitant fuzzy linguistic preference relations, Omega 65 (2016) 28–40.
[28] Z.B. Wu, J.P. Xu, Possibility distribution-based approach for MAGDM with hesitant fuzzy linguistic information, IEEE Transactions on Cybernetics 46 (2016) 694–705.
[29] Z.B. Wu, J.P. Xu, A consensus model for large-scale group decision making with hesitant fuzzy information and changeable clusters, Information Fusion 41 (2018) 217–231.
[30] Y.J. Xu, F.J. Cabrerizo, E. Herrera-Viedma, A consensus model for hesitant fuzzy preference relations and its application in water allocation management, Applied Soft Computing 58 (2017) 265–284.
[31] Y.J. Xu, L. Chen, K. Li, H.M. Wang, A chi-square method for priority derivation in group decision making with incomplete reciprocal preference relations, Information Sciences 306 (2015) 166–179.
[32] Y.J. Xu, L. Chen, R. Rodríguez, F. Herrera, H. Wang, Deriving the priority weights from incomplete hesitant fuzzy preference relations in group decision making, Knowledge-Based Systems 99 (2016) 71–78.
[33] Y.J. Xu, C.Y. Li, X.W. Wen, Missing values estimation and consensus building for incomplete hesitant fuzzy preference relations with multiplicative consistency, International Journal of Computational Intelligence Systems 11 (2018) 101–119.
[34] Y.J. Xu, R. Patnayakuni, H.M. Wang, Logarithmic least squares method to priority for group decision making with incomplete fuzzy preference relations, Applied Mathematical Modelling 37 (2013) 2139–2152.
[35] Y.J. Xu, H.M. Wang, Eigenvector method, consistency test and inconsistency repairing for an incomplete fuzzy preference relation, Applied Mathematical Modelling 37 (2013) 5171–5183.
[36] Y.J. Xu, X.W. Wen, H. Sun, H.M. Wang, Consistency and consensus models with local adjustment strategy for hesitant fuzzy linguistic preference relations, International Journal of Fuzzy Systems, DOI: 10.1007/s40815-017-0438-3 (2018).
[37] Z.S. Xu, A method based on linguistic aggregation operators for group decision making with linguistic preference relations, Information Sciences 166 (2004) 19–30.
[38] R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Transactions on Systems, Man, and Cybernetics 18 (1988) 183–190.
[39] R.R. Yager, Quantifier guided aggregation using OWA operators, International Journal of Intelligent Systems 11 (1996) 49–73.
[40] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338–353.
[41] B.W. Zhang, H.M. Liang, G.Q. Zhang, Reaching a consensus with minimum adjustment in MAGDM with hesitant fuzzy linguistic term sets, Information Fusion 42 (2017) 12–23.
[42] G.Q. Zhang, Y.C. Dong, Y.F. Xu, Linear optimization modeling of consistency issues in group decision making based on fuzzy preference relations, Expert Systems with Applications 39 (2012) 2415–2420.
[43] G.Q. Zhang, Y.C. Dong, Y.F. Xu, Consistency and consensus measures for linguistic preference relations based on distribution assessments, Information Fusion 17 (2014) 46–55.
[44] H.J. Zhang, Y.C. Dong, E. Herrera-Viedma, Consensus building for the heterogeneous large-scale GDM with the individual concerns and satisfactions, IEEE Transactions on Fuzzy Systems 26 (2018) 884–898.
[45] Z. Zhang, X. Kou, W.Y. Yu, C.H. Guo, On priority weights and consistency for incomplete hesitant fuzzy preference relations, Knowledge-Based Systems 143 (2018) 115–126.
[46] Z. Zhang, X.Y. Kou, Q.X. Dong, Additive consistency analysis and improvement for hesitant fuzzy preference relations, Expert Systems with Applications 98 (2018) 118–128.
[47] Z.M. Zhang, Deriving the priority weights from incomplete hesitant fuzzy preference relations based on multiplicative consistency, Applied Soft Computing 46 (2016) 37–59.
[48] Z.M. Zhang, C. Wu, On the use of multiplicative consistency in hesitant fuzzy linguistic preference relations, Knowledge-Based Systems 72 (2014) 13–27.
[49] W. Zhou, Z.S. Xu, Probability calculation and element optimization of probabilistic hesitant fuzzy preference relations based on expected consistency, IEEE Transactions on Fuzzy Systems 26 (2018) 1367–1378.
[50] B. Zhu, Z.S. Xu, Consistency measures for hesitant fuzzy linguistic preference relations, IEEE Transactions on Fuzzy Systems 22 (2014) 35–45.