Hesitant fuzzy set lexicographical ordering and its application to multi-attribute decision making

B. Farhadinia, Department of Mathematics, Quchan University of Advanced Technology, Iran

Information Sciences 327 (2016) 233–245

Article history: Received 10 August 2014; Revised 3 July 2015; Accepted 29 July 2015; Available online 22 August 2015

Abstract: There exist several types of hesitant fuzzy set (HFS) ranking techniques that have been widely used for handling multi-attribute decision making problems with HFS information. The main goal of this paper is first to present a brief study of some existing HFS ranking techniques, emphasizing their counterintuitive examples, and then to introduce a novel HFS ranking technique based on the idea of lexicographical ordering.

Keywords: Hesitant fuzzy set; Lexicographical ordering; Multi-attribute decision making

1. Introduction

Since the hesitant fuzzy set (HFS) was introduced originally by Torra [16] to deal with uncertainty, more and more multi-attribute decision making theories and methods with hesitant fuzzy information have been developed [3,12,18,19]. The concept of HFS is an extension of fuzzy set in which the membership degree of a given element, called the hesitant fuzzy element (HFE), is defined as a set of possible values. This situation arises naturally in group decision making. To clarify the necessity of introducing HFSs, consider a situation in which two decision makers discuss the membership degree of an element x to a set A. One wants to assign 0.2, but the other 0.4. Accordingly, the difficulty of establishing a common membership degree is not because there is a margin of error (as in intuitionistic fuzzy sets [1]) or some possibility distribution of values (as in type-2 fuzzy sets [4]), but because there is a set of possible values.

Keeping in mind the concept of HFS, it is sometimes difficult for experts to express the membership degrees of an element to a given set only by crisp values between 0 and 1. To overcome this limitation, different extensions of HFS have been introduced in the literature, such as dual hesitant fuzzy sets (DHFSs) [24], generalized hesitant fuzzy sets (GHFSs) [14], higher order hesitant fuzzy sets (HOHFSs) [9] and hesitant fuzzy linguistic term sets (HFLTSs) [13,15].

As pointed out frequently in the literature, ranking HFEs is significant and plays an indispensable role in hesitant fuzzy multi-attribute decision making problems. Up to now, researchers have proposed HFE measuring techniques from different perspectives, and they subsequently extended these techniques to HFSs. We may classify HFE measuring techniques into two main classes with respect to their performance: algorithmic techniques and non-algorithmic techniques. In the algorithmic techniques, the ranking order is determined by performing several steps, for instance, the technique of Chen et al. [3] and that of Liao et al. [12]. The outranking approach proposed by Wang et al. [18] based on traditional ELECTRE methods is another algorithmic technique, which is not covered by this study because of its laborious computations. In the non-algorithmic techniques, the ranking order of HFEs is achieved in only one step, for instance, the technique of Xia and Xu [19], Xu and Xia's distance measures [20], and Farhadinia's techniques [6,7].

However, as shown in the next sections, the existing HFS ranking techniques are not always reasonable and may provide insufficient information on alternatives in some cases. This motivates us to propose a new ranking technique which is not falsified by the counterexamples of the existing ones. An advantage of the proposed HFS ranking method is that the ranking vector associated with HFSs can be easily extended to that associated with DHFSs (as the mean of the ranking vectors associated with the possible membership degrees and nonmembership degrees) [22]. More interestingly, on the basis of the relationship between DHFSs and IFSs [24], the proposed ranking vector can also be associated with IFSs, which may be usefully seen as an enrichment of the subject of IFS ranking methods [10,11].

The present paper is organized as follows: background on HFSs and a brief review of some existing HFS ranking techniques, underlining their counterintuitive examples, are given in Section 2. In Section 3, inspired by lexicographical ordering, we define a novel HFS ranking method and show that it meets some interesting properties and does not produce inconsistent orderings even if the counterintuitive examples of the existing methods are taken into account. In Section 4, we illustrate the applicability of the proposed HFS lexicographic ranking method in multi-attribute decision making problems by means of a practical example. The paper concludes in Section 5.

2. Hesitant fuzzy set and existing HFS ranking techniques

This section is first devoted to describing the basic definitions and notions of the fuzzy set (FS) and its generalization referred to as the hesitant fuzzy set (HFS) [16]. An ordinary fuzzy set (FS) A in X is defined [23] as A = {⟨x, A(x)⟩ : x ∈ X}, where A : X → [0, 1] and the real value A(x) represents the degree of membership of x in A.

Definition 2.1 ([19]). Let X be the universe of discourse. A hesitant fuzzy set (HFS) on X is symbolized by

H = {x, h(x) : x ∈ X }, where h(x), referred to as the hesitant fuzzy element (HFE), is a set of some values in [0, 1] denoting the possible membership degree of the element x ∈ X to the set H. In this regard, the HFS H can be denoted by

$$H = \Big\{\Big\langle x, \bigcup_{\gamma \in h(x)} \{\gamma\} \Big\rangle : x \in X \Big\}.$$

Example 2.1. Let X = {x1, x2, x3} be the discourse set, and let h(x1) = {0.2, 0.4, 0.5}, h(x2) = {0.3, 0.4} and h(x3) = {0.3, 0.2, 0.5, 0.6} be the HFEs of xi (i = 1, 2, 3) to a set H, respectively. Then H can be considered as a HFS, i.e.,

H = {⟨x1, {0.2, 0.4, 0.5}⟩, ⟨x2, {0.3, 0.4}⟩, ⟨x3, {0.3, 0.2, 0.5, 0.6}⟩}.

From a mathematical point of view, a HFS H can be seen as a FS if there is only one element in h(x); in this sense, HFSs include FSs as a special case.

For three given HFEs $h = \bigcup_{\gamma \in h}\{\gamma\}$, $h_1 = \bigcup_{\gamma_1 \in h_1}\{\gamma_1\}$ and $h_2 = \bigcup_{\gamma_2 \in h_2}\{\gamma_2\}$, the arithmetic operations are defined as follows (see [16,19]):

$$h^{\lambda} = \bigcup_{\gamma \in h} \{\gamma^{\lambda}\}; \qquad \lambda h = \bigcup_{\gamma \in h} \{1 - (1 - \gamma)^{\lambda}\};$$

$$h_1 \oplus h_2 = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2} \{\gamma_1 + \gamma_2 - \gamma_1\gamma_2\}; \qquad h_1 \otimes h_2 = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2} \{\gamma_1\gamma_2\}.$$
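Since these operations act element-wise on all combinations of membership degrees, they translate directly into code. Below is a minimal sketch, assuming HFEs are modeled as Python sets of floats; the helper names (hfe_pow, hfe_scale, hfe_add, hfe_mul) are ours, not the paper's.

```python
from itertools import product

def hfe_pow(h, lam):
    """h^lambda: raise every possible membership degree to the power lambda."""
    return {g ** lam for g in h}

def hfe_scale(lam, h):
    """lambda * h: the dual operation, 1 - (1 - gamma)^lambda."""
    return {1 - (1 - g) ** lam for g in h}

def hfe_add(h1, h2):
    """h1 (+) h2: probabilistic sum over every pair of degrees."""
    return {g1 + g2 - g1 * g2 for g1, g2 in product(h1, h2)}

def hfe_mul(h1, h2):
    """h1 (x) h2: product over every pair of degrees."""
    return {g1 * g2 for g1, g2 in product(h1, h2)}

# Example with two HFEs from Example 2.1:
h1, h2 = {0.3, 0.4}, {0.2, 0.4, 0.5}
print(sorted(hfe_add(h1, h2)))  # all values gamma1 + gamma2 - gamma1*gamma2
```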

In what follows, we will describe briefly the existing techniques which are available for ranking HFEs. Farhadinia [7] showed that a ranking function of HFS is directly defined by the use of ranking function of its HFEs. Therefore, we mainly discuss here the ranking functions for HFEs, and drop the discussion on the corresponding ranking functions for HFSs. Hereafter, for notational convenience, h stands for the HFE h(x) for x ∈ X, and we assume that |h| = n, that is,

$$h = \bigcup_{\gamma \in h} \{\gamma\} = \{\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(n)}\},$$

where all the elements in h are arranged in increasing order.

Definition 2.2 ([8]). Let $h_1 = \bigcup_{\gamma \in h_1}\{\gamma\} = \{\gamma_1^{(1)}, \gamma_1^{(2)}, \ldots, \gamma_1^{(n)}\}$ and $h_2 = \bigcup_{\gamma \in h_2}\{\gamma\} = \{\gamma_2^{(1)}, \gamma_2^{(2)}, \ldots, \gamma_2^{(n)}\}$ be two HFEs. The component-wise ordering of HFEs is defined as

$$h_1 \preceq h_2 \quad \text{if and only if} \quad \gamma_1^{(i)} \leq \gamma_2^{(i)}, \ 1 \leq i \leq n. \qquad (1)$$


Notice that the number of values in different HFEs may be different. As assumed in many contributions to the theory of HFEs (see e.g. [6,8,19,20]), we extend the HFE with fewer elements by repeating its maximum element until it has the same length as the other HFE.

2.1. Non-algorithmic techniques

Xia and Xu's ranking function [19]. They considered the arithmetic-mean SAM(h), given later by (12), as the score function of the HFE h, denoted here by Sxix, to give a comparison law between two HFEs h1 and h2 as follows: if Sxix(h1) > Sxix(h2), then h1 > h2; if Sxix(h1) = Sxix(h2), then h1 = h2.

Farhadinia's novel ranking function [6]. Farhadinia proposed the novel score function SNia as follows:

$$S_{Nia}(h) = \frac{\sum_{i=1}^{n} \delta(i)\,\gamma^{(i)}}{\sum_{i=1}^{n} \delta(i)}, \qquad (2)$$

such that $\{\delta(i)\}_{1}^{n}$ is a positive-valued monotonically increasing sequence of the index i. Here, we set $\{\delta(i)\}_{1}^{n} = \{i\}_{1}^{n}$. For two HFEs h1 and h2, if SNia(h1) > SNia(h2), then h1 > h2; if SNia(h1) = SNia(h2), then h1 = h2.

Xu and Xia's ranking functions [20]. They introduced a class of ranking functions for HFEs based on the distance between a HFE and the full (or positive ideal) HFE $1_h = \{1\}$ by

$$S_{xux}^{-d}(h) = d(h, 1_h), \qquad (3)$$

where d can be chosen as an arbitrary distance measure for HFEs. Among these ranking functions, two representative ones are defined as:

• The hesitant normalized Hamming distance score function

$$S_{xux}^{-d_{hnh}}(h) = d_{hnh}(h, 1_h) = \frac{1}{n}\sum_{i=1}^{n} |\gamma^{(i)} - 1|. \qquad (4)$$

Notice that here, for two HFEs h1 and h2, if $S_{xux}^{-d_{hnh}}(h_1) > S_{xux}^{-d_{hnh}}(h_2)$, then h1 < h2; if $S_{xux}^{-d_{hnh}}(h_1) = S_{xux}^{-d_{hnh}}(h_2)$, then h1 = h2.

• The hesitant normalized Euclidean distance score function

$$S_{xux}^{-d_{hne}}(h) = d_{hne}(h, 1_h) = \left(\frac{1}{n}\sum_{i=1}^{n} (\gamma^{(i)} - 1)^2\right)^{\frac{1}{2}}. \qquad (5)$$

Notice that here, for two HFEs h1 and h2, if $S_{xux}^{-d_{hne}}(h_1) > S_{xux}^{-d_{hne}}(h_2)$, then h1 < h2; if $S_{xux}^{-d_{hne}}(h_1) = S_{xux}^{-d_{hne}}(h_2)$, then h1 = h2.

In addition to the above distance measures for HFEs, some other distance measures proposed by Xu and Xia in [21] can be used for ranking HFEs as follows:

$$S_{xux}^{-d_3}(h) = d_3(h, 1_h) = \max_{1 \leq i \leq n}\{|\gamma^{(i)} - 1|\}; \qquad (6)$$

$$S_{xux}^{-d_4}(h) = d_4(h, 1_h) = \max_{1 \leq i \leq n}\{|\gamma^{(i)} - 1|^2\}; \qquad (7)$$

$$S_{xux}^{-d_5}(h) = d_5(h, 1_h) = \frac{1}{2}\left(\frac{1}{n}\sum_{i=1}^{n}|\gamma^{(i)} - 1| + \max_{1 \leq i \leq n}\{|\gamma^{(i)} - 1|\}\right); \qquad (8)$$

$$S_{xux}^{-d_6}(h) = d_6(h, 1_h) = \frac{1}{2}\left(\frac{1}{n}\sum_{i=1}^{n}|\gamma^{(i)} - 1|^2 + \max_{1 \leq i \leq n}\{|\gamma^{(i)} - 1|^2\}\right). \qquad (9)$$

Notice that here, for two HFEs h1 and h2, if $S_{xux}^{-d_k}(h_1) > S_{xux}^{-d_k}(h_2)$, then h1 < h2; if $S_{xux}^{-d_k}(h_1) = S_{xux}^{-d_k}(h_2)$, then h1 = h2, for k = 3, 4, 5, 6.

Remark 2.1. In order to get the ranking order of HFEs, one may replace the distance measure by a similarity measure, a correlation measure or a relative closeness measure, which apparently produces other kinds of ranking methods. However, such methods are neither novel compared with the above technique nor able to reduce the complexity of computations.

Farhadinia's ranking functions [7]. Farhadinia defined a number of score functions for ranking HFEs in the following forms:

• The smallest score function:

$$S_{\nabla}(h) = \begin{cases} 1, & \text{if } h \text{ is the full HFE, i.e., } h = \{1\}, \\ 0, & \text{otherwise.} \end{cases} \qquad (10)$$


• The greatest score function:

$$S_{\triangle}(h) = \begin{cases} 0, & \text{if } h \text{ is the empty HFE, i.e., } h = \{0\}, \\ 1, & \text{otherwise.} \end{cases} \qquad (11)$$

• The arithmetic-mean score function:

$$S_{AM}(h) = \frac{1}{n}\sum_{i=1}^{n} \gamma^{(i)}. \qquad (12)$$

• The geometric-mean score function:

$$S_{GM}(h) = \left(\prod_{i=1}^{n} \gamma^{(i)}\right)^{\frac{1}{n}}. \qquad (13)$$

• The minimum score function:

$$S_{Min}(h) = \min\{\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(n)}\}. \qquad (14)$$

• The maximum score function:

$$S_{Max}(h) = \max\{\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(n)}\}. \qquad (15)$$

• The product score function:

$$S_{P}(h) = \prod_{i=1}^{n} \gamma^{(i)}. \qquad (16)$$

• The bounded sum score function:

$$S_{BS}(h) = \min\left\{1, \sum_{i=1}^{n} \gamma^{(i)}\right\}. \qquad (17)$$

• The k-order statistic score function:

$$S_{\sigma(k)}(h) = \gamma^{\sigma(k)}, \qquad (18)$$

where γ^{σ(k)} is the kth smallest value in h. Obviously, S_{σ(1)}(h) = S_{Min}(h) and S_{σ(n)}(h) = S_{Max}(h).

• The fractional score function:

$$S_{F}(h) = \frac{\prod_{i=1}^{n} \gamma^{(i)}}{\prod_{i=1}^{n} \gamma^{(i)} + \prod_{i=1}^{n} (1 - \gamma^{(i)})}, \qquad (19)$$

with the convention 0/0 = 0.
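To make these score functions concrete, the following sketch implements several of them under the assumption that an HFE is given as a non-empty list of floats; the helper names are ours, and math.prod requires Python 3.8+.

```python
import math

def s_am(h):   # Eq. (12), arithmetic mean
    return sum(h) / len(h)

def s_gm(h):   # Eq. (13), geometric mean
    return math.prod(h) ** (1 / len(h))

def s_min(h):  # Eq. (14)
    return min(h)

def s_max(h):  # Eq. (15)
    return max(h)

def s_p(h):    # Eq. (16), product score
    return math.prod(h)

def s_bs(h):   # Eq. (17), bounded sum
    return min(1.0, sum(h))

def s_f(h):    # Eq. (19), fractional score, with the convention 0/0 := 0
    p, q = math.prod(h), math.prod(1 - g for g in h)
    return 0.0 if p + q == 0 else p / (p + q)

# Reproducing, e.g., Example 12 of Table 1: two distinct HFEs with the same geometric mean.
print(round(s_gm([0.2, 0.4]), 4), round(s_gm([0.1, 0.8]), 4))  # 0.2828 0.2828
```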

By the use of any of Farhadinia's score functions S∗, we can rank HFEs h1 and h2 as follows: if S∗(h1) > S∗(h2), then h1 > h2; if S∗(h1) = S∗(h2), then h1 = h2.

2.2. Algorithmic techniques

In the following techniques, the ranking results are achieved by using two concepts known as the score function and the deviation function, where the relationship between the score function and the deviation function is much like that of the mean and the variance in statistics.

Chen, Xu and Xia's measuring approach [3]. They considered the arithmetic-mean SAM(h) given by (12) as the score function of the HFE h, denoted here by Scxx, and defined the deviation function σcxx of HFE h by

$$\sigma_{cxx}(h) = \left(\frac{1}{n}\sum_{i=1}^{n}\big(\gamma^{(i)} - S_{cxx}(h)\big)^2\right)^{\frac{1}{2}}. \qquad (20)$$

Based on Scxx and σcxx, they gave a comparison law between two HFEs h1 and h2 as follows:

• if Scxx(h1) > Scxx(h2), then h1 > h2;
• if Scxx(h1) = Scxx(h2), then
  1. if σcxx(h1) = σcxx(h2), then h1 = h2;
  2. if σcxx(h1) > σcxx(h2), then h1 < h2;
  3. if σcxx(h1) < σcxx(h2), then h1 > h2.


Liao, Xu and Xia's measuring approach [12]. They considered the arithmetic-mean SAM(h) given by (12) as the score function of the HFE h, denoted here by Slxx, and defined the deviation function νlxx of HFE h by

$$\nu_{lxx}(h) = \left(\frac{1}{n}\sum_{i \neq j = 1}^{n}\big(\gamma^{(i)} - \gamma^{(j)}\big)^2\right)^{\frac{1}{2}}. \qquad (21)$$

Based on Slxx and νlxx, they gave a comparison law between two HFEs h1 and h2 as follows:

• if Slxx(h1) > Slxx(h2), then h1 > h2;
• if Slxx(h1) = Slxx(h2), then
  1. if νlxx(h1) = νlxx(h2), then h1 = h2;
  2. if νlxx(h1) > νlxx(h2), then h1 < h2;
  3. if νlxx(h1) < νlxx(h2), then h1 > h2.

Notice that there exists an error in the definition of the deviation function νlxx: the formula

$$\nu_{lxx}(h) = \left(\frac{1}{n}\sum_{i \neq j = 1}^{n}\big(\gamma^{(i)} - \gamma^{(j)}\big)^2\right)^{\frac{1}{2}}$$

should be corrected to

$$\nu_{lxx}(h) = \left(\frac{1}{(n)_2}\sum_{i \neq j = 1}^{n}\big(\gamma^{(i)} - \gamma^{(j)}\big)^2\right)^{\frac{1}{2}}, \qquad (22)$$

such that $(n)_2 = \frac{n!}{(n-2)!\,2!}$.

Now, with the help of some counterintuitive examples, we can easily show that the above-mentioned ranking functions do not fit so well. Firstly, let us deal with the counterintuitive examples of non-algorithmic techniques, which are described in Table 1. Keep in mind that each ranking function may have different counterintuitive examples, but it suffices to give one example for each formula in Table 1.

In order to deal with counterintuitive examples of algorithmic techniques, we consider a situation in which a group of five decision makers discuss the membership degree of an element x to a set. They are hesitant about the possible values 0.1, 0.3, 0.3, 0.3 and 0.5, and they cannot reach consistency with each other. For such a circumstance, the hesitance experienced by the five decision makers can be modeled by the HFE h1 = {0.1, 0.3, 0.3, 0.3, 0.5}. Following set theory, in which multiple occurrences of an element count only once, the HFE h1 may equally be represented as h2 = {0.1, 0.3, 0.3, 0.5} or h3 = {0.1, 0.3, 0.5}. In this situation, we expect all the identical HFEs h1, h2 and h3 to receive the same ranking value. Although both algorithmic approaches proposed by Chen et al. [3] and Liao et al. [12] avoid the counterintuitive examples of the above-mentioned non-algorithmic techniques, applying them to the latter situation gives rise to unreasonable results.

By applying Chen et al.'s measuring approach and using Eqs. (12) and (20), we get

$$S_{cxx}(h_1) = S_{cxx}(h_2) = S_{cxx}(h_3) = 0.3,$$

$$\sigma_{cxx}(h_1) = \left(\tfrac{1}{5}(0.08)\right)^{\frac{1}{2}}, \quad \sigma_{cxx}(h_2) = \left(\tfrac{1}{4}(0.08)\right)^{\frac{1}{2}}, \quad \sigma_{cxx}(h_3) = \left(\tfrac{1}{3}(0.08)\right)^{\frac{1}{2}},$$

which gives rise to

$$h_1 > h_2 > h_3.$$

By applying Liao et al.'s measuring approach and using Eqs. (12) and (22), we get

$$S_{lxx}(h_1) = S_{lxx}(h_2) = S_{lxx}(h_3) = 0.3,$$

$$\nu_{lxx}(h_1) = \left(\tfrac{1}{10}(0.40)\right)^{\frac{1}{2}}, \quad \nu_{lxx}(h_2) = \left(\tfrac{1}{6}(0.32)\right)^{\frac{1}{2}}, \quad \nu_{lxx}(h_3) = \left(\tfrac{1}{3}(0.24)\right)^{\frac{1}{2}}.$$

This results in

$$h_1 > h_2 > h_3.$$

However, we are going to seek an efficient ranking method that overcomes the drawbacks of the existing methods and also reduces the complexity of computations. This is the objective of the next section.
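Before moving on, a short sketch (helper names ours) makes the multiplicity-sensitivity just observed explicit. It sums over unordered pairs and uses the corrected denominator (n)₂ = C(n, 2), which reproduces the worked values above:

```python
from itertools import combinations
from math import comb, sqrt

def sigma_cxx(h):
    """Chen-Xu-Xia deviation function, Eq. (20)."""
    mean = sum(h) / len(h)
    return sqrt(sum((g - mean) ** 2 for g in h) / len(h))

def nu_lxx(h):
    """Liao-Xu-Xia deviation with the corrected denominator (n)_2, Eq. (22)."""
    pair_sum = sum((gi - gj) ** 2 for gi, gj in combinations(h, 2))
    return sqrt(pair_sum / comb(len(h), 2))

# The same hesitancy written with different multiplicities:
for h in ([0.1, 0.3, 0.3, 0.3, 0.5], [0.1, 0.3, 0.3, 0.5], [0.1, 0.3, 0.5]):
    print(h, round(sigma_cxx(h), 4), round(nu_lxx(h), 4))
# All three share the score 0.3, yet both deviations change with the
# multiplicity -- exactly the unreasonable behaviour discussed above.
```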


Table 1. Non-algorithmic techniques and their counterintuitive examples.

Example 1: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (12): Sxix(h1) = 0.4, Sxix(h2) = 0.4 (unreasonable)
Example 2: h1 = {0.1, 0.5}, h2 = {0.15, 0.475}. Eq. (2): SNia(h1) = 0.3666, SNia(h2) = 0.3666 (unreasonable)
Example 3: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (4): S_xux^{-d_hnh}(h1) = 0.6, S_xux^{-d_hnh}(h2) = 0.6 (unreasonable)
Example 4: h1 = {0.1, 0.9}, h2 = {0.3217, 0.4}. Eq. (5): S_xux^{-d_hne}(h1) = 0.6403, S_xux^{-d_hne}(h2) = 0.6403 (unreasonable)
Example 5: h1 = {0.1, 0.2}, h2 = {0.1, 0.3}. Eq. (6): S_xux^{-d_3}(h1) = 0.9, S_xux^{-d_3}(h2) = 0.9 (unreasonable)
Example 6: h1 = {0.1, 0.2}, h2 = {0.1, 0.3}. Eq. (7): S_xux^{-d_4}(h1) = 0.81, S_xux^{-d_4}(h2) = 0.81 (unreasonable)
Example 7: h1 = {0.1, 0.9}, h2 = {0.2, 0.6}. Eq. (8): S_xux^{-d_5}(h1) = 0.7, S_xux^{-d_5}(h2) = 0.7 (unreasonable)
Example 8: h1 = {0.2, 0.6}, h2 = {0.1, 0.6286}. Eq. (9): S_xux^{-d_6}(h1) = 0.6362, S_xux^{-d_6}(h2) = 0.6362 (unreasonable)
Example 9: h1 = {0.1, 0.2}, h2 = {0.2, 0.3}. Eq. (10): S∇(h1) = 0.0, S∇(h2) = 0.0 (unreasonable)
Example 10: h1 = {0.1, 0.2}, h2 = {0.2, 0.3}. Eq. (11): S△(h1) = 1.0, S△(h2) = 1.0 (unreasonable)
Example 11: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (12): SAM(h1) = 0.4, SAM(h2) = 0.4 (unreasonable)
Example 12: h1 = {0.2, 0.4}, h2 = {0.1, 0.8}. Eq. (13): SGM(h1) = 0.2828, SGM(h2) = 0.2828 (unreasonable)
Example 13: h1 = {0.1, 0.2}, h2 = {0.1, 0.3}. Eq. (14): SMin(h1) = 0.1, SMin(h2) = 0.1 (unreasonable)
Example 14: h1 = {0.1, 0.3}, h2 = {0.2, 0.3}. Eq. (15): SMax(h1) = 0.3, SMax(h2) = 0.3 (unreasonable)
Example 15: h1 = {0.2, 0.4}, h2 = {0.1, 0.8}. Eq. (16): SP(h1) = 0.08, SP(h2) = 0.08 (unreasonable)
Example 16: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (17): SBS(h1) = 0.8, SBS(h2) = 0.8 (unreasonable)
Example 17: h1 = {0.2, 0.6}, h2 = {0.3, 0.6}. Eq. (18): Sσ(2)(h1) = 0.6, Sσ(2)(h2) = 0.6 (unreasonable)
Example 18: h1 = {0.1, 0.2}, h2 = {0.04, 0.4}. Eq. (19): SF(h1) = 0.0270, SF(h2) = 0.0270 (unreasonable)

3. Lexicographic ranking of HFSs

Although the problem of ordering HFEs has been discussed in detail and many approaches have been proposed [3–12,19,20], they contain some shortcomings, and in some situations they fail to exhibit the consistency of human intuition. Moreover, some of the existing approaches are not simple in calculation. In order to overcome the above-mentioned problems, and especially to reduce the complexity of the computational procedure, we present a ranking method for HFEs inspired by lexicographical ordering. Below, we define a new concept of deviation function for HFEs which is simple in calculation and does not possess the drawbacks of the already mentioned deviation functions.

Definition 3.1. Let h be a HFE, denoted by h = {γ^(1), γ^(2), …, γ^(n)}. Then, the successive deviation function of h is defined as follows:

$$\nu_{\varphi}(h) = \sum_{i=1}^{n-1} \varphi\big(\gamma^{(i+1)} - \gamma^{(i)}\big), \qquad (23)$$

where φ : [0, 1] → [0, 1] is an increasing real function with φ(0) = 0.

By Definition 3.1, different formulas can be developed to calculate the successive deviation function of HFE h using different increasing real functions φ : [0, 1] → [0, 1] with φ(0) = 0, for instance, φ(t) = t^p for p > 1. Notice that if we choose φ(t) = t, then we get

$$\nu_{\varphi}(h) = \sum_{i=1}^{n-1}\big(\gamma^{(i+1)} - \gamma^{(i)}\big) = \max_{1 \leq i \leq n}\{\gamma^{(i)}\} - \min_{1 \leq i \leq n}\{\gamma^{(i)}\}. \qquad (24)$$

This successive deviation function cannot efficiently deal with HFE distinguishing problems, especially when different HFEs have the same mean of components and the maximum and minimum of their components are exactly alike. For example, applying the successive deviation function νφ given by (24) to the different HFEs

h1 = {0.1, 0.5}, h2 = {0.1, 0.2, 0.4, 0.5}, h3 = {0.1, 0.2, 0.3, 0.4, 0.5}, gives rise to the same result, which is unreasonable. In this regard, we take the increasing real function φ(t) = t^p into consideration with p > 1. It is noteworthy that the successive deviation function νφ given by (23) is invariant with respect to multiple occurrences of any element of the HFE h. That is, if we consider again

h1 = {0.1, 0.3, 0.3, 0.3, 0.5}, h2 = {0.1, 0.3, 0.3, 0.5}, h3 = {0.1, 0.3, 0.5}, then, it results that

$$\nu_{\varphi}(h_1) = \nu_{\varphi}(h_2) = \nu_{\varphi}(h_3) = \varphi(0.3 - 0.1) + \varphi(0.5 - 0.3) = 2\varphi(0.2).$$

Before proceeding to present the main results, the next definition is required.

Definition 3.2 ([2]). For X, Y ∈ R^n, the lexicographical ordering on the Euclidean space R^n, denoted by <_lex, is defined by requiring

$$X = (x_1, x_2, \ldots, x_n) <_{lex} Y = (y_1, y_2, \ldots, y_n)$$

if and only if there exists an index i such that x_j = y_j holds for j < i and x_i < y_i. Furthermore, X ≤_lex Y means that X <_lex Y or X = Y.

Definition 3.3. Let h be a HFE, denoted by h = {γ^(1), γ^(2), …, γ^(n)}. We denote the ranking vector associated with h by R(h), where

$$R(h) = \big(S_{AM}(h), \nu_{\varphi}(h)\big). \qquad (25)$$

Here, νφ is the successive deviation function of h given by (23), and SAM(h) is the arithmetic mean given by (12). With the help of R(·), the HFE lexicographic order is established as follows: for two HFEs h1 = {γ1^(1), γ1^(2), …, γ1^(n)} and h2 = {γ2^(1), γ2^(2), …, γ2^(n)}, we deduce that

(i) h1 < h2 if and only if R(h1) <_lex R(h2);
(ii) h1 ≤ h2 if and only if R(h1) ≤_lex R(h2);
(iii) if h1 = h2, then R(h1) =_lex R(h2).

Obviously, the relations h1 > h2 and h1 ≥ h2 can be viewed as h2 < h1 and h2 ≤ h1, respectively. It can be easily verified that the HFE lexicographic order has the following reasonable properties: for three HFEs h1, h2 and h3, it holds that

(i) h1 ≤ h1;
(ii) if h1 < h2 and h2 < h3, then h1 < h3;
(iii) if h1 ≤ h2 and h2 ≤ h1, then h1 = h2;
(iv) if h1 < h2, then h1 ⊕ h3 < h2 ⊕ h3.

Now we are in a position to show that the HFE lexicographical ranking method is an efficient ranking method and overcomes the drawbacks of the existing ones. This assertion is supported by Table 2 whose first column is the same as that of Table 1. Hereafter and in the next section, all computational results are obtained by applying the HFE lexicographical ranking method with R(h) = (SAM (h), ν2 (h)) where

$$\nu_2(h) := \nu_{\varphi(t)=t^2}(h) = \sum_{i=1}^{n-1}\big(\gamma^{(i+1)} - \gamma^{(i)}\big)^2. \qquad (26)$$
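A convenient by-product of Definition 3.3 is that the comparison can be delegated to native tuple ordering, since Python compares tuples lexicographically. A minimal sketch (function name ours), assuming an HFE is given as a list of floats:

```python
def ranking_vector(h, phi=lambda t: t ** 2):
    """R(h) = (S_AM(h), nu_phi(h)) from Eq. (25); the default phi(t) = t^2 gives nu_2 of Eq. (26)."""
    h = sorted(h)
    mean = sum(h) / len(h)
    deviation = sum(phi(b - a) for a, b in zip(h, h[1:]))
    return (mean, deviation)

# Python tuples compare lexicographically, exactly as <_lex requires:
h1, h2 = [0.2, 0.6], [0.3, 0.5]
print(ranking_vector(h1))                       # approximately (0.4, 0.16)
print(ranking_vector(h2))                       # approximately (0.4, 0.04)
print(ranking_vector(h1) > ranking_vector(h2))  # True, i.e. h1 > h2

# Multiple occurrences of an element leave the vector unchanged (up to float rounding):
print(ranking_vector([0.1, 0.3, 0.3, 0.3, 0.5]),
      ranking_vector([0.1, 0.3, 0.5]))          # both approximately (0.3, 0.08)
```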

Once again consider the situation discussed in the previous section, where we expected that multiple occurrences of any element of a HFE should not affect its ranking result. There, we dealt with

h1 = {0.1, 0.3, 0.3, 0.3, 0.5}, h2 = {0.1, 0.3, 0.3, 0.5}, h3 = {0.1, 0.3, 0.5}.

Table 2. Counterintuitive examples of non-algorithmic methods and the HFE lexicographical ranking results.

Example 1: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (25): R(h1) = (0.4, 0.16) >lex R(h2) = (0.4, 0.04), so h1 > h2
Example 2: h1 = {0.1, 0.5}, h2 = {0.15, 0.475}. Eq. (25): R(h1) = (0.3, 0.16) <lex R(h2) = (0.3125, 0.1056), so h1 < h2
Example 3: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (25): R(h1) = (0.4, 0.16) >lex R(h2) = (0.4, 0.04), so h1 > h2
Example 4: h1 = {0.1, 0.9}, h2 = {0.3217, 0.4}. Eq. (25): R(h1) = (0.5, 0.64) >lex R(h2) = (0.3608, 0.0061), so h1 > h2
Example 5: h1 = {0.1, 0.2}, h2 = {0.1, 0.3}. Eq. (25): R(h1) = (0.15, 0.01) <lex R(h2) = (0.2, 0.04), so h1 < h2
Example 6: h1 = {0.1, 0.2}, h2 = {0.1, 0.3}. Eq. (25): R(h1) = (0.15, 0.01) <lex R(h2) = (0.2, 0.04), so h1 < h2
Example 7: h1 = {0.1, 0.9}, h2 = {0.2, 0.6}. Eq. (25): R(h1) = (0.5, 0.64) >lex R(h2) = (0.4, 0.16), so h1 > h2
Example 8: h1 = {0.2, 0.6}, h2 = {0.1, 0.6286}. Eq. (25): R(h1) = (0.4, 0.16) >lex R(h2) = (0.3643, 0.2794), so h1 > h2
Example 9: h1 = {0.1, 0.2}, h2 = {0.2, 0.3}. Eq. (25): R(h1) = (0.15, 0.01) <lex R(h2) = (0.25, 0.01), so h1 < h2
Example 10: h1 = {0.1, 0.2}, h2 = {0.2, 0.3}. Eq. (25): R(h1) = (0.15, 0.01) <lex R(h2) = (0.25, 0.01), so h1 < h2
Example 11: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (25): R(h1) = (0.4, 0.16) >lex R(h2) = (0.4, 0.04), so h1 > h2
Example 12: h1 = {0.2, 0.4}, h2 = {0.1, 0.8}. Eq. (25): R(h1) = (0.3, 0.04) <lex R(h2) = (0.45, 0.49), so h1 < h2
Example 13: h1 = {0.1, 0.2}, h2 = {0.1, 0.3}. Eq. (25): R(h1) = (0.15, 0.01) <lex R(h2) = (0.2, 0.04), so h1 < h2
Example 14: h1 = {0.1, 0.3}, h2 = {0.2, 0.3}. Eq. (25): R(h1) = (0.2, 0.04) <lex R(h2) = (0.25, 0.01), so h1 < h2
Example 15: h1 = {0.2, 0.4}, h2 = {0.1, 0.8}. Eq. (25): R(h1) = (0.3, 0.04) <lex R(h2) = (0.45, 0.49), so h1 < h2
Example 16: h1 = {0.2, 0.6}, h2 = {0.3, 0.5}. Eq. (25): R(h1) = (0.4, 0.16) >lex R(h2) = (0.4, 0.04), so h1 > h2
Example 17: h1 = {0.2, 0.6}, h2 = {0.3, 0.6}. Eq. (25): R(h1) = (0.4, 0.16) <lex R(h2) = (0.45, 0.09), so h1 < h2
Example 18: h1 = {0.1, 0.2}, h2 = {0.04, 0.4}. Eq. (25): R(h1) = (0.15, 0.01) <lex R(h2) = (0.22, 0.1296), so h1 < h2
By applying the HFE lexicographical ranking method and Eq. (25) where

$$\nu_2(h) = \sum_{i=1}^{n-1}\big(\gamma^{(i+1)} - \gamma^{(i)}\big)^2,$$

we find that

R(h1 ) = R(h2 ) = R(h3 ) = (0.3, 0.08), which implies that

h1 = h2 = h3. This is exactly what set theory leads us to expect, and it shows the superiority of the HFE lexicographical ranking method over both methods proposed by Chen et al. [3] and Liao et al. [12]. Moreover, if the above HFE lexicographical ranking method is applied to the already-mentioned distinct HFEs

h1 = {0.1, 0.5}, h2 = {0.1, 0.2, 0.4, 0.5}, h3 = {0.1, 0.2, 0.3, 0.4, 0.5}, we then find that

R(h1 ) = (0.3, 0.16) >lex R(h2 ) = (0.3, 0.06) >lex R(h3 ) = (0.3, 0.04), which implies that

h1 > h2 > h3 . It can be easily seen that the latter result yields a consistent and accurate outcome, if we extend h1 and h2 optimistically by repeating their maximum element until they have the same length with h3 . In this regard, one gets

h1 = {0.1, 0.5, 0.5, 0.5, 0.5},


h2 = {0.1, 0.2, 0.4, 0.5, 0.5}, h3 = {0.1, 0.2, 0.3, 0.4, 0.5}, and by Definition 2.2, it results in

h1 ⪰ h2 ⪰ h3.

Theorem 3.1. Let $h_1 = \bigcup_{\gamma \in h_1}\{\gamma\} = \{\gamma_1^{(1)}, \gamma_1^{(2)}, \ldots, \gamma_1^{(n)}\}$ and $h_2 = \bigcup_{\gamma \in h_2}\{\gamma\} = \{\gamma_2^{(1)}, \gamma_2^{(2)}, \ldots, \gamma_2^{(n)}\}$ be two component-wise comparable HFEs (see Definition 2.2) such that $h_1 \preceq h_2$, that is, $\gamma_1^{(i)} \leq \gamma_2^{(i)}$, $1 \leq i \leq n$. Then, we get

$$R(h_1) = \big(S_{AM}(h_1), \nu_{\varphi}(h_1)\big) \leq_{lex} R(h_2) = \big(S_{AM}(h_2), \nu_{\varphi}(h_2)\big),$$

where $S_{AM}(\cdot)$ is given by (12) and $\nu_{\varphi}(\cdot)$ is introduced in (23) as $\nu_{\varphi}(h) = \sum_{i=1}^{n-1}\varphi(\gamma^{(i+1)} - \gamma^{(i)})$, with $\varphi : [0,1] \to [0,1]$ an increasing real function satisfying $\varphi(0) = 0$.

Proof. From $h_1 \preceq h_2$, i.e., $\gamma_1^{(i)} \leq \gamma_2^{(i)}$, $1 \leq i \leq n$, it is easily seen that

$$S_{AM}(h_1) = \frac{1}{n}\sum_{i=1}^{n}\gamma_1^{(i)} \leq \frac{1}{n}\sum_{i=1}^{n}\gamma_2^{(i)} = S_{AM}(h_2).$$

If $S_{AM}(h_1) < S_{AM}(h_2)$, then $R(h_1) = (S_{AM}(h_1), \nu_{\varphi}(h_1)) <_{lex} R(h_2) = (S_{AM}(h_2), \nu_{\varphi}(h_2))$ and the assertion holds. Thus, suppose that $S_{AM}(h_1) = S_{AM}(h_2)$. By the hypothesis,

$$\gamma_1^{(j)} \leq \gamma_2^{(j)}, \quad 1 \leq j \leq n, \qquad (27)$$

and as a result of $S_{AM}(h_1) = S_{AM}(h_2)$, we have

$$\sum_{j=1}^{n}\gamma_1^{(j)} = \sum_{j=1}^{n}\gamma_2^{(j)}. \qquad (28)$$

To show $\nu_{\varphi}(h_1) \leq \nu_{\varphi}(h_2)$, we first verify that

$$\gamma_1^{(i+1)} - \gamma_1^{(i)} \leq \gamma_2^{(i+1)} - \gamma_2^{(i)}, \quad 1 \leq i \leq n-1. \qquad (29)$$

For this, we proceed for $1 \leq i \leq n-1$ as follows:

$$\begin{aligned}
\gamma_1^{(i+1)} - \gamma_1^{(i)} &= \gamma_1^{(i+1)} - \Big(\sum_{j=1}^{n}\gamma_2^{(j)} - \sum_{j=1,\, j\neq i}^{n}\gamma_1^{(j)}\Big) && \text{(by Eq. (28))} \\
&= 2\gamma_1^{(i+1)} - \sum_{j=1}^{n}\gamma_2^{(j)} + \sum_{j=1,\, j\neq i,i+1}^{n}\gamma_1^{(j)} \\
&\leq 2\gamma_2^{(i+1)} - \sum_{j=1}^{n}\gamma_2^{(j)} + \sum_{j=1,\, j\neq i,i+1}^{n}\gamma_1^{(j)} && \text{(by Eq. (27))} \\
&= \gamma_2^{(i+1)} - \gamma_2^{(i)} - \sum_{j=1,\, j\neq i,i+1}^{n}\gamma_2^{(j)} + \sum_{j=1,\, j\neq i,i+1}^{n}\gamma_1^{(j)} \\
&= \gamma_2^{(i+1)} - \gamma_2^{(i)} - \sum_{j=1,\, j\neq i,i+1}^{n}\big(\gamma_2^{(j)} - \gamma_1^{(j)}\big) \\
&\leq \gamma_2^{(i+1)} - \gamma_2^{(i)}. && \text{(by Eq. (27))}
\end{aligned}$$

Putting the above result together with the fact that $\varphi : [0,1] \to [0,1]$ is an increasing real function, one can conclude that

$$\nu_{\varphi}(h_1) = \sum_{i=1}^{n-1}\varphi\big(\gamma_1^{(i+1)} - \gamma_1^{(i)}\big) \leq \sum_{i=1}^{n-1}\varphi\big(\gamma_2^{(i+1)} - \gamma_2^{(i)}\big) = \nu_{\varphi}(h_2).$$

This gives rise to $R(h_1) = (S_{AM}(h_1), \nu_{\varphi}(h_1)) \leq_{lex} R(h_2) = (S_{AM}(h_2), \nu_{\varphi}(h_2))$, which means that h1 is lexicographically less than or equal to h2. □


The above theorem indicates that the HFE lexicographical ranking method satisfies the monotone non-decreasing property. On the other hand, if we call 0_h = {0} and 1_h = {1} the empty HFE and the full HFE, respectively, then the following theorem shows that the HFE lexicographical ranking method satisfies the boundary conditions property.

Theorem 3.2. Let 0_h = {0} and 1_h = {1} denote the empty HFE and the full HFE, respectively. For any HFE $h = \bigcup_{\gamma \in h}\{\gamma\} = \{\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(n)}\}$, it yields that

$$R(0_h) \leq_{lex} R(h) \leq_{lex} R(1_h).$$

Proof. The proof is immediate from the fact that $R(0_h) = (S_{AM}(0_h), \nu_{\varphi}(0_h)) = (0, 0)$ and $R(1_h) = (S_{AM}(1_h), \nu_{\varphi}(1_h)) = (1, 0)$. □

Usually, in multiple attribute decision making problems we need to compare HFSs. Hence, it is necessary to extend the ranking vector associated with the HFE h to one for the HFS H = {⟨x, h(x)⟩ : x ∈ X}. This provides us with the HFS lexicographical ranking method.

Definition 3.4. Let H = {⟨x_i, h(x_i)⟩ : x_i ∈ X} be a HFS and assume that w_i (i = 1, …, n) is the weight of the element x_i (i = 1, …, n), with w_i ∈ [0, 1] and $\sum_{i=1}^{n} w_i = 1$. Then, we can further extend the ranking vector R(h) associated with the HFE h to that associated with the HFS H as follows:

$$R(H) = \sum_{i=1}^{n} w_i\,\big(S_{AM}(h(x_i)),\ \nu_{\varphi}(h(x_i))\big) = \Big(\sum_{i=1}^{n} w_i\, S_{AM}(h(x_i)),\ \sum_{i=1}^{n} w_i\, \nu_{\varphi}(h(x_i))\Big). \qquad (30)$$

Then, for two HFSs H1 = {⟨x, h1(x)⟩ : x ∈ X} and H2 = {⟨x, h2(x)⟩ : x ∈ X}, we say that

(i) H1 < H2 if and only if R(H1) <_lex R(H2);
(ii) H1 ≤ H2 if and only if R(H1) ≤_lex R(H2);
(iii) H1 = H2 if and only if R(H1) =_lex R(H2).
Obviously, the relations H1 > H2 and H1 ≥ H2 can be viewed as H2 < H1 and H2 ≤ H1, respectively. Here we should emphasize that the process of extending the ranking vector R(h) associated with HFE h to the R(H) associated with HFS H is similar to the manner in which Xu and Xia [20] and Farhadinia [7] extended the score function for HFEs to that for HFSs. In particular, in [7,20] the score of a HFS is defined by the mean of the scores of its HFEs.

4. Practical example

Let us consider the following practical example, which was previously reviewed by Farhadinia in [7], to illustrate the HFS lexicographical ranking method for handling a hesitant fuzzy multi-attribute decision making problem. Through this example, we compare the results of the HFS lexicographical ranking method with those of the existing methods.

Example 4.1 (Adopted from [19]). The enterprise's board of directors, which includes five members, is to plan the development of large projects (strategy initiatives) for the following five years. Suppose there are four possible projects Y_i (i = 1, 2, 3, 4) to be evaluated. It is necessary to compare these projects in order to select the most important of them, as well as to order them from the point of view of their importance, taking into account four attributes suggested by the Balanced Scorecard methodology (all of them are of the maximization type): G1: financial perspective, G2: customer satisfaction, G3: internal business process perspective, and G4: learning and growth perspective. Suppose that the weight vector of the attributes is w = (0.2, 0.3, 0.15, 0.35). In the following, the optimal project is obtained by utilizing the HFS lexicographical ranking method, whose steps are:

• Step 1: The decision makers provide their evaluations of the alternative Y_i under the attribute G_j in the form of the hesitant fuzzy decision matrix H = [H1, …, H4] = [h^(i)(G_j)]_{4×4}.
• Step 2: Utilize the HFS lexicographical ranking method for HFSs with φ(t) = t^2 to obtain the priorities of the alternatives Y_i (i = 1, 2, 3, 4).

Compared to the efficient existing methods [3,12], the proposed HFS lexicographical ranking method relieves the user of the laborious duty of computing more complex deviations by introducing the successive deviation and determining the priorities of alternatives directly from the HFS ranking values. Table 3 represents the hesitant fuzzy decision matrix provided by the decision makers through Step 1.

Table 3. Hesitant fuzzy decision matrix.

     | G1                   | G2                   | G3                        | G4
Y1   | {0.2, 0.4, 0.7}      | {0.2, 0.6, 0.8}      | {0.2, 0.3, 0.6, 0.7, 0.9} | {0.3, 0.4, 0.5, 0.7, 0.8}
Y2   | {0.2, 0.4, 0.7, 0.9} | {0.1, 0.2, 0.4, 0.5} | {0.3, 0.4, 0.6, 0.9}      | {0.5, 0.6, 0.8, 0.9}
Y3   | {0.3, 0.5, 0.6, 0.7} | {0.2, 0.4, 0.5, 0.6} | {0.3, 0.5, 0.7, 0.8}      | {0.2, 0.5, 0.6, 0.7}
Y4   | {0.3, 0.5, 0.6}      | {0.2, 0.4}           | {0.5, 0.6, 0.7}           | {0.8, 0.9}


From Table 3, one can express the HFSs corresponding to the Y_i's for i = 1, 2, 3, 4 as follows:

HFS_Y1 = {⟨G_i, h_Y1(G_i)⟩, i = 1, …, 4} = {⟨G1, {0.2, 0.4, 0.7}⟩, ⟨G2, {0.2, 0.6, 0.8}⟩, ⟨G3, {0.2, 0.3, 0.6, 0.7, 0.9}⟩, ⟨G4, {0.3, 0.4, 0.5, 0.7, 0.8}⟩},
HFS_Y2 = {⟨G_i, h_Y2(G_i)⟩, i = 1, …, 4} = {⟨G1, {0.2, 0.4, 0.7, 0.9}⟩, ⟨G2, {0.1, 0.2, 0.4, 0.5}⟩, ⟨G3, {0.3, 0.4, 0.6, 0.9}⟩, ⟨G4, {0.5, 0.6, 0.8, 0.9}⟩},
HFS_Y3 = {⟨G_i, h_Y3(G_i)⟩, i = 1, …, 4} = {⟨G1, {0.3, 0.5, 0.6, 0.7}⟩, ⟨G2, {0.2, 0.4, 0.5, 0.6}⟩, ⟨G3, {0.3, 0.5, 0.7, 0.8}⟩, ⟨G4, {0.2, 0.5, 0.6, 0.7}⟩},
HFS_Y4 = {⟨G_i, h_Y4(G_i)⟩, i = 1, …, 4} = {⟨G1, {0.3, 0.5, 0.6}⟩, ⟨G2, {0.2, 0.4}⟩, ⟨G3, {0.5, 0.6, 0.7}⟩, ⟨G4, {0.8, 0.9}⟩}.

With the help of the proposed HFS lexicographical ranking method and Eq. (30), one can get the HFS ranking values. For example,

R(HFS_Y1) = 0.2 × R(h_Y1(G1)) + 0.3 × R(h_Y1(G2)) + 0.15 × R(h_Y1(G3)) + 0.35 × R(h_Y1(G4))
= 0.2 × (0.4333, 0.13) + 0.3 × (0.5333, 0.2) + 0.15 × (0.54, 0.15) + 0.35 × (0.54, 0.07)
= (0.5167, 0.133).

Moreover, one gets

R(HF SY2 ) = (0.5275, 0.072),

R(HF SY3 ) = (0.4937, 0.155),

R(HF SY4 ) = (0.5708, 0.0285).
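Definition 3.4 and this worked computation fit in a few lines of code. The following sketch (identifier names ours; the matrix rows are Table 3 read row-wise) reproduces, up to the rounding used in the paper, the ranking vectors of Y1 and Y4:

```python
def ranking_vector(h):
    """HFE-level vector (S_AM, nu_2) of Eqs. (25)-(26)."""
    h = sorted(h)
    return (sum(h) / len(h), sum((b - a) ** 2 for a, b in zip(h, h[1:])))

def hfs_ranking_vector(hfes, weights):
    """Eq. (30): weighted sum of the HFE-level ranking vectors."""
    vecs = [ranking_vector(h) for h in hfes]
    return tuple(sum(w * v[k] for w, v in zip(weights, vecs)) for k in (0, 1))

w = (0.2, 0.3, 0.15, 0.35)
table3 = {
    "Y1": [[0.2, 0.4, 0.7], [0.2, 0.6, 0.8],
           [0.2, 0.3, 0.6, 0.7, 0.9], [0.3, 0.4, 0.5, 0.7, 0.8]],
    "Y4": [[0.3, 0.5, 0.6], [0.2, 0.4], [0.5, 0.6, 0.7], [0.8, 0.9]],
}
for name, rows in table3.items():
    mean, nu = hfs_ranking_vector(rows, w)
    print(name, (round(mean, 4), round(nu, 4)))
# Y1 -> (0.5167, 0.133) and Y4 -> (0.5708, 0.0285), as above.
```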

Thus

R(HF SY3 ) ≤lex R(HF SY1 ) ≤lex R(HF SY2 ) ≤lex R(HF SY4 ), which implies that

Y3 ≤ Y1 ≤ Y2 ≤ Y4.

It is noteworthy that the above priority of the alternatives is also obtained by applying both methods proposed by Chen et al. [3] and Liao et al. [12], provided that they are extended to algorithms for HFSs in the same manner given in Definition 3.4, where the mean of a HFS is defined by the mean of its HFEs' means, and the deviation of a HFS is defined by the mean of its HFEs' deviations. Putting this assumption together with Chen et al.'s measuring approach gives rise to

Scxx(HFS_Y1) = 0.2 × Scxx(h_Y1(G1)) + 0.3 × Scxx(h_Y1(G2)) + 0.15 × Scxx(h_Y1(G3)) + 0.35 × Scxx(h_Y1(G4))
= 0.2 × 0.4333 + 0.3 × 0.5333 + 0.15 × 0.54 + 0.35 × 0.54 = 0.5167,

and

Scxx (HF SY2 ) = 0.5275,

Scxx (HF SY3 ) = 0.4937,

Scxx (HF SY4 ) = 0.5708.

Therefore

Scxx (HF SY3 ) < Scxx (HF SY1 ) < Scxx (HF SY2 ) < Scxx (HF SY4 ), which means that

Y3 ≤ Y1 ≤ Y2 ≤ Y4 . Once again putting the latter assumption together with Liao et al.’s measuring approach gives rise to

Slxx (HF SY1 ) = 0.5167,

Slxx (HF SY2 ) = 0.5275,

Slxx (HF SY3 ) = 0.4937,

Slxx (HF SY4 ) = 0.5708,

such that

Slxx (HF SY3 ) < Slxx (HF SY1 ) < Slxx (HF SY2 ) < Slxx (HF SY4 ), which means that

Y3 ≤ Y1 ≤ Y2 ≤ Y4 . Farhadinia [7] has already reviewed some existing HFS ranking methods whose ranking results are obtained directly from the HFS ranking values or by using aggregation operators. These results are represented in the first 10 rows of Table 4. The remaining rows are devoted to the results of Xu and Xia’s method [20], Chen et al.’s method [3], Liao et al.’s method [12] and the proposed HFS lexicographical ranking method.

Table 4. Ranking function values of the HFSs and the rankings of alternatives.

Method | Y1 | Y2 | Y3 | Y4 | Ranking
GHFWA1 (via Sxix) | 0.5634 | 0.6009 | 0.5178 | 0.6524 | Y4 > Y2 > Y1 > Y3
GHFWA2 (via Sxix) | 0.5847 | 0.6278 | 0.5337 | 0.6781 | Y4 > Y2 > Y1 > Y3
GHFWA5 (via Sxix) | 0.6324 | 0.6807 | 0.5723 | 0.7314 | Y4 > Y2 > Y1 > Y3
GHFWA10 (via Sxix) | 0.6730 | 0.7235 | 0.6087 | 0.7745 | Y4 > Y2 > Y1 > Y3
GHFWA20 (via Sxix) | 0.7058 | 0.7576 | 0.6410 | 0.8077 | Y4 > Y2 > Y1 > Y3
GHFWG1 (via Sxix) | 0.4783 | 0.4625 | 0.4661 | 0.5130 | Y4 > Y1 > Y3 > Y2
GHFWG2 (via Sxix) | 0.4546 | 0.4295 | 0.4526 | 0.4755 | Y4 > Y1 > Y3 > Y2
GHFWG5 (via Sxix) | 0.4011 | 0.3706 | 0.4170 | 0.4082 | Y3 > Y4 > Y1 > Y2
GHFWG10 (via Sxix) | 0.3564 | 0.3264 | 0.3809 | 0.3609 | Y3 > Y4 > Y1 > Y2
GHFWG20 (via Sxix) | 0.3221 | 0.2919 | 0.3507 | 0.3266 | Y3 > Y4 > Y1 > Y2
GHFWA1 (via SNia) | 0.9018 | 0.9101 | 0.8921 | 0.9528 | Y4 > Y2 > Y1 > Y3
S_WAM^AM | 0.5167 | 0.5275 | 0.4937 | 0.5708 | Y4 > Y2 > Y1 > Y3
S_WAM^GM | 0.4618 | 0.4845 | 0.4575 | 0.5606 | Y4 > Y2 > Y1 > Y3
S_WAM^Min | 0.2350 | 0.2900 | 0.2350 | 0.4750 | Y4 > Y2 > Y1 > Y3
S_WAM^Max | 0.7950 | 0.7800 | 0.6850 | 0.6600 | Y1 > Y2 > Y3 > Y4
Xu and Xia's method (via S_xux^{-d_hnh}) | 0.5099 | 0.4725 | 0.5062 | 0.4291 | Y4 > Y2 > Y3 > Y1
Xu and Xia's method (via S_xux^{-d_hne}) | 0.5024 | 0.5366 | 0.5456 | 0.5101 | Y1 > Y4 > Y2 > Y3
Chen et al.'s method (via Scxx) | 0.5167 | 0.5275 | 0.4937 | 0.5708 | Y4 > Y2 > Y1 > Y3
Liao et al.'s method (via Slxx) | 0.5167 | 0.5275 | 0.4937 | 0.5708 | Y4 > Y2 > Y1 > Y3
The HFS lexicographical ranking | (0.5167, 0.133) | (0.5275, 0.072) | (0.4937, 0.155) | (0.5708, 0.0285) | Y4 > Y2 > Y1 > Y3

In Table 4, GHFWA_λ and GHFWG_λ are the aggregation operators used in [19] to obtain the hesitant fuzzy elements h_{Y_k} (k = 1, 2, 3, 4), given respectively by

$$h_{Y_k} := \mathrm{GHFWA}_{\lambda}\big(h_{Y_k}(G_1), \ldots, h_{Y_k}(G_4)\big) = \Big(\bigoplus_{i=1}^{4}\big(w_i\, h_{Y_k}^{\lambda}(G_i)\big)\Big)^{\frac{1}{\lambda}} = \bigcup_{\gamma_1 \in h_{Y_k}(G_1), \ldots, \gamma_4 \in h_{Y_k}(G_4)} \Big\{\Big(1 - \prod_{i=1}^{4}\big(1 - \gamma_i^{\lambda}\big)^{w_i}\Big)^{\frac{1}{\lambda}}\Big\},$$

$$h_{Y_k} := \mathrm{GHFWG}_{\lambda}\big(h_{Y_k}(G_1), \ldots, h_{Y_k}(G_4)\big) = \frac{1}{\lambda}\Big(\bigotimes_{i=1}^{4}\big(\lambda\, h_{Y_k}(G_i)\big)^{w_i}\Big) = \bigcup_{\gamma_1 \in h_{Y_k}(G_1), \ldots, \gamma_4 \in h_{Y_k}(G_4)} \Big\{1 - \Big(1 - \prod_{i=1}^{4}\big(1 - (1 - \gamma_i)^{\lambda}\big)^{w_i}\Big)^{\frac{1}{\lambda}}\Big\},$$

where $w_i \geq 0$ (i = 1, …, n) are the weights with $\sum_{i=1}^{n} w_i = 1$. The HFS score functions $S^{AM}_{WAM}$, $S^{GM}_{WAM}$, $S^{Min}_{WAM}$ and $S^{Max}_{WAM}$ are given by (see [7])

$$S^{AM}_{WAM}(H) = \sum_{i=1}^{n} w_i \Bigg(\frac{1}{|h(x_i)|}\sum_{j=1}^{|h(x_i)|} h^{(j)}(x_i)\Bigg), \qquad S^{GM}_{WAM}(H) = \sum_{i=1}^{n} w_i \Bigg(\prod_{j=1}^{|h(x_i)|} h^{(j)}(x_i)\Bigg)^{\frac{1}{|h(x_i)|}},$$

$$S^{Min}_{WAM}(H) = \sum_{i=1}^{n} w_i \min\big\{h^{(1)}(x_i), h^{(2)}(x_i), \ldots, h^{(|h(x_i)|)}(x_i)\big\}, \qquad S^{Max}_{WAM}(H) = \sum_{i=1}^{n} w_i \max\big\{h^{(1)}(x_i), h^{(2)}(x_i), \ldots, h^{(|h(x_i)|)}(x_i)\big\},$$

and the distance-based ranking functions $S^{-d_{hnh}}_{xux}$ and $S^{-d_{hne}}_{xux}$ are of the form (see [20])

$$S^{-d_{hnh}}_{xux}(H) = \sum_{i=1}^{n} w_i\, S^{-d_{hnh}}_{xux}(h(x_i)), \qquad S^{-d_{hne}}_{xux}(H) = \sum_{i=1}^{n} w_i\, S^{-d_{hne}}_{xux}(h(x_i)).$$
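For readers who wish to re-derive the first block of Table 4, the GHFWA operator above is a single set comprehension over all combinations of membership degrees. A sketch under these assumptions (the helper name ghfwa is ours; math.prod requires Python 3.8+, and the full Cartesian product can grow quickly with the number of attributes):

```python
from itertools import product
from math import prod

def ghfwa(hfes, weights, lam=1.0):
    """GHFWA_lambda: aggregate several HFEs into a single HFE."""
    return {
        (1 - prod((1 - g ** lam) ** w for g, w in zip(combo, weights))) ** (1 / lam)
        for combo in product(*hfes)
    }

# Aggregating the four HFEs of Y4 from Table 3 with lambda = 1,
# then scoring the result with the arithmetic-mean score S_xix:
h_y4 = ghfwa([[0.3, 0.5, 0.6], [0.2, 0.4], [0.5, 0.6, 0.7], [0.8, 0.9]],
             (0.2, 0.3, 0.15, 0.35))
print(round(sum(h_y4) / len(h_y4), 4))  # approximately 0.6524, cf. the GHFWA1 row of Table 4
```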


5. Conclusion

In this contribution, a novel HFS ranking technique was introduced which can be used for handling multi-attribute decision making problems with HFS information. From the perspective considered here on the HFS representation, we find that the proposed ranking technique, referred to as the HFS lexicographical ranking method, has stronger discrimination power and is easier to implement than previous ones. It is noteworthy that the lexicographical ranking method is invariant with respect to multiple occurrences of any element of a HFE. Moreover, it relieves the user of the laborious duty of computing more complex deviations by introducing the successive deviation. This implies that the HFS lexicographical ranking method can provide a more useful technique than previous ones to efficiently help the decision maker.

Sometimes in a consensus model for group decision making (GDM) problems [5,17], experts may belong to distinct research areas and may therefore be hesitant in expressing their preferences. In this situation, they need to express their preferences using HFSs, which can be discussed in a future study where the proposed ranking method can be used to form similarity matrices for pairs of alternatives; these are essential in computing similarity matrices for all alternatives and also for relations during a consensus model. Future work on HFS ranking methods may also address a statistical comparative study of the effect of their application in GDM by using the nonparametric Wilcoxon test, as in [5].

References

[1] K. Atanassov, Intuitionistic Fuzzy Sets, Theory and Applications, Physica-Verlag, Heidelberg/New York, 1999.
[2] C. Calude, Information and Randomness: An Algorithmic Perspective, EATCS Monographs in Theoretical Computer Science, Springer-Verlag, Berlin, 1994.
[3] N.A. Chen, Z.S. Xu, M.M. Xia, The ELECTRE I multi-criteria decision-making method based on hesitant fuzzy sets, Int. J. Inf. Technol. Decis. Mak. 13 (2014) 1–37.
[4] D. Dubois, H. Prade, Fundamentals of Fuzzy Sets, The Handbooks of Fuzzy Sets, vol. 7, Springer, Berlin, Germany, 2000.
[5] F. Chiclana, J.M. Tapia Garcia, M.J. Del Moral, E. Herrera-Viedma, A statistical comparative study of different similarity measures of consensus in group decision making, Inf. Sci. 221 (2013) 110–123.
[6] B. Farhadinia, A novel method of ranking hesitant fuzzy values for multiple attribute decision-making problems, Int. J. Intell. Syst. 28 (2013) 752–767.
[7] B. Farhadinia, A series of score functions for hesitant fuzzy sets, Inf. Sci. 277 (2014) 102–110.
[8] B. Farhadinia, Information measures for hesitant fuzzy sets and interval-valued hesitant fuzzy sets, Inf. Sci. 240 (2013) 129–144.
[9] B. Farhadinia, Distance and similarity measures for higher order hesitant fuzzy sets, Knowledge-Based Syst. 55 (2014) 43–48.
[10] D.F. Li, A ratio ranking method of triangular intuitionistic fuzzy numbers and its application to MADM problems, Comput. Math. Appl. 60 (2010) 1557–1570.
[11] D.F. Li, J.X. Nan, M.J. Zhang, A ranking method of triangular intuitionistic fuzzy numbers and application to decision making, Int. J. Comput. Intell. Syst. 3 (2010) 522–530.
[12] H.C. Liao, Z.S. Xu, M.M. Xia, Multiplicative consistency of hesitant fuzzy preference relation and its application in group decision making, Int. J. Inf. Technol. Decis. Mak. 13 (2014) 47–76.
[13] H.C. Liao, Z.S. Xu, X.J. Zeng, Distance and similarity measures for hesitant fuzzy linguistic term sets and their application in multi-criteria decision making, Inf. Sci. 271 (2014) 125–142.
[14] G. Qian, H. Wang, X. Feng, Generalized hesitant fuzzy sets and their application in decision support system, Knowledge-Based Syst. 37 (2013) 357–365.
[15] R.M. Rodriguez, L. Martinez, F. Herrera, Hesitant fuzzy linguistic term sets for decision making, IEEE Trans. Fuzzy Syst. 20 (2012) 109–119.
[16] V. Torra, Hesitant fuzzy sets, Int. J. Intell. Syst. 25 (2010) 529–539.
[17] E. Herrera-Viedma, F.J. Cabrerizo, J. Kacprzyk, W. Pedrycz, A review of soft consensus models in a fuzzy environment, Inf. Fusion 17 (2014) 4–13.
[18] J.Q. Wang, D.D. Wang, H.Y. Zhang, X.H. Chen, Multi-criteria outranking approach with hesitant fuzzy sets, OR Spectrum 36 (2014) 1001–1019.
[19] M.M. Xia, Z.S. Xu, Hesitant fuzzy information aggregation in decision making, Int. J. Approx. Reason. 52 (2011) 395–407.
[20] Z.S. Xu, M.M. Xia, Distance and similarity measures for hesitant fuzzy sets, Inf. Sci. 181 (2011) 2128–2138.
[21] Z.S. Xu, M.M. Xia, On distance and correlation measures of hesitant fuzzy information, Int. J. Intell. Syst. 26 (2011) 410–425.
[22] D.J. Yu, D.F. Li, Dual hesitant fuzzy multi-criteria decision making and its application to teaching quality assessment, J. Intell. Fuzzy Syst. 27 (2014) 1679–1688.
[23] L.A. Zadeh, Fuzzy sets, Inf. Control 8 (1965) 338–353.
[24] B. Zhu, Z.S. Xu, M.M. Xia, Dual hesitant fuzzy sets, J. Appl. Math. 2012 (2012) 1–13.