Some Muirhead Mean Operators for Probabilistic Linguistic Term Sets and Their Applications to Multiple Attribute Decision-Making

Accepted Manuscript
Authors: Peide Liu, Fei Teng
PII: S1568-4946(18)30149-2
DOI: https://doi.org/10.1016/j.asoc.2018.03.027
Reference: ASOC 4776
To appear in: Applied Soft Computing
Received date: 22-8-2017
Revised date: 31-1-2018
Accepted date: 18-3-2018

Please cite this article as: P. Liu, F. Teng, Some Muirhead Mean Operators for Probabilistic Linguistic Term Sets and Their Applications to Multiple Attribute Decision-Making, Applied Soft Computing (2018), https://doi.org/10.1016/j.asoc.2018.03.027

Some Muirhead Mean Operators for Probabilistic Linguistic Term Sets and Their Applications to Multiple Attribute Decision-Making

Peide Liu a,*, Fei Teng a

a School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, Shandong, China
* Corresponding author. E-mail: [email protected]; Tel: +86-531-82222188


Abstract: Archimedean t-conorms and t-norms (ATT) consist of families of t-conorms (TCs) and t-norms (TNs), from which general operational laws for fuzzy sets (FSs) can be developed. Linguistic scale functions (LSFs) generate different semantic values for linguistic terms (LTs) according to the usage environment. Muirhead mean (MM) aggregation operators have the prominent advantage of capturing the interrelationships among any number of arguments. It is therefore natural to combine MM operators with probabilistic linguistic term sets (PLTSs) on the basis of ATT and LSFs. In this paper, we first propose general operational laws for PLTSs based on ATT and LSFs. We then develop the probabilistic linguistic Archimedean MM (PLAMM) operator, the probabilistic linguistic Archimedean weighted MM (PLAWMM) operator, the probabilistic linguistic Archimedean dual MM (PLADMM) operator, and the probabilistic linguistic Archimedean dual weighted MM (PLADWMM) operator, and explore their special cases. Moreover, we provide two multiple attribute decision-making (MADM) methods built on the proposed operators. Finally, numerical examples are given to validate the proposed methods, and comparisons with existing methods demonstrate their effectiveness.

Keywords: Fuzzy sets; Probabilistic linguistic term sets; Muirhead mean; ATT; Linguistic scale functions

1. Introduction


Decision-making is a frequent behavior in daily life [1-6]. Owing to the fuzziness of human thinking and the complexity of objective things, people often describe evaluation information by LTs instead of quantitative values in fuzzy decision-making. For example, the comfort or safety of a car can be evaluated by LTs such as "good", "poor", or "medium". Therefore, research on various linguistic models has attracted widespread attention from researchers. Zadeh [7] proposed the fuzzy linguistic approach to model qualitative evaluation information. However, this approach has shortcomings in modeling and computing evaluation information, and many linguistic models have since been developed, including the 2-tuple linguistic model [8], the virtual linguistic model [9], the type-2 representation model [10], and so on. These linguistic models share a common feature: they use only a single LT to express evaluation information. However, decision makers (DMs) may hesitate among several terms when they assess an object. To deal with this situation, Rodriguez et al. [11] developed hesitant fuzzy LT sets (HFLTSs), and extensive research on HFLTSs has produced many achievements [12-15]. Because HFLTSs neglect the importance degree of each possible LT, they implicitly assume that all possible LTs have the same weight, which is unrealistic in real decisions. In practice, DMs have different preferences over the possible LTs, so the LTs should carry different weights. Based on this, Pang et al. [16] first proposed probabilistic LT sets (PLTSs), which are composed of possible LTs and their corresponding probabilities. A PLTS can be regarded as an extension of an HFLTS that considers the weights of the possible LTs, so it can obtain more reasonable results [16-19].

Aggregation operators are important tools for aggregating evaluation information under various fuzzy environments. Although both aggregation operators and classical decision-making methods can rank finite alternatives, aggregation operators additionally produce comprehensive values before ranking, which makes operator-based methods preferable to classical ones. Therefore, aggregation operators have attracted increasing attention in decision-making. Under the probabilistic linguistic environment, however, many researchers have mainly focused on extensions of traditional decision-making

methods, such as the probabilistic linguistic VIKOR method [20], the probabilistic linguistic TOPSIS method [16,18], the probabilistic linguistic PROMETHEE method [21], and so on. Up to now, there are only two aggregation operators for probabilistic linguistic information: the probabilistic linguistic weighted average (PLWA) operator and the probabilistic linguistic weighted geometric (PLWG) operator [16]. Both operators assume that all input arguments are independent in MADM problems, but this assumption rarely holds in practice; the input arguments are usually interrelated. So it is necessary to combine PLTSs with classical aggregation operators that take the interrelationships of input arguments into account, such as the Bonferroni mean (BM) operator [22-23], the Heronian mean (HM) operator [24-25], and the Maclaurin symmetric mean (MSM) operator [26-27]. The BM and HM operators only consider the interrelationship between two arguments and cannot handle interrelationships among three or more arguments. However, in practical decision-making there are many situations where three or more input arguments interrelate with each other in MADM problems, so more general and flexible operators are needed to deal with interrelationships among any number of arguments. The Muirhead mean (MM) operator [28] captures the interrelationship among any number of input arguments through a parameter vector, and it reduces to several special cases [29-31], including the arithmetic operator, the geometric operator, the BM operator, and the MSM operator. Therefore, we use MM aggregation operators to aggregate probabilistic linguistic information in this paper.

In aggregation operators, it is important to select appropriate operational laws. Generally, the Algebraic TC and TN are frequently used to obtain the intersections and unions of fuzzy evaluation information, and they have been studied extensively. Pang et al. [16] first provided operational laws for PLTSs; however, these laws cause a loss of probability information. Gou et al. [18] developed novel operational laws for PLTSs based on the Algebraic TC and TN. At the same time, ATT contains various TCs and TNs, such as those of the Algebraic, Einstein, Hamacher, Frank, and Dombi families [32-34]. Therefore, it is worthwhile to provide operational laws for PLTSs on the basis of ATT, which are more general than the existing ones. In addition, the deviation between the subscripts of adjacent LTs may increase or decrease along the linguistic evaluation scale, depending on the application scenario. LSFs can assign different semantic values to LTs under different situations [12, 35]. Moreover, because they convert LTs into real numbers in [0, 1], these functions prevent the operational results from exceeding the bounds of the linguistic term set (LTS). Therefore, we further incorporate LSFs into the ATT-based operational laws for PLTSs in this paper.


Based on the above analysis, it is necessary to further develop the theory of PLTSs because they reasonably describe human beings' subjective cognition. In addition, the MM operator provides a fairly flexible way to fuse evaluation values, which makes it well suited to decision-making problems; LSFs can express different semantic values, and ATT gives a general operational expression. It is therefore valuable to process MADM problems by combining PLTSs with MM aggregation operators based on LSFs and ATT. The purposes of this paper are to:

(1) Provide general operational laws for PLTSs that adapt to actual semantic situations based on ATT and LSFs, overcoming the existing shortcomings. The new operational laws (i) keep the probability information; (ii) keep the possible LTs within the bounds of the LTS; and (iii) make the operations more flexible.

(2) Extend the MM operator to PLTSs, develop the PLAMM, PLAWMM, PLADMM, and PLADWMM operators, and further analyze their special cases. The proposed operators are suitable for analyzing probabilistic linguistic information whether or not the evaluation values are correlated with each other, and they generalize some existing operators (see their special cases in Section 3).

(3) Give two MADM methods based on the PLAWMM and PLADWMM operators with PLTSs.

The remainder of this paper is organized as follows. Section 2 introduces some basic concepts and related theory. In Section 3, we propose novel operational laws for PLTSs by combining ATT with LSFs and introduce some special cases of these laws; we then develop some probabilistic linguistic MM aggregation operators and analyze their special cases. In Section 4, based on the proposed operators, we give two methods to solve MADM problems with PLTSs. Section 5 uses a numerical example to demonstrate the validity of the proposed methods and compares them with existing methods. Conclusions are given in Section 6.

2. Preliminaries

2.1 LSFs

LSFs are an important tool for allocating different semantic values to LTs: they manage information efficiently and express semantics flexibly in different situations, generating deterministic results in accordance with various semantics. For a LT $s_k$ in a LTS $S$, where $S = \{s_k \mid k = 0,1,\ldots,t\}$, the relationship between $s_k$ and its subscript $k$ is strictly monotonically increasing. The LSFs are introduced as follows:

Definition 1 [35]. Let $S = \{s_k \mid k = 0,1,\ldots,t\}$ be a LTS with $t$ an even number, and let $\theta_k \in [0,1]$ be a numeric value. Then the LSF $f$ that maps $s_k$ to $\theta_k$ is defined as

$$f: s_k \to \theta_k \quad (k = 0,1,\ldots,t),$$

where $0 \le \theta_0 < \theta_1 < \cdots < \theta_t$.

It is obvious that the function $f$ is strictly monotonically increasing with respect to the subscript $k$. The value $\theta_k$ $(k = 0,1,\ldots,t)$ expresses the preferences of the DMs for selecting the LT $s_k \in S$; therefore, the function $f$ reflects the semantics of the LTs. The following functions can be regarded as LSFs.

(1) $f_1(s_k) = \theta_k = \dfrac{k}{t}$ $(k = 0,1,\ldots,t)$,  (1)

where $\theta_k \in [0,1]$; the evaluation scale of the LTs is apportioned equally [36].

(2) $f_2(s_k) = \theta_k = \begin{cases} \dfrac{\alpha^{t/2} - \alpha^{t/2-k}}{2\alpha^{t/2} - 2}, & k = 0,1,\ldots,\dfrac{t}{2}, \\[2mm] \dfrac{\alpha^{t/2} + \alpha^{k-t/2} - 2}{2\alpha^{t/2} - 2}, & k = \dfrac{t}{2}+1, \dfrac{t}{2}+2, \ldots, t. \end{cases}$  (2)

The value of $\alpha$ can be determined by a subjective method defined as $\alpha = \sqrt[l]{m}$ [37], where $l$ represents the scale level and $m$ is the importance ratio of indicator $A$ to indicator $B$. In general, $m = 9$ is the maximum value of the importance ratio, and if $l = 7$, then $\alpha = \sqrt[7]{9} \approx 1.37$.

(3) $f_3(s_k) = \theta_k = \begin{cases} \dfrac{(t/2)^a - (t/2-k)^a}{2(t/2)^a}, & k = 0,1,\ldots,\dfrac{t}{2}, \\[2mm] \dfrac{(t/2)^b + (k-t/2)^b}{2(t/2)^b}, & k = \dfrac{t}{2}+1, \dfrac{t}{2}+2, \ldots, t, \end{cases}$  (3)

where $a, b \in (0,1]$. If $a = b = 1$, then $\theta_k = \dfrac{k}{t}$. This LSF has the characteristic that the absolute deviation between the semantic values of adjacent LTs decreases from the middle of the LTS toward both ends [12].

2.2 Archimedean TC and TN

The Archimedean TC $S$ and TN $T$ can be produced by their additive generators [33]. The Archimedean TN is $T(x,y) = g^{-1}(g(x) + g(y))$, where $g(x)$ is a monotonically decreasing function satisfying $g(x): [0,1] \to R^+$ and $g^{-1}(x): R^+ \to [0,1]$, with $\lim_{x \to \infty} g^{-1}(x) = 0$ and $g^{-1}(0) = 1$. Moreover, the Archimedean TC is $S(x,y) = h^{-1}(h(x) + h(y))$, where $h(x)$ is a monotonically increasing function satisfying $h(x): [0,1] \to R^+$ and $h^{-1}(x): R^+ \to [0,1]$, with $\lim_{x \to \infty} h^{-1}(x) = 1$ and $h^{-1}(0) = 0$.

Furthermore, according to [33], we have $h(x) = g(1-x)$. Some known Archimedean TCs $S$ and TNs $T$ are shown in Table 1 and Table 2.

Table 1. Some Archimedean TNs and their additive generators

| Name | TN | Additive generator |
|---|---|---|
| Algebraic $T_A$ | $T_A(x,y) = xy$ | $g(x) = -\ln x$ |
| Einstein $T_E$ | $T_E(x,y) = \dfrac{xy}{1+(1-x)(1-y)}$ | $g(x) = \ln\dfrac{2-x}{x}$ |
| Hamacher $T_{H,\delta}$, $\delta > 0$ | $T_{H,\delta}(x,y) = \dfrac{xy}{\delta+(1-\delta)(x+y-xy)}$ | $g(x) = \ln\dfrac{\delta+(1-\delta)x}{x}$ |
| Frank $T_{F,\tau}$ | $T_{F,\tau}(x,y) = \log_\tau\!\left(1+\dfrac{(\tau^x-1)(\tau^y-1)}{\tau-1}\right)$ | $\tau \ne 1$: $g(x) = -\ln\dfrac{\tau^x-1}{\tau-1}$; $\tau = 1$: $g(x) = -\ln x$ |

Table 2. Some Archimedean TCs and their additive generators

| Name | TC | Additive generator |
|---|---|---|
| Algebraic $S_A$ | $S_A(x,y) = x+y-xy$ | $h(x) = -\ln(1-x)$ |
| Einstein $S_E$ | $S_E(x,y) = \dfrac{x+y}{1+xy}$ | $h(x) = \ln\dfrac{1+x}{1-x}$ |
| Hamacher $S_{H,\delta}$, $\delta > 0$ | $S_{H,\delta}(x,y) = \dfrac{x+y-xy-(1-\delta)xy}{1-(1-\delta)xy}$ | $h(x) = \ln\dfrac{\delta+(1-\delta)(1-x)}{1-x}$ |
| Frank $S_{F,\tau}$ | $S_{F,\tau}(x,y) = 1-\log_\tau\!\left(1+\dfrac{(\tau^{1-x}-1)(\tau^{1-y}-1)}{\tau-1}\right)$ | $\tau \ne 1$: $h(x) = -\ln\dfrac{\tau^{1-x}-1}{\tau-1}$; $\tau = 1$: $h(x) = -\ln(1-x)$ |
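The generator construction can be sanity-checked numerically. The sketch below is an illustrative Python fragment, not part of the original paper, and the function names are ours; it rebuilds the Einstein TN from its additive generator in Table 1 and compares it with the closed form.

```python
import math

# Einstein additive generator from Table 1: g(x) = ln((2 - x) / x),
# monotonically decreasing on (0, 1].
def g_einstein(x):
    return math.log((2 - x) / x)

# Inverse generator: solving y = ln((2 - x) / x) for x gives 2 / (e^y + 1).
def g_einstein_inv(y):
    return 2 / (math.exp(y) + 1)

# Closed form of the Einstein TN from Table 1.
def t_einstein(x, y):
    return x * y / (1 + (1 - x) * (1 - y))

# T(x, y) = g^{-1}(g(x) + g(y)) must reproduce the closed form on (0, 1].
for x in (0.2, 0.5, 0.9):
    for y in (0.3, 0.8):
        via_generator = g_einstein_inv(g_einstein(x) + g_einstein(y))
        assert abs(via_generator - t_einstein(x, y)) < 1e-12
```

The same check works for the t-conorms in Table 2, using $h(x) = g(1-x)$ and its inverse.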

2.3 Muirhead mean (MM)

The MM is an effective aggregation method that makes full use of all the input arguments and considers the interrelationships among multiple input arguments. The MM was presented as follows:

Definition 2 [28]. The MM operator is defined as

$$MM^P(a_1, a_2, \ldots, a_n) = \left( \frac{1}{n!} \sum_{\vartheta \in S_n} \prod_{r=1}^{n} a_{\vartheta(r)}^{P_r} \right)^{1/\sum_{r=1}^{n} P_r}, \quad (4)$$


where $P = (P_1, P_2, \ldots, P_n) \in R^n$ is a parameter vector reflecting the risk appetite, $S_n$ is the group of all permutations of $(1, 2, \ldots, n)$, and $(\vartheta(1), \vartheta(2), \ldots, \vartheta(n))$ is one element of $S_n$.

It is easy to find the special cases of the MM operator, shown as follows:

(1) When $P = (1, 0, \ldots, 0)$, $MM^{(1,0,\ldots,0)}(a_1, \ldots, a_n) = \dfrac{1}{n}\sum_{r=1}^{n} a_r$, which is the arithmetic average operator.

(2) When $P = (\lambda, 0, \ldots, 0)$, $MM^{(\lambda,0,\ldots,0)}(a_1, \ldots, a_n) = \left(\dfrac{1}{n}\sum_{r=1}^{n} a_r^{\lambda}\right)^{1/\lambda}$, which is the generalized arithmetic average operator.

(3) When $P = (1, 1, 0, \ldots, 0)$, $MM^{(1,1,0,\ldots,0)}(a_1, \ldots, a_n) = \left(\dfrac{1}{n(n-1)}\sum_{\substack{r,\varsigma=1 \\ r \ne \varsigma}}^{n} a_r a_\varsigma\right)^{1/2}$, which is the BM operator [23].

(4) When $P = (\underbrace{1,1,\ldots,1}_{\kappa}, \underbrace{0,0,\ldots,0}_{n-\kappa})$, $MM(a_1, \ldots, a_n) = \left(\dfrac{1}{C_n^\kappa}\sum_{1 \le r_1 < \cdots < r_\kappa \le n} \prod_{j=1}^{\kappa} a_{r_j}\right)^{1/\kappa}$, which is the MSM operator [26].

(5) When $P = (1/n, 1/n, \ldots, 1/n)$, $MM^{(1/n,\ldots,1/n)}(a_1, \ldots, a_n) = \prod_{r=1}^{n} a_r^{1/n}$, which is the geometric average operator.


Based on the above, we find that the MM operator can be transformed into several existing operators while capturing the interrelationship among any number of arguments.

Definition 3 [30]. The dual MM (DMM) operator is defined as

$$DMM^P(a_1, a_2, \ldots, a_n) = \frac{1}{\sum_{r=1}^{n} P_r} \prod_{\vartheta \in S_n} \left( \sum_{r=1}^{n} P_r a_{\vartheta(r)} \right)^{1/n!}, \quad (5)$$

where $P = (P_1, P_2, \ldots, P_n) \in R^n$ is a parameter vector reflecting the risk appetite, $S_n$ is the group of all permutations of $(1, 2, \ldots, n)$, and $(\vartheta(1), \vartheta(2), \ldots, \vartheta(n))$ is one element of $S_n$.

2.4 PLTS


Definition 4 [16]. Let $S = \{s_k \mid k = 0,1,\ldots,t\}$ be a LTS. Then a PLTS is defined as

$$L(p) = \left\{ L^{(i)}(p^{(i)}) \;\middle|\; L^{(i)} \in S,\; p^{(i)} \ge 0,\; i = 1,2,\ldots,\#L(p),\; \sum_{i=1}^{\#L(p)} p^{(i)} \le 1 \right\},$$

where $L^{(i)}(p^{(i)})$ is the LT $L^{(i)}$ associated with the probability $p^{(i)}$, and $\#L(p)$ is the number of different LTs in $L(p)$.


Definition 5 [16]. Let $L(p) = \{L^{(i)}(p^{(i)}) \mid i = 1,2,\ldots,\#L(p)\}$ be a PLTS, and let $r^{(i)}$ be the subscript of the LT $L^{(i)}$. Then the score value (SV) of $L(p)$ is

$$SV(L(p)) = s_\alpha, \quad (6)$$

where $\alpha = \sum_{i=1}^{\#L(p)} r^{(i)} p^{(i)} \Big/ \sum_{i=1}^{\#L(p)} p^{(i)}$.

For two PLTSs $L_1(p_1)$ and $L_2(p_2)$, if $SV(L_1(p_1)) > SV(L_2(p_2))$, then $L_1(p_1)$ is superior to $L_2(p_2)$, denoted by $L_1(p_1) \succ L_2(p_2)$; if $SV(L_1(p_1)) < SV(L_2(p_2))$, then $L_1(p_1)$ is inferior to $L_2(p_2)$, denoted by $L_1(p_1) \prec L_2(p_2)$. However, if $SV(L_1(p_1)) = SV(L_2(p_2))$, we cannot distinguish the two PLTSs. The deviation value (DV) of a PLTS is then used to distinguish them, as follows:

Definition 6 [16]. Let $L(p) = \{L^{(i)}(p^{(i)}) \mid i = 1,2,\ldots,\#L(p)\}$ be a PLTS, let $r^{(i)}$ be the subscript of the LT $L^{(i)}$, and let $SV(L(p)) = s_\alpha$ with $\alpha = \sum_{i=1}^{\#L(p)} r^{(i)} p^{(i)} \big/ \sum_{i=1}^{\#L(p)} p^{(i)}$. Then the DV of $L(p)$ is

$$DV(L(p)) = \left( \sum_{i=1}^{\#L(p)} \left( p^{(i)} \left( r^{(i)} - \alpha \right) \right)^2 \right)^{1/2} \Big/ \sum_{i=1}^{\#L(p)} p^{(i)}. \quad (7)$$

For two PLTSs $L_1(p_1)$ and $L_2(p_2)$ with $SV(L_1(p_1)) = SV(L_2(p_2))$, if $DV(L_1(p_1)) > DV(L_2(p_2))$, then $L_1(p_1) \prec L_2(p_2)$; if $DV(L_1(p_1)) = DV(L_2(p_2))$, then $L_1(p_1)$ is indifferent to $L_2(p_2)$, denoted by $L_1(p_1) \approx L_2(p_2)$.
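Definitions 5 and 6 are easy to operationalize. In the following Python sketch (our own illustration, with a PLTS represented simply as a list of (subscript, probability) pairs), two PLTSs tie on the score subscript $\alpha$ and are separated by the deviation value:

```python
from math import isclose

def score_alpha(plts):
    """Subscript alpha of SV(L(p)) = s_alpha (Definition 5)."""
    return sum(r * p for r, p in plts) / sum(p for _, p in plts)

def deviation(plts):
    """DV(L(p)) (Definition 6): sqrt(sum_i (p_i*(r_i - alpha))^2) / sum_i p_i."""
    a = score_alpha(plts)
    return sum((p * (r - a)) ** 2 for r, p in plts) ** 0.5 / sum(p for _, p in plts)

L1 = [(4, 0.4), (5, 0.6)]   # {s4(0.4), s5(0.6)}
L2 = [(3, 0.2), (5, 0.8)]   # {s3(0.2), s5(0.8)}

assert isclose(score_alpha(L1), 4.6) and isclose(score_alpha(L2), 4.6)  # tie on score...
assert deviation(L2) > deviation(L1)  # ...broken by the DV: L2 is more dispersed
```

Following Definition 6, the larger-deviation set L2 would rank below L1 here.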


3. The probabilistic linguistic Archimedean MM aggregation operators

MM aggregation operators are classical tools for fusing multiple input arguments among which interacting relationships exist. A PLTS describes fuzzy information conveniently, since it consists of several possible LTs and their corresponding probability information. Therefore, it is necessary to extend the MM aggregation operators to accommodate PLTSs and to propose some probabilistic linguistic MM aggregation operators, namely the PLAMM, PLAWMM, PLADMM, and PLADWMM operators, which are based on the operational laws of PLTSs built on ATT and LSFs, shown as follows.

3.1 The novel operational laws for PLTSs built on ATT and LSFs


Definition 7. Let $S = \{s_k \mid k = 0,1,\ldots,t\}$ be a LTS, let $L_1(p_1)$ and $L_2(p_2)$ be any two PLTSs, and let $\lambda$ be a positive real number. Then the operational laws for PLTSs based on ATT and LSFs are obtained as follows ($i = 1,2,\ldots,\#L_1(p_1)$; $j = 1,2,\ldots,\#L_2(p_2)$):

(1) $L_1(p_1) \oplus L_2(p_2) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2)} \left\{ S\!\left(\gamma_1^{(i)}, \gamma_2^{(j)}\right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2)} \left\{ h^{-1}\!\left( h(\gamma_1^{(i)}) + h(\gamma_2^{(j)}) \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;  (8)

(2) $L_1(p_1) \otimes L_2(p_2) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2)} \left\{ T\!\left(\gamma_1^{(i)}, \gamma_2^{(j)}\right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2)} \left\{ g^{-1}\!\left( g(\gamma_1^{(i)}) + g(\gamma_2^{(j)}) \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;  (9)

(3) $\lambda L_1(p_1) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1)} \left\{ h^{-1}\!\left( \lambda h(\gamma_1^{(i)}) \right) \left( p_1^{(i)} \right) \right\} \right)$;  (10)

(4) $\left( L_1(p_1) \right)^\lambda = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1)} \left\{ g^{-1}\!\left( \lambda g(\gamma_1^{(i)}) \right) \left( p_1^{(i)} \right) \right\} \right)$.  (11)

Theorem 1. Let $L(p)$, $L_1(p_1)$, $L_2(p_2)$ and $L_3(p_3)$ be any PLTSs, and let $\eta, \eta_1, \eta_2 > 0$. Then we have:

(1) $L_1(p_1) \oplus L_2(p_2) = L_2(p_2) \oplus L_1(p_1)$;  (12)
(2) $L_1(p_1) \otimes L_2(p_2) = L_2(p_2) \otimes L_1(p_1)$;  (13)
(3) $\eta \left( L_1(p_1) \oplus L_2(p_2) \right) = \eta L_1(p_1) \oplus \eta L_2(p_2)$;  (14)
(4) $\eta_1 L(p) \oplus \eta_2 L(p) = (\eta_1 + \eta_2) L(p)$;  (15)
(5) $L(p)^{\eta_1} \otimes L(p)^{\eta_2} = L(p)^{(\eta_1 + \eta_2)}$;  (16)
(6) $L_1(p_1)^{\eta} \otimes L_2(p_2)^{\eta} = \left( L_1(p_1) \otimes L_2(p_2) \right)^{\eta}$;  (17)
(7) $L_1(p_1) \oplus \left( L_2(p_2) \oplus L_3(p_3) \right) = \left( L_1(p_1) \oplus L_2(p_2) \right) \oplus L_3(p_3)$;  (18)
(8) $L_1(p_1) \otimes \left( L_2(p_2) \otimes L_3(p_3) \right) = \left( L_1(p_1) \otimes L_2(p_2) \right) \otimes L_3(p_3)$.  (19)

Proof. (1) Eq. (12) holds directly by Eq. (8). (2) Eq. (13) holds directly by Eq. (9).

(3) For the left-hand side of Eq. (14),

$$\eta \left( L_1(p_1) \oplus L_2(p_2) \right) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2)} \left\{ h^{-1}\!\left( \eta \left( h(\gamma_1^{(i)}) + h(\gamma_2^{(j)}) \right) \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right),$$

and for the right-hand side,

$$\eta L_1(p_1) \oplus \eta L_2(p_2) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2)} \left\{ h^{-1}\!\left( \eta h(\gamma_1^{(i)}) + \eta h(\gamma_2^{(j)}) \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right).$$

The two expressions coincide, so Eq. (14) holds. Similarly, we can prove Eqs. (15)-(17).

(7) For the left-hand side of Eq. (18),

$$L_1(p_1) \oplus \left( L_2(p_2) \oplus L_3(p_3) \right) = f^{-1}\!\left( \bigcup_{\gamma_1^{(i)} \in f(L_1),\, \gamma_2^{(j)} \in f(L_2),\, \gamma_3^{(r)} \in f(L_3)} \left\{ h^{-1}\!\left( h(\gamma_1^{(i)}) + h(\gamma_2^{(j)}) + h(\gamma_3^{(r)}) \right) \left( p_1^{(i)} p_2^{(j)} p_3^{(r)} \right) \right\} \right),$$

and expanding the right-hand side $\left( L_1(p_1) \oplus L_2(p_2) \right) \oplus L_3(p_3)$ yields the same expression. Therefore, Eq. (18) holds.

Similarly, we can prove Eq. (19).

In the following, we discuss some special examples of the operational laws for PLTSs with respect to different additive generators $g(x)$. In each case, $i = 1,2,\ldots,\#L_1(p_1)$ and $j = 1,2,\ldots,\#L_2(p_2)$, and the unions run over $\gamma_1^{(i)} \in f(L_1)$ and $\gamma_2^{(j)} \in f(L_2)$.

1. If $g(x) = -\ln x$, then $g^{-1}(x) = e^{-x}$, $h(x) = -\ln(1-x)$, $h^{-1}(x) = 1 - e^{-x}$, and the operational laws for PLTSs based on the Algebraic ATT are obtained as follows:

(1) $L_1(p_1) \oplus L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \left( 1 - (1-\gamma_1^{(i)})(1-\gamma_2^{(j)}) \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(2) $L_1(p_1) \otimes L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \left( \gamma_1^{(i)} \gamma_2^{(j)} \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(3) $\lambda L_1(p_1) = f^{-1}\!\left( \bigcup \left\{ \left( 1 - (1-\gamma_1^{(i)})^\lambda \right) \left( p_1^{(i)} \right) \right\} \right)$;

(4) $\left( L_1(p_1) \right)^\lambda = f^{-1}\!\left( \bigcup \left\{ \left( (\gamma_1^{(i)})^\lambda \right) \left( p_1^{(i)} \right) \right\} \right)$.

2. If $g(x) = \ln\dfrac{2-x}{x}$, then $g^{-1}(x) = \dfrac{2}{e^x + 1}$, $h(x) = \ln\dfrac{1+x}{1-x}$, $h^{-1}(x) = \dfrac{e^x - 1}{e^x + 1}$, and the operational laws for PLTSs based on the Einstein ATT are obtained as follows:

(1) $L_1(p_1) \oplus L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \dfrac{\gamma_1^{(i)} + \gamma_2^{(j)}}{1 + \gamma_1^{(i)} \gamma_2^{(j)}} \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(2) $L_1(p_1) \otimes L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \dfrac{\gamma_1^{(i)} \gamma_2^{(j)}}{1 + (1-\gamma_1^{(i)})(1-\gamma_2^{(j)})} \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(3) $\lambda L_1(p_1) = f^{-1}\!\left( \bigcup \left\{ \dfrac{(1+\gamma_1^{(i)})^\lambda - (1-\gamma_1^{(i)})^\lambda}{(1+\gamma_1^{(i)})^\lambda + (1-\gamma_1^{(i)})^\lambda} \left( p_1^{(i)} \right) \right\} \right)$;

(4) $\left( L_1(p_1) \right)^\lambda = f^{-1}\!\left( \bigcup \left\{ \dfrac{2 (\gamma_1^{(i)})^\lambda}{(2-\gamma_1^{(i)})^\lambda + (\gamma_1^{(i)})^\lambda} \left( p_1^{(i)} \right) \right\} \right)$.

3. If $g(x) = \ln\dfrac{\delta + (1-\delta)x}{x}$, $\delta > 0$, then $g^{-1}(x) = \dfrac{\delta}{e^x + \delta - 1}$, $h(x) = \ln\dfrac{\delta + (1-\delta)(1-x)}{1-x}$, $h^{-1}(x) = \dfrac{e^x - 1}{e^x + \delta - 1}$, and the operational laws for PLTSs based on the Hamacher ATT are obtained as follows:

(1) $L_1(p_1) \oplus L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \dfrac{\gamma_1^{(i)} + \gamma_2^{(j)} - \gamma_1^{(i)}\gamma_2^{(j)} - (1-\delta)\gamma_1^{(i)}\gamma_2^{(j)}}{1 - (1-\delta)\gamma_1^{(i)}\gamma_2^{(j)}} \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(2) $L_1(p_1) \otimes L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \dfrac{\gamma_1^{(i)}\gamma_2^{(j)}}{\delta + (1-\delta)\left( \gamma_1^{(i)} + \gamma_2^{(j)} - \gamma_1^{(i)}\gamma_2^{(j)} \right)} \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(3) $\lambda L_1(p_1) = f^{-1}\!\left( \bigcup \left\{ \dfrac{\left( 1 + (\delta-1)\gamma_1^{(i)} \right)^\lambda - \left( 1-\gamma_1^{(i)} \right)^\lambda}{\left( 1 + (\delta-1)\gamma_1^{(i)} \right)^\lambda + (\delta-1)\left( 1-\gamma_1^{(i)} \right)^\lambda} \left( p_1^{(i)} \right) \right\} \right)$;

(4) $\left( L_1(p_1) \right)^\lambda = f^{-1}\!\left( \bigcup \left\{ \dfrac{\delta (\gamma_1^{(i)})^\lambda}{\left( \delta + (1-\delta)\gamma_1^{(i)} \right)^\lambda + (\delta-1)(\gamma_1^{(i)})^\lambda} \left( p_1^{(i)} \right) \right\} \right)$.

When δ = 1 , these operational laws become the Algebraic operational laws; When δ = 2 , these operational laws become the Einstein operational laws.
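This reduction is easy to verify numerically. The sketch below is an illustrative Python fragment with our own function names; it checks the Hamacher TN and TC at the semantic-value level against the Algebraic ($\delta = 1$) and Einstein ($\delta = 2$) forms, and then applies law (1) of the Algebraic case to a small PLTS with LSF $f_1$ on $S = \{s_0, \ldots, s_6\}$.

```python
from itertools import product
from math import isclose

def t_hamacher(x, y, d):
    """Hamacher TN from Table 1, parameter d > 0."""
    return x * y / (d + (1 - d) * (x + y - x * y))

def s_hamacher(x, y, d):
    """Hamacher TC from Table 2, parameter d > 0."""
    return (x + y - x * y - (1 - d) * x * y) / (1 - (1 - d) * x * y)

x, y = 0.3, 0.7
assert isclose(t_hamacher(x, y, 1), x * y)                            # Algebraic TN
assert isclose(s_hamacher(x, y, 1), x + y - x * y)                    # Algebraic TC
assert isclose(t_hamacher(x, y, 2), x * y / (1 + (1 - x) * (1 - y)))  # Einstein TN
assert isclose(s_hamacher(x, y, 2), (x + y) / (1 + x * y))            # Einstein TC

# Law (1) on PLTSs, delta = 1 (Algebraic case), LSF f1(s_k) = k/t with t = 6.
t_sub = 6
L1 = [(2, 0.5), (3, 0.5)]   # {s2(0.5), s3(0.5)}
L2 = [(4, 1.0)]             # {s4(1.0)}
plus = [(s_hamacher(k1 / t_sub, k2 / t_sub, 1) * t_sub, p1 * p2)
        for (k1, p1), (k2, p2) in product(L1, L2)]
# Each pair contributes a virtual term s_{gamma*t} with probability p1*p2,
# and each aggregated value exceeds both operands, as a t-conorm should.
```

Because $f_1$ maps into $[0,1]$, the results stay within the bounds of the LTS, as claimed in the introduction.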

4. If $g(x) = -\ln\dfrac{\tau^x - 1}{\tau - 1}$, $\tau > 0$, $\tau \ne 1$, then $g^{-1}(x) = \log_\tau\!\left( 1 + (\tau - 1) e^{-x} \right)$, $h(x) = -\ln\dfrac{\tau^{1-x} - 1}{\tau - 1}$, $h^{-1}(x) = 1 - \log_\tau\!\left( 1 + (\tau - 1) e^{-x} \right)$, and the operational laws for PLTSs based on the Frank ATT are obtained as follows:

(1) $L_1(p_1) \oplus L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \left( 1 - \log_\tau\!\left( 1 + \dfrac{(\tau^{1-\gamma_1^{(i)}} - 1)(\tau^{1-\gamma_2^{(j)}} - 1)}{\tau - 1} \right) \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(2) $L_1(p_1) \otimes L_2(p_2) = f^{-1}\!\left( \bigcup \left\{ \log_\tau\!\left( 1 + \dfrac{(\tau^{\gamma_1^{(i)}} - 1)(\tau^{\gamma_2^{(j)}} - 1)}{\tau - 1} \right) \left( p_1^{(i)} p_2^{(j)} \right) \right\} \right)$;

(3) $\lambda L_1(p_1) = f^{-1}\!\left( \bigcup \left\{ \left( 1 - \log_\tau\!\left( 1 + \dfrac{(\tau^{1-\gamma_1^{(i)}} - 1)^\lambda}{(\tau - 1)^{\lambda - 1}} \right) \right) \left( p_1^{(i)} \right) \right\} \right)$;

(4) $\left( L_1(p_1) \right)^\lambda = f^{-1}\!\left( \bigcup \left\{ \log_\tau\!\left( 1 + \dfrac{(\tau^{\gamma_1^{(i)}} - 1)^\lambda}{(\tau - 1)^{\lambda - 1}} \right) \left( p_1^{(i)} \right) \right\} \right)$.

3.2 PLAMM operator

Definition 8. Let $L_1(p_1), L_2(p_2), \ldots, L_n(p_n)$ be PLTSs, where $L_r(p_r) = \{L_r^{(i)}(p_r^{(i)})\}$ $(r = 1,2,\ldots,n)$. Then the Archimedean MM operator of PLTSs is defined as

$$PLAMM^P(L_1(p_1), L_2(p_2), \ldots, L_n(p_n)) = \left( \frac{1}{n!} \bigoplus_{\vartheta \in S_n} \bigotimes_{r=1}^{n} \left( L_{\vartheta(r)}(p_{\vartheta(r)}) \right)^{P_r} \right)^{1/\sum_{r=1}^{n} P_r}, \quad (20)$$

where $P = (P_1, P_2, \ldots, P_n) \in R^n$ is a parameter vector, $S_n$ is the group of all permutations of $(1,2,\ldots,n)$, and $(\vartheta(1), \vartheta(2), \ldots, \vartheta(n))$ is one element of $S_n$. $PLAMM^P$ is called the probabilistic linguistic Archimedean MM (PLAMM) operator.

Theorem 2. Let $L_1(p_1), L_2(p_2), \ldots, L_n(p_n)$ be PLTSs with $L_r(p_r) = \{L_r^{(i)}(p_r^{(i)})\}$ $(r = 1,2,\ldots,n)$, and let $P = (P_1, P_2, \ldots, P_n) \in R^n$ be a parameter vector. Then the result aggregated by Eq. (20) is also a PLTS, given by

$$PLAMM^P(L_1(p_1), \ldots, L_n(p_n)) = f^{-1}\!\left( \bigcup_{\gamma_{\vartheta(1)}^{(i)} \in f(L_{\vartheta(1)}), \ldots, \gamma_{\vartheta(n)}^{(i)} \in f(L_{\vartheta(n)})} \left\{ g^{-1}\!\left( \frac{1}{\sum_{r=1}^{n} P_r} g\!\left( h^{-1}\!\left( \frac{1}{n!} \sum_{\vartheta \in S_n} h\!\left( g^{-1}\!\left( \sum_{r=1}^{n} P_r g\!\left( \gamma_{\vartheta(r)}^{(i)} \right) \right) \right) \right) \right) \right) \left( \prod_{\vartheta \in S_n} \prod_{r=1}^{n} p_{\vartheta(r)}^{(i)} \right) \right\} \right). \quad (21)$$

Proof. According to the operational laws for PLTSs, we have

$$\left( L_{\vartheta(r)}(p_{\vartheta(r)}) \right)^{P_r} = f^{-1}\!\left( \bigcup_{\gamma_{\vartheta(r)}^{(i)} \in f(L_{\vartheta(r)})} \left\{ g^{-1}\!\left( P_r g(\gamma_{\vartheta(r)}^{(i)}) \right) \left( p_{\vartheta(r)}^{(i)} \right) \right\} \right),$$

$$\bigotimes_{r=1}^{n} \left( L_{\vartheta(r)}(p_{\vartheta(r)}) \right)^{P_r} = f^{-1}\!\left( \bigcup \left\{ g^{-1}\!\left( \sum_{r=1}^{n} P_r g(\gamma_{\vartheta(r)}^{(i)}) \right) \left( \prod_{r=1}^{n} p_{\vartheta(r)}^{(i)} \right) \right\} \right),$$

$$\bigoplus_{\vartheta \in S_n} \bigotimes_{r=1}^{n} \left( L_{\vartheta(r)}(p_{\vartheta(r)}) \right)^{P_r} = f^{-1}\!\left( \bigcup \left\{ h^{-1}\!\left( \sum_{\vartheta \in S_n} h\!\left( g^{-1}\!\left( \sum_{r=1}^{n} P_r g(\gamma_{\vartheta(r)}^{(i)}) \right) \right) \right) \left( \prod_{\vartheta \in S_n} \prod_{r=1}^{n} p_{\vartheta(r)}^{(i)} \right) \right\} \right),$$

$$\frac{1}{n!} \bigoplus_{\vartheta \in S_n} \bigotimes_{r=1}^{n} \left( L_{\vartheta(r)}(p_{\vartheta(r)}) \right)^{P_r} = f^{-1}\!\left( \bigcup \left\{ h^{-1}\!\left( \frac{1}{n!} \sum_{\vartheta \in S_n} h\!\left( g^{-1}\!\left( \sum_{r=1}^{n} P_r g(\gamma_{\vartheta(r)}^{(i)}) \right) \right) \right) \left( \prod_{\vartheta \in S_n} \prod_{r=1}^{n} p_{\vartheta(r)}^{(i)} \right) \right\} \right).$$

Raising the last expression to the power $1/\sum_{r=1}^{n} P_r$ by law (4) of Definition 7 gives Eq. (21).

Next, the PLAMM operator has the property of commutativity.

Property 1 (Commutativity). If $(L_1'(p_1'), L_2'(p_2'), \ldots, L_n'(p_n'))$ is any permutation of $(L_1(p_1), L_2(p_2), \ldots, L_n(p_n))$, then

$$PLAMM^P(L_1'(p_1'), L_2'(p_2'), \ldots, L_n'(p_n')) = PLAMM^P(L_1(p_1), L_2(p_2), \ldots, L_n(p_n)).$$

Proof. Since $S_n$ runs over all permutations,

$$PLAMM^P(L_1'(p_1'), \ldots, L_n'(p_n')) = \left( \frac{1}{n!} \bigoplus_{\vartheta \in S_n} \bigotimes_{r=1}^{n} \left( L_{\vartheta(r)}'(p_{\vartheta(r)}') \right)^{P_r} \right)^{1/\sum_{r=1}^{n} P_r} = \left( \frac{1}{n!} \bigoplus_{\vartheta \in S_n} \bigotimes_{r=1}^{n} \left( L_{\vartheta(r)}(p_{\vartheta(r)}) \right)^{P_r} \right)^{1/\sum_{r=1}^{n} P_r} = PLAMM^P(L_1(p_1), \ldots, L_n(p_n)).$$

In the following, we discuss special cases of the PLAMM operator in accordance with different parameter vectors.

1. When $P=(1,0,\ldots,0)$, PLAMM becomes the probabilistic linguistic Archimedean arithmetic average operator:

$$\mathrm{PLAMM}^{(1,0,\ldots,0)}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=\frac{1}{n}\bigoplus_{r=1}^{n}L_r(p_r)=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}h\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

2. When $P=(\lambda,0,\ldots,0)$, PLAMM reduces to the probabilistic linguistic Archimedean generalized arithmetic average operator:

$$\mathrm{PLAMM}^{(\lambda,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\frac{1}{n}\bigoplus_{r=1}^{n}\left(L_r(p_r)\right)^{\lambda}\right)^{\frac{1}{\lambda}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{\lambda}\,g\left(h^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}h\left(g^{-1}\left(\lambda\,g\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

3. When $P=(1,1,0,\ldots,0)$, PLAMM becomes the probabilistic linguistic Archimedean BM operator:

$$\mathrm{PLAMM}^{(1,1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\frac{1}{n(n-1)}\bigoplus_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}L_r(p_r)\otimes L_\varsigma(p_\varsigma)\right)^{\frac{1}{2}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{2}\,g\left(h^{-1}\left(\frac{1}{n(n-1)}\sum_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}h\left(g^{-1}\left(g\big(\gamma_{r}^{(i)}\big)+g\big(\gamma_{\varsigma}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}p_{r}^{(i)}p_{\varsigma}^{(i)}\right)\right\}\right).$$

4. When $P=(\underbrace{1,1,\ldots,1}_{\kappa},\underbrace{0,0,\ldots,0}_{n-\kappa})$, PLAMM becomes the probabilistic linguistic Archimedean MSM operator:

$$\mathrm{PLAMM}^{(\overbrace{1,\ldots,1}^{\kappa},\overbrace{0,\ldots,0}^{n-\kappa})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\frac{1}{C_n^{\kappa}}\bigoplus_{1\le r_1<\cdots<r_\kappa\le n}\bigotimes_{j=1}^{\kappa}L_{r_j}(p_{r_j})\right)^{\frac{1}{\kappa}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{\kappa}\,g\left(h^{-1}\left(\frac{1}{C_n^{\kappa}}\sum_{1\le r_1<\cdots<r_\kappa\le n}h\left(g^{-1}\left(\sum_{j=1}^{\kappa}g\big(\gamma_{r_j}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{1\le r_1<\cdots<r_\kappa\le n}\prod_{j=1}^{\kappa}p_{r_j}^{(i)}\right)\right\}\right).$$

5. When $P=(1,1,\ldots,1)$, PLAMM becomes the probabilistic linguistic Archimedean geometric average operator:

$$\mathrm{PLAMM}^{(1,1,\ldots,1)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\bigotimes_{r=1}^{n}L_r(p_r)\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

6. When $P=\left(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}\right)$, PLAMM likewise becomes the probabilistic linguistic Archimedean geometric average operator:

$$\mathrm{PLAMM}^{(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\bigotimes_{r=1}^{n}\left(L_r(p_r)\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

According to the choice of the additive generator $g(x)$, we obtain some specific probabilistic linguistic Archimedean MM operators.

(1) If $g(x)=-\ln(x)$, then we get

$$\mathrm{APLAMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{r=1}^{n}\left(\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

which is called the Algebraic probabilistic linguistic Archimedean MM (APLAMM) operator.
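As a sanity check on this closed form, the sketch below (not from the paper; crisp values only, algebraic generators and helper names assumed by us) computes the same aggregate twice — once through the generator chain of the theorem above and once through the APLAMM closed form — and compares the results:

```python
import math
from itertools import permutations

g = lambda t: -math.log(t)
g_inv = lambda y: math.exp(-y)
h = lambda t: -math.log(1.0 - t)
h_inv = lambda y: 1.0 - math.exp(-y)

def mm_generator(gammas, P):
    """g/h chain: ((1/n!) (+)_theta (x)_r gamma^(P_r))^(1/sum P)."""
    fact = math.factorial(len(gammas))
    s = sum(h(g_inv(sum(p * g(x) for p, x in zip(P, perm))))
            for perm in permutations(gammas))
    return g_inv(g(h_inv(s / fact)) / sum(P))

def mm_closed(gammas, P):
    """APLAMM closed form:
    (1 - prod_theta (1 - prod_r gamma^P_r)^(1/n!))^(1/sum P)."""
    fact = math.factorial(len(gammas))
    prod = 1.0
    for perm in permutations(gammas):
        prod *= (1.0 - math.prod(x ** p for p, x in zip(P, perm))) ** (1.0 / fact)
    return (1.0 - prod) ** (1.0 / sum(P))

v, P = [0.4, 0.6, 0.9], (2.0, 1.0, 0.5)
```

Both paths should agree to machine precision, and the aggregate should stay between the smallest and largest input (the MM is an internal mean).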

(2) If $g(x)=\ln\left(\frac{2-x}{x}\right)$, then we get

$$\mathrm{EPLAMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{2\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(A+3B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(2-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}+3\prod_{r=1}^{n}\left(\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},\qquad B=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(2-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}-\prod_{r=1}^{n}\left(\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},$$

which is called the Einstein probabilistic linguistic Archimedean MM (EPLAMM) operator.

(3) If $g(x)=\ln\left(\frac{\delta+(1-\delta)x}{x}\right)$, $\delta>0$, then we get

$$\mathrm{HPLAMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{\delta\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(A+(\delta^2-1)B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+(\delta-1)\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(\delta+(1-\delta)\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}+(\delta^2-1)\prod_{r=1}^{n}\left(\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},\qquad B=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(\delta+(1-\delta)\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}-\prod_{r=1}^{n}\left(\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},$$

which is called the Hamacher probabilistic linguistic Archimedean MM (HPLAMM) operator. When $\delta=1$, the HPLAMM operator reduces to the APLAMM operator; when $\delta=2$, the HPLAMM operator reduces to the EPLAMM operator.
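The stated reductions of the Hamacher family can be verified numerically. In this sketch (not from the paper; crisp values, and the helper names are ours) the Hamacher generator with $\delta=1$ reproduces the algebraic MM and with $\delta=2$ the Einstein MM:

```python
import math
from itertools import permutations

def hamacher(delta):
    """Generators of the Hamacher family: g(t) = ln((delta + (1-delta)t)/t).
    delta = 1 yields the algebraic generator -ln t, delta = 2 the Einstein one."""
    g = lambda t: math.log((delta + (1.0 - delta) * t) / t)
    g_inv = lambda y: delta / (math.exp(y) + delta - 1.0)
    h = lambda t: g(1.0 - t)          # dual generator for the t-conorm
    h_inv = lambda y: 1.0 - g_inv(y)
    return g, g_inv, h, h_inv

def einstein():
    g = lambda t: math.log((2.0 - t) / t)
    g_inv = lambda y: 2.0 / (math.exp(y) + 1.0)
    return g, g_inv, (lambda t: g(1.0 - t)), (lambda y: 1.0 - g_inv(y))

def mm(gammas, P, gens):
    """Generator-based Muirhead mean for crisp values."""
    g, g_inv, h, h_inv = gens
    fact = math.factorial(len(gammas))
    s = sum(h(g_inv(sum(p * g(x) for p, x in zip(P, perm))))
            for perm in permutations(gammas))
    return g_inv(g(h_inv(s / fact)) / sum(P))

v, P = [0.3, 0.6, 0.8], (1.0, 2.0, 3.0)
d1 = mm(v, P, hamacher(1.0))   # should match the algebraic MM
d2 = mm(v, P, hamacher(2.0))   # should match the Einstein MM
```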

(4) If $g(x)=-\ln\left(\frac{\tau^{x}-1}{\tau-1}\right)$, $\tau\neq 1$, then

$$\mathrm{FPLAMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\log_\tau\left(\frac{(\tau-1)\left(B-A\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left((\tau-1)A+B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left((\tau-1)A+B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}(\tau-1)^{P_r}-\prod_{r=1}^{n}\left(\tau^{\gamma_{\vartheta(r)}^{(i)}}-1\right)^{P_r}\right)\right)^{\frac{1}{n!}},\qquad B=\left(\prod_{\vartheta\in S_n}\left((\tau-1)\prod_{r=1}^{n}\left(\tau^{\gamma_{\vartheta(r)}^{(i)}}-1\right)^{P_r}+\prod_{r=1}^{n}(\tau-1)^{P_r}\right)\right)^{\frac{1}{n!}},$$

which is called the Frank probabilistic linguistic Archimedean MM (FPLAMM) operator.

3.3 Probabilistic linguistic Archimedean weighted MM operator

Definition 9. Let $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$ be PLTSs, where $L_r(p_r)=\{L_r^{(i)}(p_r^{(i)})\}\ (r=1,2,\ldots,n)$, and let the weight of the input argument $L_r(p_r)$ be $w_r$, where $w_r\in[0,1]$ and $\sum_{r=1}^{n}w_r=1$. Then the Archimedean weighted MM operator of PLTSs is defined as follows:

$$\mathrm{PLAWMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=\left(\frac{1}{n!}\bigoplus_{\vartheta\in S_n}\bigotimes_{r=1}^{n}\left(nw_{\vartheta(r)}L_{\vartheta(r)}(p_{\vartheta(r)})\right)^{P_r}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}},\qquad(22)$$

where $P=(P_1,P_2,\ldots,P_n)\in R^{n}$ is a parameter vector, $S_n$ is the group of all permutations of $(1,2,\ldots,n)$, and $(\vartheta(1),\vartheta(2),\ldots,\vartheta(n))$ is one element of $S_n$. Then $\mathrm{PLAWMM}^{P}$ is called the probabilistic linguistic Archimedean weighted MM (PLAWMM) operator.

Theorem 3. Let $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$ be PLTSs, where $L_r(p_r)=\{L_r^{(i)}(p_r^{(i)})\}\ (r=1,2,\ldots,n)$, and let the weight of the input argument $L_r(p_r)$ be $w_r$, where $w_r\in[0,1]$ and $\sum_{r=1}^{n}w_r=1$. Then the aggregated result of the PLAWMM operator from Definition 9 is also a PLTS, shown as follows:

$$\mathrm{PLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{g^{-1}\left(\frac{1}{\sum_{r=1}^{n}P_r}\,g\left(h^{-1}\left(\frac{1}{n!}\sum_{\vartheta\in S_n}h\left(g^{-1}\left(\sum_{r=1}^{n}P_r\,g\left(h^{-1}\left(nw_{\vartheta(r)}\,h\big(\gamma_{\vartheta(r)}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right).\qquad(23)$$
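A direct transcription of the scalar core of Eq. (23) can make the weighting step concrete. This is a sketch, not from the paper: it drops the probability bookkeeping, works on crisp values, and assumes the algebraic generators; the helper names are ours:

```python
import math
from itertools import permutations

g = lambda t: -math.log(t)
g_inv = lambda y: math.exp(-y)
h = lambda t: -math.log(1.0 - t)
h_inv = lambda y: 1.0 - math.exp(-y)

def weighted_mm(gammas, weights, P):
    """Each argument is first rescaled through the t-conorm side,
    h^{-1}(n w_r h(gamma_r)), then the ordinary MM chain is applied."""
    n, fact = len(gammas), math.factorial(len(gammas))
    s = 0.0
    for perm in permutations(range(n)):
        inner = sum(P[r] * g(h_inv(n * weights[k] * h(gammas[k])))
                    for r, k in enumerate(perm))
        s += h(g_inv(inner))
    return g_inv(g(h_inv(s / fact)) / sum(P))

def mm(gammas, P):
    """Unweighted MM chain, for comparison."""
    fact = math.factorial(len(gammas))
    s = sum(h(g_inv(sum(p * g(x) for p, x in zip(P, perm))))
            for perm in permutations(gammas))
    return g_inv(g(h_inv(s / fact)) / sum(P))

v, P = [0.3, 0.6, 0.8], (1.0, 1.0, 0.0)
skewed = weighted_mm(v, [0.5, 0.3, 0.2], P)
uniform = weighted_mm(v, [1.0 / 3] * 3, P)
```

With equal weights $w_r=1/n$ the rescaling step $h^{-1}(nw_r\,h(\gamma))$ is the identity, so the weighted operator collapses to the unweighted one — which is exactly the content of Theorem 4.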

Theorem 4. The PLAMM operator is a special case of the PLAWMM operator.

Proof. If $w=\left(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}\right)^{T}$, then $nw_{\vartheta(r)}=1$ and $h^{-1}\left(nw_{\vartheta(r)}\,h\big(\gamma_{\vartheta(r)}^{(i)}\big)\right)=\gamma_{\vartheta(r)}^{(i)}$, so

$$\mathrm{PLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{g^{-1}\left(\frac{1}{\sum_{r=1}^{n}P_r}\,g\left(h^{-1}\left(\frac{1}{n!}\sum_{\vartheta\in S_n}h\left(g^{-1}\left(\sum_{r=1}^{n}P_r\,g\big(\gamma_{\vartheta(r)}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right)=\mathrm{PLAMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right).$$

Property 2 (Commutativity). If $L_1'(p_1'),L_2'(p_2'),\ldots,L_n'(p_n')$ is any permutation of $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$, then

$$\mathrm{PLAWMM}^{P}\left(L_1'(p_1'),\ldots,L_n'(p_n')\right)=\mathrm{PLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right).$$

In the following, we discuss special cases of the PLAWMM operator in accordance with different parameter vectors $P$.

1. When $P=(1,0,\ldots,0)$, PLAWMM becomes the probabilistic linguistic Archimedean weighted arithmetic average operator:

$$\mathrm{PLAWMM}^{(1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\bigoplus_{r=1}^{n}w_r L_r(p_r)=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\sum_{r=1}^{n}w_r\,h\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

2. When $P=(\lambda,0,\ldots,0)$, PLAWMM becomes the probabilistic linguistic Archimedean generalized weighted arithmetic average operator:

$$\mathrm{PLAWMM}^{(\lambda,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\frac{1}{n}\bigoplus_{r=1}^{n}\left(nw_r L_r(p_r)\right)^{\lambda}\right)^{\frac{1}{\lambda}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{\lambda}\,g\left(h^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}h\left(g^{-1}\left(\lambda\,g\left(h^{-1}\left(nw_r\,h\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

3. When $P=(1,1,0,\ldots,0)$, PLAWMM becomes the probabilistic linguistic Archimedean weighted BM operator:

$$\mathrm{PLAWMM}^{(1,1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\frac{1}{n(n-1)}\bigoplus_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}\left(nw_r L_r(p_r)\right)\otimes\left(nw_\varsigma L_\varsigma(p_\varsigma)\right)\right)^{\frac{1}{2}}$$

$$=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{2}\,g\left(h^{-1}\left(\frac{1}{n(n-1)}\sum_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}h\left(g^{-1}\left(g\left(h^{-1}\left(nw_r\,h\big(\gamma_{r}^{(i)}\big)\right)\right)+g\left(h^{-1}\left(nw_\varsigma\,h\big(\gamma_{\varsigma}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}p_{r}^{(i)}p_{\varsigma}^{(i)}\right)\right\}\right).$$

4. When $P=(\underbrace{1,1,\ldots,1}_{\kappa},\underbrace{0,0,\ldots,0}_{n-\kappa})$, PLAWMM becomes the probabilistic linguistic Archimedean weighted MSM operator:

$$\mathrm{PLAWMM}^{(\overbrace{1,\ldots,1}^{\kappa},\overbrace{0,\ldots,0}^{n-\kappa})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\frac{1}{C_n^{\kappa}}\bigoplus_{1\le r_1<\cdots<r_\kappa\le n}\bigotimes_{j=1}^{\kappa}\left(nw_{r_j}L_{r_j}(p_{r_j})\right)\right)^{\frac{1}{\kappa}}$$

$$=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{\kappa}\,g\left(h^{-1}\left(\frac{1}{C_n^{\kappa}}\sum_{1\le r_1<\cdots<r_\kappa\le n}h\left(g^{-1}\left(\sum_{j=1}^{\kappa}g\left(h^{-1}\left(nw_{r_j}\,h\big(\gamma_{r_j}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le r_1<\cdots<r_\kappa\le n}\prod_{j=1}^{\kappa}p_{r_j}^{(i)}\right)\right\}\right).$$

5. When $P=(1,1,\ldots,1)$, PLAWMM becomes the probabilistic linguistic Archimedean weighted geometric average operator:

$$\mathrm{PLAWMM}^{(1,1,\ldots,1)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\bigotimes_{r=1}^{n}nw_r L_r(p_r)\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\left(h^{-1}\left(nw_r\,h\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

6. When $P=\left(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}\right)$, PLAWMM likewise becomes the probabilistic linguistic Archimedean weighted geometric average operator:

$$\mathrm{PLAWMM}^{(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\bigotimes_{r=1}^{n}\left(nw_r L_r(p_r)\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\left(h^{-1}\left(nw_r\,h\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

According to different forms of $g(x)$, some specific probabilistic linguistic Archimedean weighted MM operators can be obtained.

(1) If $g(x)=-\ln(x)$, then we have

$$\mathrm{APLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{r=1}^{n}\left(1-\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r}\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

which is called the Algebraic probabilistic linguistic Archimedean weighted MM (APLAWMM) operator.
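The APLAWMM closed form can be cross-checked against the generator chain of Eq. (23). A sketch (not from the paper; crisp values, algebraic generators, and helper names of our own choosing):

```python
import math
from itertools import permutations

def aplawmm_closed(gammas, weights, P):
    """(1 - prod_theta (1 - prod_r (1 - (1-gamma)^(n w))^P_r)^(1/n!))^(1/sum P)."""
    n, fact = len(gammas), math.factorial(len(gammas))
    outer = 1.0
    for perm in permutations(range(n)):
        q = math.prod((1.0 - (1.0 - gammas[k]) ** (n * weights[k])) ** P[r]
                      for r, k in enumerate(perm))
        outer *= (1.0 - q) ** (1.0 / fact)
    return (1.0 - outer) ** (1.0 / sum(P))

def aplawmm_chain(gammas, weights, P):
    """Same value via the g/h generator chain of Eq. (23)."""
    g = lambda t: -math.log(t)
    g_inv = lambda y: math.exp(-y)
    h = lambda t: -math.log(1.0 - t)
    h_inv = lambda y: 1.0 - math.exp(-y)
    n, fact = len(gammas), math.factorial(len(gammas))
    s = sum(h(g_inv(sum(P[r] * g(h_inv(n * weights[k] * h(gammas[k])))
                        for r, k in enumerate(perm))))
            for perm in permutations(range(n)))
    return g_inv(g(h_inv(s / fact)) / sum(P))

v, w, P = [0.4, 0.7, 0.9], [0.2, 0.3, 0.5], (1.0, 2.0, 3.0)
```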

(2) If $g(x)=\ln\left(\frac{2-x}{x}\right)$, then

$$\mathrm{EPLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{2\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}+3\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\prod_{r=1}^{n}\left(\left(1+\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}+3\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r},\qquad B=\prod_{r=1}^{n}\left(\left(1+\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}-\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r},$$

which is called the Einstein probabilistic linguistic Archimedean weighted MM (EPLAWMM) operator.

(3) If $g(x)=\ln\left(\frac{\delta+(1-\delta)x}{x}\right)$, $\delta>0$, then we get

$$\mathrm{HPLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{\delta\left(\prod_{\vartheta\in S_n}\left(A+(\delta^2-1)B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(\prod_{\vartheta\in S_n}\left(A+(\delta^2-1)B\right)^{\frac{1}{n!}}+(\delta^2-1)\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+(\delta-1)\left(\prod_{\vartheta\in S_n}\left(A+(\delta^2-1)B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\prod_{r=1}^{n}\left(\left(\delta+(1-\delta)\left(1-\gamma_{\vartheta(r)}^{(i)}\right)\right)^{nw_{\vartheta(r)}}+(\delta^2-1)\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r},\qquad B=\prod_{r=1}^{n}\left(\left(\delta+(1-\delta)\left(1-\gamma_{\vartheta(r)}^{(i)}\right)\right)^{nw_{\vartheta(r)}}-\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r},$$

which is called the Hamacher probabilistic linguistic Archimedean weighted MM (HPLAWMM) operator. Especially, when $\delta=1$, the HPLAWMM operator reduces to the APLAWMM operator; when $\delta=2$, the HPLAWMM operator reduces to the EPLAWMM operator.

(4) If $g(x)=-\ln\left(\frac{\tau^{x}-1}{\tau-1}\right)$, $\tau\neq 1$, then we get

$$\mathrm{FPLAWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\log_\tau\left(\frac{(\tau-1)\left(\prod_{\vartheta\in S_n}\left((\tau-1)A+B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(B-A\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left((\tau-1)\prod_{\vartheta\in S_n}\left(B-A\right)^{\frac{1}{n!}}+\prod_{\vartheta\in S_n}\left((\tau-1)A+B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left((\tau-1)\prod_{\vartheta\in S_n}\left(B-A\right)^{\frac{1}{n!}}+\prod_{\vartheta\in S_n}\left((\tau-1)A+B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\prod_{r=1}^{n}\left((\tau-1)^{nw_{\vartheta(r)}}-\left(\tau^{1-\gamma_{\vartheta(r)}^{(i)}}-1\right)^{nw_{\vartheta(r)}}\right)^{P_r},\qquad B=\prod_{r=1}^{n}\left((\tau-1)\left(\tau^{1-\gamma_{\vartheta(r)}^{(i)}}-1\right)^{nw_{\vartheta(r)}}+(\tau-1)^{nw_{\vartheta(r)}}\right)^{P_r},$$

which is called the Frank probabilistic linguistic Archimedean weighted MM (FPLAWMM) operator.

3.4 Probabilistic linguistic Archimedean dual MM operator

Definition 10. Let $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$ be PLTSs, where $L_r(p_r)=\{L_r^{(i)}(p_r^{(i)})\}\ (r=1,2,\ldots,n)$. Then the Archimedean dual MM operator of PLTSs is defined as follows:

$$\mathrm{PLADMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=\frac{1}{\sum_{r=1}^{n}P_r}\left(\bigotimes_{\vartheta\in S_n}\bigoplus_{r=1}^{n}P_r L_{\vartheta(r)}(p_{\vartheta(r)})\right)^{\frac{1}{n!}},\qquad(24)$$

where $P=(P_1,P_2,\ldots,P_n)\in R^{n}$ is a parameter vector, $S_n$ is the group of all permutations of $(1,2,\ldots,n)$, and $(\vartheta(1),\vartheta(2),\ldots,\vartheta(n))$ is one element of $S_n$. Then $\mathrm{PLADMM}^{P}$ is called the probabilistic linguistic Archimedean dual MM (PLADMM) operator.

Theorem 5. Let $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$ be PLTSs, where $L_r(p_r)=\{L_r^{(i)}(p_r^{(i)})\}\ (r=1,2,\ldots,n)$, and let $P=(P_1,P_2,\ldots,P_n)\in R^{n}$ be a vector of parameters. Then the aggregated result of the PLADMM operator is also a PLTS, shown as follows:

$$\mathrm{PLADMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{h^{-1}\left(\frac{1}{\sum_{r=1}^{n}P_r}\,h\left(g^{-1}\left(\frac{1}{n!}\sum_{\vartheta\in S_n}g\left(h^{-1}\left(\sum_{r=1}^{n}P_r\,h\big(\gamma_{\vartheta(r)}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right).\qquad(25)$$

Property 3 (Commutativity). If $L_r'(p_r')\ (r=1,2,\ldots,n)$ is any permutation of $L_r(p_r)\ (r=1,2,\ldots,n)$, then

$$\mathrm{PLADMM}^{P}\left(L_1'(p_1'),L_2'(p_2'),\ldots,L_n'(p_n')\right)=\mathrm{PLADMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right).$$

In the following, we discuss special cases of the PLADMM operator in accordance with different parameter vectors.

1. When $P=(1,0,\ldots,0)$, PLADMM becomes the probabilistic linguistic Archimedean geometric average operator:

$$\mathrm{PLADMM}^{(1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\left(\bigotimes_{r=1}^{n}L_r(p_r)\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

2. When $P=(\lambda,0,\ldots,0)$, PLADMM becomes the probabilistic linguistic Archimedean generalized geometric average operator:

$$\mathrm{PLADMM}^{(\lambda,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{\lambda}\left(\bigotimes_{r=1}^{n}\lambda L_r(p_r)\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{\lambda}\,h\left(g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\left(h^{-1}\left(\lambda\,h\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

3. When $P=(1,1,0,\ldots,0)$, PLADMM becomes the probabilistic linguistic Archimedean geometric BM operator:

$$\mathrm{PLADMM}^{(1,1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{2}\left(\bigotimes_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}\left(L_r(p_r)\oplus L_\varsigma(p_\varsigma)\right)\right)^{\frac{1}{n(n-1)}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{2}\,h\left(g^{-1}\left(\frac{1}{n(n-1)}\sum_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}g\left(h^{-1}\left(h\big(\gamma_{r}^{(i)}\big)+h\big(\gamma_{\varsigma}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}p_{r}^{(i)}p_{\varsigma}^{(i)}\right)\right\}\right).$$

4. When $P=(\underbrace{1,1,\ldots,1}_{\kappa},\underbrace{0,0,\ldots,0}_{n-\kappa})$, PLADMM becomes the probabilistic linguistic Archimedean dual MSM operator:

$$\mathrm{PLADMM}^{(\overbrace{1,\ldots,1}^{\kappa},\overbrace{0,\ldots,0}^{n-\kappa})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{\kappa}\left(\bigotimes_{1\le r_1<\cdots<r_\kappa\le n}\bigoplus_{j=1}^{\kappa}L_{r_j}(p_{r_j})\right)^{\frac{1}{C_n^{\kappa}}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{\kappa}\,h\left(g^{-1}\left(\frac{1}{C_n^{\kappa}}\sum_{1\le r_1<\cdots<r_\kappa\le n}g\left(h^{-1}\left(\sum_{j=1}^{\kappa}h\big(\gamma_{r_j}^{(i)}\big)\right)\right)\right)\right)\right)\left(\prod_{1\le r_1<\cdots<r_\kappa\le n}\prod_{j=1}^{\kappa}p_{r_j}^{(i)}\right)\right\}\right).$$

5. When $P=(1,1,\ldots,1)$, PLADMM becomes the probabilistic linguistic Archimedean arithmetic average operator:

$$\mathrm{PLADMM}^{(1,1,\ldots,1)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{n}\bigoplus_{r=1}^{n}L_r(p_r)=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}h\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

6. When $P=\left(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}\right)$, PLADMM likewise becomes the probabilistic linguistic Archimedean arithmetic average operator:

$$\mathrm{PLADMM}^{(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\bigoplus_{r=1}^{n}\frac{1}{n}L_r(p_r)=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\sum_{r=1}^{n}\frac{1}{n}\,h\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

According to different forms of $g(x)$, some specific probabilistic linguistic Archimedean dual MM operators can be shown as follows:

(1) If $g(x)=-\ln(x)$, then

$$\mathrm{APLADMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{1-\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{r=1}^{n}\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

which is called the Algebraic probabilistic linguistic Archimedean dual MM (APLADMM) operator.

(2) If $g(x)=\ln\left(\frac{2-x}{x}\right)$, then

$$\mathrm{EPLADMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{\left(A+3B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}-\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(A+3B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(1+\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}+3\prod_{r=1}^{n}\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},\qquad B=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(1+\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}-\prod_{r=1}^{n}\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},$$

which is called the Einstein probabilistic linguistic Archimedean dual MM (EPLADMM) operator.

(3) If $g(x)=\ln\left(\frac{\delta+(1-\delta)x}{x}\right)$, $\delta>0$, then we get

$$\mathrm{HPLADMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{\left(A+(\delta^2-1)B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}-\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(A+(\delta^2-1)B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+(\delta-1)\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(\delta+(1-\delta)\left(1-\gamma_{\vartheta(r)}^{(i)}\right)\right)^{P_r}+(\delta^2-1)\prod_{r=1}^{n}\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},\qquad B=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}\left(\delta+(1-\delta)\left(1-\gamma_{\vartheta(r)}^{(i)}\right)\right)^{P_r}-\prod_{r=1}^{n}\left(1-\gamma_{\vartheta(r)}^{(i)}\right)^{P_r}\right)\right)^{\frac{1}{n!}},$$

which is called the Hamacher probabilistic linguistic Archimedean dual MM (HPLADMM) operator. Especially, when $\delta=1$, the HPLADMM operator reduces to the APLADMM operator; when $\delta=2$, the HPLADMM operator reduces to the EPLADMM operator.

(4) If $g(x)=-\ln\left(\frac{\tau^{x}-1}{\tau-1}\right)$, $\tau\neq 1$, then we get

$$\mathrm{FPLADMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{1-\log_\tau\left(\frac{(\tau-1)\left(A-B\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left((\tau-1)B+A\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left((\tau-1)B+A\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\left(\prod_{\vartheta\in S_n}\left((\tau-1)\prod_{r=1}^{n}\left(\tau^{1-\gamma_{\vartheta(r)}^{(i)}}-1\right)^{P_r}+\prod_{r=1}^{n}(\tau-1)^{P_r}\right)\right)^{\frac{1}{n!}},\qquad B=\left(\prod_{\vartheta\in S_n}\left(\prod_{r=1}^{n}(\tau-1)^{P_r}-\prod_{r=1}^{n}\left(\tau^{1-\gamma_{\vartheta(r)}^{(i)}}-1\right)^{P_r}\right)\right)^{\frac{1}{n!}},$$

which is called the Frank probabilistic linguistic Archimedean dual MM (FPLADMM) operator.
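The Frank generator's behaviour in $\tau$ can be probed numerically: as $\tau\to1$ the Frank t-norm approaches the algebraic (product) t-norm, and larger $\tau$ pushes it toward the Łukasiewicz bound $\max(0,x+y-1)$. A sketch, not from the paper, with helper names of our own:

```python
import math

def frank_generators(tau):
    """Additive generator of the Frank family, g(x) = -ln((tau^x - 1)/(tau - 1))."""
    g = lambda t: -math.log((tau ** t - 1.0) / (tau - 1.0))
    g_inv = lambda y: math.log(1.0 + (tau - 1.0) * math.exp(-y), tau)  # log base tau
    return g, g_inv

def frank_t_norm(x, y, tau):
    """Binary Frank t-norm via its generator: g^{-1}(g(x) + g(y))."""
    g, g_inv = frank_generators(tau)
    return g_inv(g(x) + g(y))

near_product = frank_t_norm(0.4, 0.7, 1.0001)   # ~ 0.4 * 0.7 for tau near 1
```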

3.5 Probabilistic linguistic Archimedean dual weighted MM operator

Definition 11. Let $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$ be PLTSs, where $L_r(p_r)=\{L_r^{(i)}(p_r^{(i)})\}\ (r=1,2,\ldots,n)$, and let the weight of the input argument $L_r(p_r)$ be $w_r$, where $w_r\in[0,1]$ and $\sum_{r=1}^{n}w_r=1$. Then the Archimedean dual weighted MM operator of PLTSs is defined as follows:

$$\mathrm{PLADWMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=\frac{1}{\sum_{r=1}^{n}P_r}\left(\bigotimes_{\vartheta\in S_n}\bigoplus_{r=1}^{n}P_r\left(L_{\vartheta(r)}(p_{\vartheta(r)})\right)^{nw_{\vartheta(r)}}\right)^{\frac{1}{n!}},\qquad(26)$$

where $P=(P_1,P_2,\ldots,P_n)\in R^{n}$ is a parameter vector, $S_n$ is the group of all permutations of $(1,2,\ldots,n)$, and $(\vartheta(1),\vartheta(2),\ldots,\vartheta(n))$ is one element of $S_n$. Then $\mathrm{PLADWMM}^{P}$ is called the probabilistic linguistic Archimedean dual weighted MM (PLADWMM) operator.

Theorem 6. Let $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$ be PLTSs, where $L_r(p_r)=\{L_r^{(i)}(p_r^{(i)})\}\ (r=1,2,\ldots,n)$, and let the weight of the input argument $L_r(p_r)$ be $w_r$, where $w_r\in[0,1]$ and $\sum_{r=1}^{n}w_r=1$. Then the aggregated result of the PLADWMM operator of the PLTSs is also a PLTS, shown as follows:

$$\mathrm{PLADWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{h^{-1}\left(\frac{1}{\sum_{r=1}^{n}P_r}\,h\left(g^{-1}\left(\frac{1}{n!}\sum_{\vartheta\in S_n}g\left(h^{-1}\left(\sum_{r=1}^{n}P_r\,h\left(g^{-1}\left(nw_{\vartheta(r)}\,g\big(\gamma_{\vartheta(r)}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right).\qquad(27)$$
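The scalar core of Eq. (27) can be sketched, together with a numerical check that equal weights collapse PLADWMM to PLADMM (this is the content of Theorem 7). As before, this is not from the paper: crisp values, algebraic generators, and our own helper names:

```python
import math
from itertools import permutations

g = lambda t: -math.log(t)
g_inv = lambda y: math.exp(-y)
h = lambda t: -math.log(1.0 - t)
h_inv = lambda y: 1.0 - math.exp(-y)

def dual_weighted_mm(gammas, weights, P):
    """Each argument enters as its (n w_r)-th power, g^{-1}(n w_r g(gamma)),
    before the dual MM chain of Eq. (25) is applied."""
    n, fact = len(gammas), math.factorial(len(gammas))
    s = 0.0
    for perm in permutations(range(n)):
        inner = sum(P[r] * h(g_inv(n * weights[k] * g(gammas[k])))
                    for r, k in enumerate(perm))
        s += g(h_inv(inner))
    return h_inv(h(g_inv(s / fact)) / sum(P))

def dual_mm(gammas, P):
    """Unweighted dual MM chain, for comparison."""
    fact = math.factorial(len(gammas))
    s = sum(g(h_inv(sum(p * h(x) for p, x in zip(P, perm))))
            for perm in permutations(gammas))
    return h_inv(h(g_inv(s / fact)) / sum(P))

v, P = [0.3, 0.6, 0.8], (2.0, 1.0, 0.0)
uniform = dual_weighted_mm(v, [1.0 / 3] * 3, P)
```

With $w_r=1/n$ the power step $g^{-1}(nw_r\,g(\gamma))$ is the identity, so `uniform` coincides with `dual_mm(v, P)`.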

Theorem 7. The PLADMM operator is a special case of the PLADWMM operator.

Property 4 (Commutativity). If $L_1'(p_1'),L_2'(p_2'),\ldots,L_n'(p_n')$ is any permutation of $L_1(p_1),L_2(p_2),\ldots,L_n(p_n)$, then

$$\mathrm{PLADWMM}^{P}\left(L_1'(p_1'),L_2'(p_2'),\ldots,L_n'(p_n')\right)=\mathrm{PLADWMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right).$$

Next, we explore special cases of the PLADWMM operator in accordance with different parameter vectors $P$.

1. When $P=(1,0,\ldots,0)$, PLADWMM becomes the probabilistic linguistic Archimedean geometric weighted average operator:

$$\mathrm{PLADWMM}^{(1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\bigotimes_{r=1}^{n}\left(L_r(p_r)\right)^{w_r}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{g^{-1}\left(\sum_{r=1}^{n}w_r\,g\big(\gamma_{r}^{(i)}\big)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

2. When $P=(\lambda,0,\ldots,0)$, PLADWMM becomes the probabilistic linguistic Archimedean generalized geometric weighted average operator:

$$\mathrm{PLADWMM}^{(\lambda,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{\lambda}\left(\bigotimes_{r=1}^{n}\lambda\left(L_r(p_r)\right)^{nw_r}\right)^{\frac{1}{n}}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{\lambda}\,h\left(g^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}g\left(h^{-1}\left(\lambda\,h\left(g^{-1}\left(nw_r\,g\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

3. When $P=(1,1,0,\ldots,0)$, PLADWMM becomes the probabilistic linguistic Archimedean geometric weighted BM operator:

$$\mathrm{PLADWMM}^{(1,1,0,\ldots,0)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{2}\left(\bigotimes_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}\left(\left(L_r(p_r)\right)^{nw_r}\oplus\left(L_\varsigma(p_\varsigma)\right)^{nw_\varsigma}\right)\right)^{\frac{1}{n(n-1)}}$$

$$=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{2}\,h\left(g^{-1}\left(\frac{1}{n(n-1)}\sum_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}g\left(h^{-1}\left(h\left(g^{-1}\left(nw_r\,g\big(\gamma_{r}^{(i)}\big)\right)\right)+h\left(g^{-1}\left(nw_\varsigma\,g\big(\gamma_{\varsigma}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{\substack{r,\varsigma=1\\ r\neq\varsigma}}^{n}p_{r}^{(i)}p_{\varsigma}^{(i)}\right)\right\}\right).$$

4. When $P=(\underbrace{1,1,\ldots,1}_{\kappa},\underbrace{0,0,\ldots,0}_{n-\kappa})$, PLADWMM becomes the probabilistic linguistic Archimedean dual weighted MSM operator:

$$\mathrm{PLADWMM}^{(\overbrace{1,\ldots,1}^{\kappa},\overbrace{0,\ldots,0}^{n-\kappa})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{\kappa}\left(\bigotimes_{1\le r_1<\cdots<r_\kappa\le n}\bigoplus_{j=1}^{\kappa}\left(L_{r_j}(p_{r_j})\right)^{nw_{r_j}}\right)^{\frac{1}{C_n^{\kappa}}}$$

$$=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{\kappa}\,h\left(g^{-1}\left(\frac{1}{C_n^{\kappa}}\sum_{1\le r_1<\cdots<r_\kappa\le n}g\left(h^{-1}\left(\sum_{j=1}^{\kappa}h\left(g^{-1}\left(nw_{r_j}\,g\big(\gamma_{r_j}^{(i)}\big)\right)\right)\right)\right)\right)\right)\right)\left(\prod_{1\le r_1<\cdots<r_\kappa\le n}\prod_{j=1}^{\kappa}p_{r_j}^{(i)}\right)\right\}\right).$$

5. When $P=(1,1,\ldots,1)$, PLADWMM becomes the probabilistic linguistic Archimedean arithmetic weighted average operator:

$$\mathrm{PLADWMM}^{(1,1,\ldots,1)}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\frac{1}{n}\bigoplus_{r=1}^{n}\left(L_r(p_r)\right)^{nw_r}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\frac{1}{n}\sum_{r=1}^{n}h\left(g^{-1}\left(nw_r\,g\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

6. When $P=\left(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}\right)$, PLADWMM likewise becomes the probabilistic linguistic Archimedean arithmetic weighted average operator:

$$\mathrm{PLADWMM}^{(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})}\left(L_1(p_1),\ldots,L_n(p_n)\right)=\bigoplus_{r=1}^{n}\frac{1}{n}\left(L_r(p_r)\right)^{nw_r}=f^{-1}\left(\bigcup_{\gamma_{r}^{(i)}\in f(L_r)}\left\{h^{-1}\left(\sum_{r=1}^{n}\frac{1}{n}\,h\left(g^{-1}\left(nw_r\,g\big(\gamma_{r}^{(i)}\big)\right)\right)\right)\left(\prod_{r=1}^{n}p_{r}^{(i)}\right)\right\}\right).$$

According to different forms of $g(x)$, some specific probabilistic linguistic Archimedean dual weighted MM operators can be shown as follows:

(1) If $g(x)=-\ln(x)$, then

$$\mathrm{APLADWMM}^{P}\left(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{1-\left(1-\prod_{\vartheta\in S_n}\left(1-\prod_{r=1}^{n}\left(1-\left(\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r}\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

which is called the Algebraic probabilistic linguistic Archimedean dual weighted MM (APLADWMM) operator.

(2) If $g(x)=\ln\left(\frac{2-x}{x}\right)$, then

$$\mathrm{EPLADWMM}^{P}\left(L_1(p_1),\ldots,L_n(p_n)\right)=f^{-1}\left(\bigcup_{\gamma_{\vartheta(r)}^{(i)}\in f(L_{\vartheta(r)})}\left\{\frac{\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}+3\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}-\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}+3\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+\left(\prod_{\vartheta\in S_n}\left(A+3B\right)^{\frac{1}{n!}}-\prod_{\vartheta\in S_n}\left(A-B\right)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\prod_{r=1}^{n}\left(\left(2-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}+3\left(\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r},\qquad B=\prod_{r=1}^{n}\left(\left(2-\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}-\left(\gamma_{\vartheta(r)}^{(i)}\right)^{nw_{\vartheta(r)}}\right)^{P_r},$$

which is called the Einstein probabilistic linguistic Archimedean dual weighted MM (EPLADWMM) operator.

(3) If $g(x)=\ln\left(\frac{\delta+(1-\delta)x}{x}\right)$, $\delta>0$, then we get

$$\mathrm{HPLADWMM}^{P}\bigl(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\bigr)=f^{-1}\!\left(\bigcup_{\substack{\gamma_{\vartheta(1)}^{(i)}\in f(L_{\vartheta(1)}),\\ \ldots,\ \gamma_{\vartheta(n)}^{(i)}\in f(L_{\vartheta(n)})}}\left\{\frac{\left(\prod\limits_{\vartheta\in S_n}\bigl(A+(\delta^2-1)B\bigr)^{\frac{1}{n!}}+(\delta^2-1)\prod\limits_{\vartheta\in S_n}(A-B)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}-\left(\prod\limits_{\vartheta\in S_n}\bigl(A+(\delta^2-1)B\bigr)^{\frac{1}{n!}}-\prod\limits_{\vartheta\in S_n}(A-B)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}{\left(\prod\limits_{\vartheta\in S_n}\bigl(A+(\delta^2-1)B\bigr)^{\frac{1}{n!}}+(\delta^2-1)\prod\limits_{\vartheta\in S_n}(A-B)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}+(\delta-1)\left(\prod\limits_{\vartheta\in S_n}\bigl(A+(\delta^2-1)B\bigr)^{\frac{1}{n!}}-\prod\limits_{\vartheta\in S_n}(A-B)^{\frac{1}{n!}}\right)^{\frac{1}{\sum_{r=1}^{n}P_r}}}\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

where

$$A=\prod_{r=1}^{n}\Bigl(\bigl(\delta+(1-\delta)\gamma_{\vartheta(r)}^{(i)}\bigr)^{nw_{\vartheta(r)}}+(\delta^2-1)\bigl(\gamma_{\vartheta(r)}^{(i)}\bigr)^{nw_{\vartheta(r)}}\Bigr)^{P_r},\qquad B=\prod_{r=1}^{n}\Bigl(\bigl(\delta+(1-\delta)\gamma_{\vartheta(r)}^{(i)}\bigr)^{nw_{\vartheta(r)}}-\bigl(\gamma_{\vartheta(r)}^{(i)}\bigr)^{nw_{\vartheta(r)}}\Bigr)^{P_r},$$

which is called the Hamacher probabilistic linguistic Archimedean dual weighted MM (HPLADWMM) operator.
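The Hamacher generator $g(x)=\ln\left(\frac{\delta+(1-\delta)x}{x}\right)$ contains the Algebraic and Einstein generators as the special cases $\delta=1$ and $\delta=2$, which can be verified numerically in a few lines:

```python
from math import log, isclose

def g_hamacher(x, delta):
    """Hamacher additive generator g(x) = ln((delta + (1 - delta)*x) / x), delta > 0."""
    return log((delta + (1 - delta) * x) / x)

# delta = 1 recovers the Algebraic generator g(x) = -ln(x)
assert isclose(g_hamacher(0.4, 1.0), -log(0.4))
# delta = 2 recovers the Einstein generator g(x) = ln((2 - x) / x)
assert isclose(g_hamacher(0.4, 2.0), log((2 - 0.4) / 0.4))
```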

Especially, when $\delta=1$, the HPLADWMM operator reduces to the APLADWMM operator; when $\delta=2$, the HPLADWMM operator reduces to the EPLADWMM operator.

(4) If $g(x)=-\ln\left(\frac{\tau^{x}-1}{\tau-1}\right)$, $\tau\neq 1$ (the Frank generator), then

$$\mathrm{FPLADWMM}^{P}\bigl(L_1(p_1),L_2(p_2),\ldots,L_n(p_n)\bigr)=f^{-1}\!\left(\bigcup_{\substack{\gamma_{\vartheta(1)}^{(i)}\in f(L_{\vartheta(1)}),\\ \ldots,\ \gamma_{\vartheta(n)}^{(i)}\in f(L_{\vartheta(n)})}}\left\{h^{-1}\!\left(\frac{1}{\sum_{r=1}^{n}P_r}\,h\!\left(g^{-1}\!\left(\frac{1}{n!}\sum_{\vartheta\in S_n}g\!\left(h^{-1}\!\left(\sum_{r=1}^{n}P_r\,h\!\left(g^{-1}\bigl(nw_{\vartheta(r)}\,g(\gamma_{\vartheta(r)}^{(i)})\bigr)\right)\right)\right)\right)\right)\right)\left(\prod_{\vartheta\in S_n}\prod_{r=1}^{n}p_{\vartheta(r)}^{(i)}\right)\right\}\right),$$

with $h(x)=g(1-x)$, which is called the Frank probabilistic linguistic Archimedean dual weighted MM (FPLADWMM) operator. The fully expanded Frank form involves, for each permutation $\vartheta$, the products $A=\prod_{r=1}^{n}\bigl((\tau-1)^{nw_{\vartheta(r)}}-(\tau^{\gamma_{\vartheta(r)}^{(i)}}-1)^{nw_{\vartheta(r)}}\bigr)^{P_r}$ and $B=\prod_{r=1}^{n}\bigl((\tau^{\gamma_{\vartheta(r)}^{(i)}}-1)^{nw_{\vartheta(r)}}+(\tau-1)^{nw_{\vartheta(r)}-1}\bigr)^{P_r}$.


4. Two MADM methods based on MM aggregation operators for PLTSs

In this part, we utilize the PLAWMM operator or the PLADWMM operator to solve MADM problems under a probabilistic linguistic environment. Because the PLAWMM and PLADWMM operators do not have a specific representation until the functions $g(x)$ and $h(x)$ are fixed, we first select a specific function $g(x)$ so that the evaluation information can be handled directly. Then we use a special case of the PLAWMM or PLADWMM operator to aggregate the input arguments.


Because the Hamacher operator is a generalization of the Algebraic and Einstein operators, and is computationally simpler than the Frank operator, we use the Hamacher operator to instantiate the PLAWMM and PLADWMM operators. Therefore, the HPLAWMM operator and the HPLADWMM operator are utilized to deal with the evaluation information.

For a MADM problem, let $A=\{A_1,A_2,\ldots,A_m\}$ be the set of alternatives, $C=\{C_1,C_2,\ldots,C_n\}$ be the set of attributes, and $w_r$ be the weight of attribute $C_r$ $(r=1,2,\ldots,n)$, with $w_r\in[0,1]$ and $\sum_{r=1}^{n}w_r=1$. Suppose that $R=[r_{er}]_{m\times n}$ is the decision matrix, where $r_{er}=\{L_{er}^{(i)}(p_{er}^{(i)})\mid L_{er}^{(i)}\in S,\ p_{er}^{(i)}\ge 0,\ i=1,2,\ldots,\#L_{er}(p_{er}),\ \sum_{i=1}^{\#L_{er}(p_{er})}p_{er}^{(i)}\le 1\}$ is the evaluation value of alternative $A_e$ on attribute $C_r$, expressed in the form of a PLTS.

In the following, we will apply the HPLAWMM (or HPLADWMM) operator to rank the alternatives. The schematic diagram of the proposed methods is shown in Fig. 1, and the steps are summarized as follows:

In order to simplify the computation, we select the LSF $f_1(s_k)=\theta_k=\frac{k}{t}$, which is simple and commonly used.
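Under this choice, the LSF and its inverse are trivial to implement. A sketch (assuming $t$ is the maximum subscript of the LTS, e.g. $t=4$ for $S=\{s_0,\ldots,s_4\}$):

```python
def f1(k, t):
    """LSF f1: map the subscript k of linguistic term s_k (0 <= k <= t) to k / t in [0, 1]."""
    return k / t

def f1_inv(theta, t):
    """Inverse LSF: map theta in [0, 1] back to a (possibly virtual) subscript."""
    return theta * t
```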

Step 1. Normalize the attribute values. In general, the attributes are of cost type or benefit type; we convert the cost type to the benefit type by Eq. (28). (For convenience, the converted result is still denoted by $L_{er}(p_{er})$.)

$$r_{er}=f^{-1}\left(\bigcup_{\gamma_{er}^{(i)}\in f(L_{er})}\left\{\bigl(1-\gamma_{er}^{(i)}\bigr)\bigl(p_{er}^{(i)}\bigr)\right\}\right).\qquad(28)$$
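Under the LSF $f_1$, Eq. (28) simply replaces each term $s_k$ by $s_{t-k}$ (since $1-k/t=(t-k)/t$) while keeping the probabilities. A minimal sketch, with a PLTS stored as (subscript, probability) pairs (our own representation):

```python
def normalize_cost_plts(plts, t):
    """Eq. (28): convert a cost-type PLTS into a benefit-type one by
    replacing each term s_k with s_(t - k); probabilities are unchanged."""
    return [(t - k, p) for (k, p) in plts]
```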

Step 2. Aggregate all attribute values $r_{er}$ $(r=1,2,\ldots,n)$ of each alternative into the comprehensive value $r_e$ by the HPLAWMM or HPLADWMM operator as follows:

$r_e=\mathrm{HPLAWMM}^{P}(r_{e1},r_{e2},\ldots,r_{en})$ (for the optimistic decision-makers),

or $r_e=\mathrm{HPLADWMM}^{P}(r_{e1},r_{e2},\ldots,r_{en})$ (for the pessimistic decision-makers).

Step 3. Rank the $r_e$ based on the score function.

Step 4. Rank all alternatives. The bigger the PLTS $r_e$ is, the better the alternative $A_e$ is.
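Step 3 needs a score function for PLTSs. This section does not restate its definition, so the sketch below uses the common choice of the probability-weighted average subscript with renormalized probabilities (following Pang et al. [16]); treat this as an assumption:

```python
def score(plts):
    """Score of a PLTS given as (subscript, probability) pairs:
    probability-weighted average subscript, with probabilities renormalized."""
    total_p = sum(p for _, p in plts)
    return sum(k * p for k, p in plts) / total_p
```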

[Fig. 1 (flowchart): identify the probabilistic linguistic decision matrix; if the attribute types are not consistent, normalize the decision matrix; aggregate all attributes of each alternative by the HPLAWMM/HPLADWMM operator; calculate the score value of each alternative; rank the alternatives by the score values; end.]

Fig. 1 Schematic diagram of the proposed methods

5. Illustrative example

In the following, we utilize a practical example to illustrate the practicality and effectiveness of the proposed methods. For this decision-making problem, suppose that there are four potential projects $(A_1,A_2,A_3,A_4)$ to be evaluated by a company's board of directors. These projects may cause losses and gains, so it is necessary to select the optimal one among them to maximize the profit with respect to four attributes: (1) $C_1$: financial perspective; (2) $C_2$: customer satisfaction; (3) $C_3$: internal business process; (4) $C_4$: learning and growth. The experts evaluate the projects by the LTS $S=\{s_0,s_1,s_2,s_3,s_4\}$ = {low, a little low, medium, a little high, high}. The preference information given by the experts is shown in Table 3. Suppose the weight vector of the attributes is $w=(0.2,0.3,0.3,0.2)^{T}$.

Table 3. The decision matrix with PLTSs

|    | C1         | C2         | C3                  | C4         |
| A1 | {(s2,1)}   | {(s1,0.6)} | {(s3,0.4),(s4,0.4)} | {(s3,0.8)} |
| A2 | {(s2,0.8)} | {(s1,0.8)} | {(s0,0.6),(s1,0.2)} | {(s2,0.6)} |
| A3 | {(s1,0.4)} | {(s3,0.6)} | {(s2,0.8),(s3,0.2)} | {(s0,0.5)} |
| A4 | {(s1,0.8)} | {(s1,0.6)} | {(s4,0.5),(s3,0.5)} | {(s2,1)}   |

5.1 The decision steps

Step 1: Because all attributes are of benefit type, there is no need for standardization.

Step 2: Aggregate all attribute values $L_{er}(p_{er})$ $(r=1,2,\ldots,n)$ to obtain the comprehensive evaluation value of each alternative by the HPLAWMM or HPLADWMM operator (suppose $P=(0.25,0.25,0.25,0.25)$); the results are shown in Table 4.

Table 4. The comprehensive evaluation values $r_e$ by the HPLAWMM or HPLADWMM operator

| operator  | r1                          | r2                          | r3                          | r4                          |
| HPLAWMM   | {(s1.78,0.19),(s1.97,0.19)} | {(s0,0.23),(s1.18,0.08)}    | {(s0,0.12)}                 | {(s1.46,0.24),(s1.33,0.24)} |
| HPLADWMM  | {(s2.48,0.19),(s4.00,0.19)} | {(s1.52,0.23),(s1.65,0.08)} | {(s1.69,0.10),(s2.04,0.02)} | {(s4.00,0.24),(s1.96,0.24)} |

Step 3: Calculate the score values (SVs) $S(r_e)$ $(e=1,2,3,4)$ of the comprehensive evaluation values $r_e$; they are listed in Table 5.

Step 4: Rank all alternatives. According to the SVs $S(r_e)$ $(e=1,2,3,4)$, we rank the alternatives $A_1,A_2,A_3,A_4$. From Table 5, we know that the ranking results obtained by the different operators are the same, and the optimal alternative is $A_1$.

Table 5. The SVs $S(r_e)$ of the comprehensive evaluation values

| operator  | S(r1) | S(r2) | S(r3) | S(r4) | Ranking results   |
| HPLAWMM   | 2.09  | 0.35  | 0     | 1.56  | A1 ≻ A4 ≻ A2 ≻ A3 |
| HPLADWMM  | 3.24  | 1.56  | 1.76  | 2.98  | A1 ≻ A4 ≻ A2 ≻ A3 |
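Step 4 is then a plain sort by score value; e.g. for the HPLAWMM row of Table 5:

```python
scores = {"A1": 2.09, "A2": 0.35, "A3": 0.0, "A4": 1.56}  # SVs from Table 5 (HPLAWMM row)
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))  # A1 > A4 > A2 > A3
```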

5.2 The effect of the parameter vector P and the parameter δ on the ranking order of the alternatives

To discuss the role of the parameter vector $P$ in the ranking order, different values of $P$ are applied; the corresponding ranking results are shown in Table 6.

As shown in Table 6, the ranking results obtained by the HPLAWMM and HPLADWMM operators with different $P$ are slightly different. Besides, in the HPLAWMM operator the values of the SVs become greater as the entries of the parameter vector $P$ become larger, whereas the SVs of the HPLADWMM operator behave in the opposite way. Based on this, the parameter vector $P$ reflects the decision makers' mentality. In practical applications, if a decision maker is optimistic, the parameter vector is generally defined as $P=(1,0,\ldots,0)$ in the HPLAWMM operator or as $P=(0,0,\ldots,1)$ in the HPLADWMM operator; if the decision maker is pessimistic, the parameter vector is generally defined as $P=(0,0,\ldots,1)$ in the HPLAWMM operator or as $P=(1,0,\ldots,0)$ in the HPLADWMM operator; if the DM is neutral, the parameter vector is generally defined as $P=(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})$. So, every decision maker can select the suitable parameter vector $P$ with respect to their risk preferences.

Table 6. Ranking orders for different parameter vectors P (suppose δ = 1)

| P         | HPLAWMM: S(r1), S(r2), S(r3), S(r4) | Ranking           | HPLADWMM: S(r1), S(r2), S(r3), S(r4) | Ranking           |
| (1,0,0,0) | 3.20, 1.28, 2.05, 3.01              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.08, 0.33, 0, 1.67                  | A1 ≻ A4 ≻ A2 ≻ A3 |
| (2,0,0,0) | 3.24, 1.38, 2.31, 3.09              | A1 ≻ A4 ≻ A3 ≻ A2 | 1.23, 0.18, 0, 0.95                  | A1 ≻ A4 ≻ A2 ≻ A3 |
| (3,0,0,0) | 3.28, 1.44, 2.46, 3.16              | A1 ≻ A4 ≻ A3 ≻ A2 | 0.87, 0.12, 0, 0.66                  | A1 ≻ A4 ≻ A2 ≻ A3 |
| (1,1,0,0) | 2.31, 1.19, 1.58, 1.76              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.49, 1.31, 1.49, 1.94               | A1 ≻ A4 ≻ A3 ≻ A2 |
| (1,1,1,0) | 2.17, 1.18, 1.22, 1.64              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.65, 1.51, 1.73, 2.12               | A1 ≻ A4 ≻ A3 ≻ A2 |
| (1,1,1,1) | 2.09, 0.35, 0, 1.56                 | A1 ≻ A4 ≻ A2 ≻ A3 | 3.24, 1.56, 1.76, 2.98               | A1 ≻ A4 ≻ A3 ≻ A2 |

For further discussing the effect of the parameter δ on ranking results, we apply different δ in HPLAWMM and HPLADWMM operators. The corresponding evaluation results are shown in Table 7. As shown in Table 7, we find that the values of SV S (re ) decrease with increasing parameter values δ from 0.1 to 100 in HPLAWMM operator, while the values of SV S (re ) increase with increasing parameter values δ from 0.1 to 100 in

ce pt

HPLADWMM operator. Therefore, the parameter δ also can be regard as the decision makers’ risk preference. When the parameter δ is assigned different values, the values of SVs are different although the ranking orders are the same in this example. Generally, the values of parameter are δ = 1 or δ = 2 , as they are Algebraic operators or Einstein operators. Table 7. Ranking results by different values of δ (suppose P = (1,0,0,0) )

| δ   | HPLAWMM: S(r1), S(r2), S(r3), S(r4) | Ranking           | HPLADWMM: S(r1), S(r2), S(r3), S(r4) | Ranking           |
| 0.1 | 3.27, 1.36, 2.26, 3.09              | A1 ≻ A4 ≻ A3 ≻ A2 | 1.86, 0.32, 0.00, 1.49               | A1 ≻ A4 ≻ A2 ≻ A3 |
| 0.5 | 3.23, 1.32, 2.14, 3.04              | A1 ≻ A4 ≻ A3 ≻ A2 | 1.99, 0.33, 0.00, 1.59               | A1 ≻ A4 ≻ A2 ≻ A3 |
| 1.0 | 3.20, 1.28, 2.05, 3.01              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.08, 0.33, 0.00, 1.67               | A1 ≻ A4 ≻ A2 ≻ A3 |
| 1.5 | 3.19, 1.25, 1.99, 2.98              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.14, 0.33, 0.00, 1.72               | A1 ≻ A4 ≻ A2 ≻ A3 |
| 2   | 3.17, 1.22, 1.95, 2.97              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.18, 0.33, 0.00, 1.76               | A1 ≻ A4 ≻ A2 ≻ A3 |
| 5   | 3.14, 1.13, 1.79, 2.93              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.32, 0.34, 0.00, 1.88               | A1 ≻ A4 ≻ A2 ≻ A3 |
| 100 | 3.11, 0.78, 1.28, 2.89              | A1 ≻ A4 ≻ A3 ≻ A2 | 2.68, 0.34, 0.00, 2.29               | A1 ≻ A4 ≻ A2 ≻ A3 |

5.3 Comparative analyses

To show the prominent advantages of the proposed methods in dealing with MADM problems under a probabilistic linguistic environment, we compare them with other existing methods, including the probabilistic linguistic weighted aggregation operators [16,18] and the extended TOPSIS method in [16].

5.3.1 Comparison with Pang et al.'s method [16] and Gou et al.'s method [18]

We make a comparison between our proposed methods and the methods in [16] and [18]. The specific ranking results are shown in Table 8.

Table 8. Ranking results by different methods

| Methods | SVs | Ranking |
| PLWA operator [16] | S(r1)=1.48, S(r2)=0.83, S(r3)=0.95, S(r4)=1.27 | A1 ≻ A4 ≻ A3 ≻ A2 |
| PLWG operator [16] | S(r1)=1.59, S(r2)=0.61, S(r3)=0, S(r4)=1.38 | A1 ≻ A4 ≻ A2 ≻ A3 |
| PLWA-N operator [18] | S(r1)=3.20, S(r2)=1.28, S(r3)=2.05, S(r4)=3.01 | A1 ≻ A4 ≻ A3 ≻ A2 |
| HPLAWMM operator (suppose P=(1,0,0,0) and δ=1) | S(r1)=3.20, S(r2)=1.28, S(r3)=2.05, S(r4)=3.01 | A1 ≻ A4 ≻ A3 ≻ A2 |
| HPLADWMM operator (suppose P=(1,0,0,0) and δ=1) | S(r1)=2.08, S(r2)=0.33, S(r3)=0, S(r4)=1.67 | A1 ≻ A4 ≻ A2 ≻ A3 |

From Table 8, we find that the ranking orders obtained by the PLWA operator [16] and the HPLAWMM operator are the same although the SVs are different, and the ranking results obtained by the PLWG operator [16] and the HPLADWMM operator are also the same. This shows that the proposed methods are reasonable and effective. In the following, we further compare the differences between the proposed methods and the existing methods.

(1) Compared with the methods based on the PLWA and PLWG operators in [16], the operational laws for PLTSs in [16] may exceed the bounds of the LTS, and the corresponding probability information is lost after aggregation. As we all know, it is unreasonable that the comprehensive values produced by [16] reduce to hesitant fuzzy LTSs. Our proposed methods make sure that the comprehensive values are still PLTSs and that the possible LTs in a PLTS remain within the bounds of the LTS after information fusion.

(2) Compared with the PLWA-N operator in [18], we can see that the SVs obtained by the PLWA-N operator in [18] and by the HPLAWMM operator in this paper are the same. When the parameter $\delta$ is 1 and the parameter vector is $P=(1,0,0,0)$, the HPLAWMM operator reduces to the PLWA-N operator in [18]. In other words, the PLWA-N operator in [18] is a special case of the HPLAWMM operator. Based on the above analyses, our proposed methods are more general and flexible than Gou et al.'s method [18]. In addition, the proposed methods are based on MM aggregation operators and ATT, which give DMs the opportunity to select an appropriate parameter value $\delta$ and parameter vector $P$ according to their risk preferences. The operational laws in Gou et al.'s method [18] are a special case of our operational laws, and the PLWA-N operator in [18] is also a special case of the PLAWMM operator. Therefore, our methods have a wider range of applications.

(3) Compared with the methods based on the PLWA-N [18], PLWA [16] and PLWG operators [16], the obvious advantage of the proposed methods is that the HPLAWMM and HPLADWMM operators pay attention to the interaction among multiple input arguments, whereas the PLWA (PLWA-N) and PLWG operators assume the input arguments are independent. Therefore, our proposed methods are more suitable for handling practical decision-making problems in which the input arguments have interacting relationships.

5.3.2 Comparison with the extended TOPSIS method [16]

To show another advantage of our proposed methods, namely that they provide the comprehensive value of each alternative, we compare them with the extended TOPSIS method.

From Table 9, we can find that the proposed methods not only give the ranking results of all alternatives but also give the comprehensive values, while the extended TOPSIS method only gives the ranking results. Moreover, the extended TOPSIS method consists of three steps (PIS and NIS, deviation degrees, and closeness coefficient), which may cause information loss, especially of the corresponding probability information. Our proposed methods, in contrast, keep the evaluation information as complete as possible. Based on the above analyses, our proposed methods are suitable for dealing with MADM problems with probabilistic linguistic information.

Table 9. Ranking results by different methods

| Methods | Comprehensive values | Ranking |
| Extended TOPSIS method [16] | No | A1 ≻ A4 ≻ A3 ≻ A2 |
| HPLAWMM operator (δ=1, P=(1,0,0,0)) | r1={(s2.40,0.19),(s4.00,0.19)}, r2={(s1.22,0.23),(s1.45,0.08)}, r3={(s1.98,0.10),(s2.36,0.02)}, r4={(s2.01,0.24),(s4.00,0.24)} | A1 ≻ A4 ≻ A3 ≻ A2 |
| HPLADWMM operator (δ=1, P=(1,0,0,0)) | r1={(s2.21,0.19),(s3.14,0.19)}, r2={(s0,0.23),(s1.36,0.08)}, r3={(s0,0.12)}, r4={(s1.78,0.24),(s2.80,0.24)} | A1 ≻ A4 ≻ A2 ≻ A3 |

5.3.3 Comparison with Gou et al.'s method [22]

In order to further analyze the validity of the proposed methods, we compare them with Gou et al.'s method [22] built on the HFLWBM operator, because HFLTSs can be rewritten as PLTSs. An HFLTS contains several possible LTs but does not include the importance degrees of these LTs, so it can be considered that all possible LTs in an HFLTS have the same possibility or the same importance degree.

In this section, we cite the numerical example from reference [22]; the hesitant fuzzy linguistic decision matrix is shown in Table 10.

Table 10. Evaluation values of four hospitals

|    | C1             | C2              | C3              |
| A1 | <x11,{s0,s1}>  | <x12,{s2}>      | <x13,{s−1,s0}>  |
| A2 | <x21,{s1}>     | <x22,{s−1,s0}>  | <x23,{s2}>      |
| A3 | <x31,{s1}>     | <x32,{s1,s2}>   | <x33,{s2,s3}>   |
| A4 | <x41,{s0,s1}>  | <x42,{s0,s1}>   | <x43,{s1}>      |

According to Table 10, the hesitant fuzzy linguistic decision matrix is rewritten as the probabilistic linguistic decision matrix shown in Table 11. Then we utilize the proposed method based on the PLAWMM operator to fuse the probabilistic linguistic information and compare the ranking result with the one produced by Gou et al.'s method [22]. The ranking results are shown in Table 12.

Table 11. Probabilistic linguistic decision matrix

|    | C1                         | C2                          | C3                         |
| A1 | <x11,{(s0,0.5),(s1,0.5)}>  | <x12,{(s2,1)}>              | <x13,{(s−1,0.5),(s0,0.5)}> |
| A2 | <x21,{(s1,1)}>             | <x22,{(s−1,0.5),(s0,0.5)}>  | <x23,{(s2,1)}>             |
| A3 | <x31,{(s1,1)}>             | <x32,{(s1,0.5),(s2,0.5)}>   | <x33,{(s2,0.5),(s3,0.5)}>  |
| A4 | <x41,{(s0,0.5),(s1,0.5)}>  | <x42,{(s0,0.5),(s1,0.5)}>   | <x43,{(s1,1)}>             |
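The HFLTS-to-PLTS rewriting used to build Table 11 from Table 10 just spreads probability uniformly over the possible terms; a sketch:

```python
def hflts_to_plts(term_subscripts):
    """Rewrite an HFLTS (list of term subscripts) as a PLTS by giving every
    possible term the same probability, as done for Table 11."""
    p = 1.0 / len(term_subscripts)
    return [(k, p) for k in term_subscripts]
```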

Table 12. Ranking results by different methods

| Aggregation operator | Parameter value | Ranking results |
| HFLWBM operator [22] | p = 1, q = 1 | A3 ≻ A4 ≻ A2 ≻ A1 |
| HPLAWMM operator | P = (1,1,0), δ = 1 | A3 ≻ A4 ≻ A2 ≻ A1 |
| HPLAWMM operator | P = (1,1,0), δ ≥ 50.8 | A3 ≻ A2 ≻ A4 ≻ A1 |
| HPLAWMM operator | P = (1,1,1), δ = 1 | A3 ≻ A4 ≻ A1 ≻ A2 |

From Table 12, we can see that the ranking orders obtained by the HFLWBM operator [22] and the HPLAWMM operator ($P=(1,1,0)$, $\delta=1$) are the same in this case, which further verifies that our proposed methods are reasonable and effective. Moreover, the ranking result obtained by the HPLAWMM operator changes from $A_3\succ A_4\succ A_2\succ A_1$ to $A_3\succ A_2\succ A_4\succ A_1$ when $\delta\ge 50.8$, which demonstrates that the proposed methods are more general because they use ATT. In addition, the ranking order produced by our proposed method based on the HPLAWMM operator ($P=(1,1,1)$, $\delta=1$) is slightly different from the one produced by Gou et al.'s method [22]. This difference shows an advantage of our proposed method: it considers the interrelationship among all input arguments, while Gou et al.'s method [22] only considers the interrelationship between two arguments.

5.3.4 Comparison with the MM operator [28]

The MM operator was proposed by Muirhead [28] and can only handle attribute values that take the form of crisp numbers. However, in practical decision-making problems, crisp numbers are ill-suited to describing qualitative information due to the increasing complexity and variety of the socio-economic environment: DMs usually express their judgments in terms of several possible linguistic terms with different preference distributions. PLTSs satisfy this requirement well, but the MM operator cannot solve decision-making problems under a probabilistic linguistic environment, so it is essential to propose some probabilistic linguistic MM operators for such problems. How to utilize the MM to aggregate PLTSs is the key point. A PLTS is quite different from a crisp number, both in information form and in operational rules; the operational rules of crisp numbers are inapplicable to PLTSs, so it is not as simple as replacing the crisp numbers in Muirhead's MM operator [28] with PLTSs. Based on this, we proposed the generalized probabilistic linguistic operational rules by ATT, which are the foundation of the information aggregation operators; with their help, we fused the PLTSs into the MM operator. To sum up, our proposed method can utilize the MM operator to integrate PLTSs based on the novel operational rules, so the proposed operators are not simple extensions of the Muirhead mean operator.

Based on the above analysis, the comparisons of the different methods are summarized in Table 13.

Table 13. The comparisons of different methods

| Methods | Considers the interrelationship of multi-input arguments | Obtains the comprehensive values | Avoids information loss | Flexible through parameters |
| Extended TOPSIS method [16] | No | No | No | No |
| PLWA operator [16] | No | Yes | No | No |
| PLWG operator [16] | No | Yes | No | No |
| PLWA-N operator [18] | No | Yes | Yes | No |
| HFLWBM operator [22] | Yes | Yes | Yes | No |
| HPLAWMM operator | Yes | Yes | Yes | Yes |
| HPLADWMM operator | Yes | Yes | Yes | Yes |
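For reference, the classical crisp-number MM operator of Muirhead [28], which the comparison above starts from, can be sketched as follows; with $P=(1,0,\ldots,0)$ it reduces to the arithmetic mean and with $P=(\frac1n,\ldots,\frac1n)$ to the geometric mean:

```python
from itertools import permutations
from math import prod

def muirhead_mean(a, P):
    """Classical Muirhead mean for crisp positive numbers [28]:
    MM^P(a) = ((1/n!) * sum over permutations of prod_j a_{perm(j)}^{P_j}) ** (1/sum(P))."""
    n = len(a)
    n_fact = prod(range(1, n + 1))
    total = sum(prod(a[idx] ** p for idx, p in zip(perm, P))
                for perm in permutations(range(n)))
    return (total / n_fact) ** (1.0 / sum(P))
```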

Based on the above comparative analyses, the advantages of our proposed methods can be summarized as follows:

(1) Our proposed methods capture the interrelationship among any number of arguments, which is necessary in real decision-making processes, while the existing methods either do not consider the interrelation among arguments or only consider the interrelationship between two arguments. This is a novelty in processing the interrelationships among multi-input arguments. Although some existing methods can solve MADM problems in which the input arguments are mutually dependent and correlated, they cannot handle decision-making problems under a probabilistic linguistic environment. There also exist methods based on probabilistic linguistic aggregation operators, but they can only integrate probabilistic linguistic information that is mutually independent. Given all that, our proposed methods can handle probabilistic linguistic information regardless of how strongly the input arguments are interrelated, which is an unusual property.

(2) Our proposed methods are more general and flexible than others because they generalize many aggregation operators, such as the PLAWA/PLAWG operators, the PLAWBM/PLADWBM operators, and the PLAWMSM/PLADWMSM operators. This is the second novelty. If the input arguments are independent, our proposed methods reduce to the PLAWA/PLAWG operators. If arbitrary pairs of arguments interact with each other, they reduce to the PLAWBM/PLADWBM operators. Certainly, if any number of input arguments are mutually dependent, they reduce to the PLAWMSM/PLADWMSM operators. From this point of view, our proposed methods can solve MADM problems more flexibly and have a very broad application scope.

(3) The proposed operational laws have a wide range of flexibility and versatility based on ATT and LSFs. They are the generalization of the existing operational rules for PLTSs, keep the possible LTs within the bounds of the LTS, and retain the probability information after aggregation. This is the foundation of the proposed methods and the third novelty. The proposed operational laws reduce to specific cases by selecting different ATTs according to the purpose, so they have good suitability, scalability, and acceptability. The choice among the various LSFs can be made according to the semantics of the LTs in real applications, which keeps the original information complete.

(4) Our proposed methods not only obtain the ranking results but also obtain the quantitative relationships among the alternatives through the comprehensive values, which avoids information loss in the process of aggregation, while some other probabilistic linguistic methods based on traditional decision-making methods can only give the ranking results. Obviously, this is another novelty.

6. Conclusion

A PLTS consists of several possible linguistic terms and their relative probabilities, which makes it suitable for expressing qualitative evaluation information in practical decision-making processes. The operational laws of PLTSs are the foundation of research on PLTSs; however, the existing operational laws of PLTSs have some weaknesses. For example, under Pang et al.'s operational laws [16] the aggregated result reduces to an HFLTS, which leads to the loss of the preference information about the possible linguistic terms; Gou et al.'s operational laws [18] are constructed on the Algebraic TC and TN, which are a special case of ATT. To overcome these limitations, we proposed more general operational laws for PLTSs based on ATT and LSFs in this paper. Then we analyzed the special cases obtained with different $g(x)$ and discussed some operational laws for PLTSs based on these special cases. Moreover, with the help of the proposed operational rules, we fused PLTSs into the MM and DMM operators and developed some novel operators, including the PLAMM, PLAWMM, PLADMM and PLADWMM operators. After that, we proposed two MADM methods based on the PLAWMM and PLADWMM operators with respect to the Hamacher additive generator. The prominent properties of the proposed methods are that they capture the interrelationship among any arguments and that they generalize many aggregation operators, so they are more reasonable and more effective than Gou et al.'s methods [18,22] and Pang et al.'s method [16].

In further research, it is essential to use our proposed methods to solve practical problems such as supplier selection, investment appraisal, risk assessment, etc. On the other hand, we will further study other special cases of ATT under a probabilistic linguistic environment. Certainly, we will also combine ATT with information aggregation operators in other fuzzy environments, involving possibility neutrosophic soft sets, probabilistic dual hesitant fuzzy sets, generalized orthopair fuzzy sets, and so on.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Nos. 71771140 and 71471172), the Special Funds of the Taishan Scholars Project of Shandong Province (No. ts201511045), the Shandong Provincial Social Science Planning Project (Nos. 16CGLJ31 and 16CKJJ27), the Teaching Reform Research Project of Undergraduate Colleges and Universities in Shandong Province (2015Z057), and the Key Research and Development Program of Shandong Province (2016GNC110016).

ip t

References [1] X.H. Yu, Z.S. Xu, J.Q. Hu, et al., Systematic decision making: a extended multi-criteria decision making model, Technological and Economic Development of Economy 23(1) (2017) 157-177.

cr

[2] M.A.Butt, M. Akram, A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler, SpringerPlus 5(1) (2016) 1-17

[3] M.A. Butt, M. Akram, A novel fuzzy decision-making system for CPU scheduling algorithm, Neural Computing and

us

Applications 27(7) (2016) 1927-1939.

[4] S. Habib, M. Akram, A. Ashraf, Fuzzy Climate Decision Support Systems for Tomatoes in High Tunnels, International Journal of Fuzzy Systems 19(3) (2017) 751-775. Systems (2017) https://doi.org/10.1007/s40815-017-0368-0

an

[5] F. Zafar, M. Akram, A novel decision-making method based on rough fuzzy information, International Journal of Fuzzy [6] M.Sarwar,M.Akram, Certain algorithms for computing strength of competition in bipolar fuzzy graphs, International

M

Journal of Uncertainty, Fuzziness and Knowledge Based Systems 25(6)(2017) 877-896. [7] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-I, Information sciences 8(3) (1975) 199-249. fuzzy systems 8(6) (2000) 746-752.

ed

[8] F. Herrera, L. Martinez, A 2-tuple fuzzy linguistic representation model for computing with words, IEEE Transactions on [9] Z.S. Xu, Deviation measures of linguistic preference relations in group decision making, Omega 33 (3) (2005) 249-254.

ce pt

[10] I.B. Türkşen, Type 2 representation and reasoning for CWW, Fuzzy Sets and Systems 127(1) (2002) 17-36. [11] R.M. Rodriguez, L. Martinez, F. Herrera, Hesitant fuzzy linguistic term sets for decision making, IEEE Transactions on Fuzzy Systems 20(1) (2012) 109-119.

[12] J.Q. Wang, J.T. Wu, J. Wang, et al, Interval-valued hesitant fuzzy linguistic sets and their applications in multi-criteria decision-making problems, Information Sciences 288 (2014) 55-72. [13] C.P. Wei, R.M. Rodríguez, L. Martínez, Uncertainty measures of extended hesitant fuzzy linguistic term sets, IEEE

Ac

Transactions on Fuzzy Systems (2017) Doi: 10.1109/TFUZZ.2017.2724023. [14] C.P. Wei, N. Zhao, X. Tang, Operators and comparisons of hesitant fuzzy linguistic term sets, IEEE Transactions on Fuzzy Systems 22 (3) (2014) 575-585. [15] J.T. Wu, J.Q. Wang, J. Wang, et al, Hesitant fuzzy linguistic multicriteria decision making method based on generalized prioritized aggregation operator, The Scientific World Journal 2014 (2014) 1-16. [16] Q. Pang, H. Wang, Z.S. Xu, Probabilistic linguistic term sets in multi-attribute group decision making, Information Sciences 369 (2016) 128-143. [17] C.Z. Bai, R. Zhang, L. Qian, et al, Comparisons of probabilistic linguistic term sets for multi-criteria decision making, Knowledge-Based Systems 119 (2017) 284-291. [18] X.J. Gou, Z.S. Xu, Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets, Information Sciences 372 (2016) 407-427. [19] H.C. Liao, L.S. Jiang, Z.S. Xu, et al, A linear programming method for multiple criteria decision making with probabilistic linguistic information, Information Sciences 415 (2017) 341-355. 31

Page 31 of 33

[20] X. Zhang, X. Xing, Probabilistic linguistic VIKOR method to evaluate green supply chain initiatives, Sustainability 9 (7) (2017) 1-18. [21] Z.S. Xu, S.Q. Luo, H.C. Liao, A probabilistic linguistic PROMETHEE method and its application in medical service, Journal of Systems Engineering (2017), in press. [22] X.J. Gou, Z.S. Xu, H.C. Liao, Multiple criteria decision making based on Bonferroni means with hesitant fuzzy linguistic information, Soft Computing 21(21)(2017) 6515–6529. [23] M.M. Xia, Z.S. Xu, B. Zhu, Geometric Bonferroni means with their application in multi-criteria decision making, Knowledge-Based Systems 40 (2013) 88-100.


[24] P.D. Liu, F. Teng, Multiple attribute group decision making methods based on some normal neutrosophic number Heronian mean operators, Journal of Intelligent & Fuzzy Systems 32 (3) (2017) 2375-2391.

[25] D.J. Yu, Hesitant fuzzy multi-criteria decision making methods based on Heronian mean, Technological and Economic Development of Economy 23 (2) (2017) 296-315.

[26] J.D. Qin, X.W. Liu, Approaches to uncertain linguistic multiple attribute decision making based on dual Maclaurin symmetric mean, Journal of Intelligent & Fuzzy Systems 29 (1) (2015) 171-186.

[27] J.D. Qin, X.W. Liu, W. Pedrycz, Hesitant fuzzy Maclaurin symmetric mean operators and its application to multiple-attribute decision making, International Journal of Fuzzy Systems 17 (4) (2015) 509-520.
[28] R.F. Muirhead, Some methods applicable to identities and inequalities of symmetric algebraic functions of n letters, Proceedings of the Edinburgh Mathematical Society 21 (1902) 144-162.

[29] P.D. Liu, D.F. Li, Some Muirhead mean operators for intuitionistic fuzzy numbers and their applications to group decision making, PLoS ONE 12 (1) (2017) 1-28.
[30] J.D. Qin, X.W. Liu, 2-tuple linguistic Muirhead mean operators for multiple attribute group decision making and its application to supplier selection, Kybernetes 45 (1) (2016) 2-29.
[31] P.D. Liu, X.L. You, Interval neutrosophic Muirhead mean operators and their application in multiple attribute group decision-making, International Journal for Uncertainty Quantification 7 (4) (2017) 303-334.


[32] J. Dombi, A general class of fuzzy operators, the DeMorgan class of fuzzy operators and fuzziness measures induced by fuzzy operators, Fuzzy Sets and Systems 8 (2) (1982) 149-163.
[33] E.P. Klement, R. Mesiar, E. Pap, Triangular Norms, Kluwer Academic Publishers, Dordrecht, 2000.


[34] G. Klir, B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall, 1995.
[35] H. Zhou, J.Q. Wang, H.Y. Zhang, et al, Linguistic hesitant fuzzy multi-criteria decision-making method based on evidential reasoning, International Journal of Systems Science 47 (2) (2016) 314-327.
[36] A.Y. Liu, F.J. Liu, Research on method of analyzing the posterior weight of experts based on new evaluation scale of linguistic information, Chinese J Manage Sci 19 (2011) 149-155.


[37] G.Y. Bao, X.L. Lian, M. He, et al, Improved two-tuple linguistic representation model based on new linguistic evaluation scale, Control and Decision 25 (5) (2010) 780-784.



Research Highlights
(1) We propose general operational laws for PLTSs based on ATT and LSFs.
(2) We propose some probabilistic linguistic Muirhead mean aggregation operators.
(3) The operators capture the interrelationships among any number of arguments.
(4) Some desirable properties and special cases of these operators are discussed.
(5) Novel MADM methods based on these operators are proposed.
(6) Some numerical examples are provided to validate the proposed methods.
