Expert Systems With Applications 99 (2018) 83–92
Granulating linguistic information in decision making under consensus and consistency

Francisco Javier Cabrerizo a,∗, Juan Antonio Morente-Molinera b, Witold Pedrycz c,e, Atefe Taghavi d, Enrique Herrera-Viedma a,e,∗∗

a Department of Computer Science and Artificial Intelligence, University of Granada, C/ Periodista Daniel Saucedo Aranda s/n, Granada 18071, Spain
b Universidad Internacional de La Rioja (UNIR), Spain
c Department of Electrical & Computer Engineering, University of Alberta, Edmonton, Canada
d Graduate University of Advanced Technology, Kerman, Iran
e Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Article history: Received 30 June 2017; Revised 28 December 2017; Accepted 20 January 2018
Keywords: Group decision making; Granulation; Linguistic information; Consistency; Consensus
Abstract

This study is concerned with group decision making contexts in which linguistic preference relations are used to provide the evaluations. On the one hand, granulation of the linguistic terms, which are used as entries of the preference relations, is carried out for the purpose of dealing with the linguistic information. Formally, the problem is expressed as a multi-objective optimization task in which a performance index, composed of the weighted averaging of the criteria of consensus and consistency, is maximized via an appropriate association of the linguistic terms with information granules formed as intervals. On the other hand, once the linguistic terms are made operational by mapping them to the corresponding intervals, a selection process, in which the consistency achieved by each agent is also considered, is employed with the intent of constructing the solution to the decision problem under consideration. An experimental study is reported, demonstrating the main features of the proposed approach. Furthermore, some drawbacks and advantages are also analyzed. © 2018 Elsevier Ltd. All rights reserved.
1. Introduction

A general group decision making scenario, as considered here, is composed of various agents (experts, decision makers, individuals, etc.) and a collection of alternatives being possible solutions to the considered decision problem (Capuano, Chiclana, Fujita, Herrera-Viedma, & Loia, 2017; Evangelos, 2000; Hwang & Lin, 1987; Liu, Dong, Chiclana, Cabrerizo, & Herrera-Viedma, 2017). The objective of this kind of decision problem is to rank the alternatives that are considered as possible solutions from best to worst according to the testimonies provided by the agents. For the characterization of the value of each alternative, we consider the use of linguistic terms. This should make the expression of human assessments and judgments easier (Cabrerizo et al., 2017; Herrera, Alonso, Chiclana, & Herrera-Viedma, 2009; Montero, 2009).
∗ Corresponding author.
∗∗ Corresponding author at: Department of Computer Science and Artificial Intelligence, University of Granada, C/ Periodista Daniel Saucedo Aranda s/n, Granada 18071, Spain.
E-mail addresses: [email protected] (F.J. Cabrerizo), [email protected] (J.A. Morente-Molinera), [email protected] (W. Pedrycz), [email protected] (E. Herrera-Viedma).
https://doi.org/10.1016/j.eswa.2018.01.030
0957-4174/© 2018 Elsevier Ltd. All rights reserved.
Each individual agent evaluates qualitatively how good an alternative (possible solution) is with respect to each of the others. In particular, linguistic pairwise comparisons are assumed to represent the agents' preferences, that is, preference degrees between two particular alternatives are provided using linguistic terms. The alternatives are classified subjectively in the sense that they are assessed by the agents (human beings) and, for this reason, these evaluations can be given in an intuitively appealing way by using linguistic terms (Zadeh, 1973), that is, values expressed in natural language that should allow an ease of usage and a required human consistency (Golunska & Kacprzyk, 2016; Montero, 2009). For example, linguistic terms like Bad, Good, Very Bad, and so on, could be considered (Zadeh, 1973). It is assumed that each agent is capable of providing such an assessment via a linguistic preference relation (Herrera & Herrera-Viedma, 2000). Preference relations are assumed here because they are more accurate than preference elicitation approaches based on non-pairwise comparisons (Millet, 1997). Several computational linguistic models, that is, approaches for dealing with linguistic information in computing with words (Herrera et al., 2009), have been developed, say the model based on membership functions (Zadeh, 1975), the symbolic model based
on ordinal scales (Cabrerizo, Morente-Molinera, Pérez, López-Gijón, & Herrera-Viedma, 2015; Herrera & Herrera-Viedma, 2000; Morente-Molinera, Mezei, Carlsson, & Herrera-Viedma, 2017), the 2-tuple linguistic model (Herrera & Martínez, 2000; Martínez, Ruan, & Herrera, 2010), and the model based on discrete fuzzy numbers (Massanet, Riera, Torrens, & Herrera-Viedma, 2014), among many others. Recently, a new approach making the linguistic information operational through information granulation was proposed (Cabrerizo, Herrera-Viedma, & Pedrycz, 2013; Cabrerizo, Ureña, Pedrycz, & Herrera-Viedma, 2014; Pedrycz & Song, 2014). In this new approach, both the semantics and the distribution of the linguistic terms, instead of being established a priori, are obtained as the result of an optimization task in which a certain performance criterion is maximized or minimized by an appropriate association of the linguistic values with a certain family of information granules. An information granule is a complex information entity that has to be efficiently treated in the computing setting appropriate to the information granulation framework that is assumed (Wang, Pedrycz, Gacek, & Liu, 2016). The individual consistency (Herrera-Viedma, Herrera, Chiclana, & Luque, 2004) of the assessments provided by an agent was considered as the performance index by Cabrerizo et al. (2013) and Pedrycz and Song (2014). Consistency is related to contradictory opinions provided by an individual agent. However, the problem of consistency leads to another question: whether the agents, beyond being individually consistent, also hold similar opinions, which is known as consensus (Cabrerizo et al., 2015; Herrera-Viedma, Cabrerizo, Kacprzyk, & Pedrycz, 2014; Zhang, Dong, & Herrera-Viedma, 2018). Consensus requires that a majority of the group of agents agree on the solution achieved, and that the minority of the group of agents accept to go along with the solution. Because obtaining a consensus solution is essential, it is an important objective in group decision making contexts and, therefore, many authors have studied this area (Cabrerizo, Alonso, & Herrera-Viedma, 2009; Cabrerizo, Moreno, Pérez, & Herrera-Viedma, 2010; Kacprzyk & Zadrozny, 2010; del Moral, Chiclana, Tapia, & Herrera-Viedma, 2018; Palomares, Estrella, Martínez, & Herrera, 2014; Xu, Cabrerizo, & Herrera-Viedma, 2017).

The underlying objective of this study is to propose a general approach to model, and then support, the resolution process of a group decision making situation in which the agents' assessments are represented via linguistic preference relations. The proposed approach is composed of two steps. The first one is indispensable to make the linguistic information operational so that the final solution can be achieved. In this step, the linguistic terms are transformed into formal constructs of information granules, which are then handled within the computing setting that is appropriate to the given information granulation framework. Here, the granulation formalism being considered pertains to intervals, and two optimization criteria are used to arrive at the formalization of the linguistic terms through intervals, namely consistency and consensus. This helps transform the linguistic information into meaningful intervals in such a way that the final ranking of alternatives with the highest consistency and consensus is achieved.
To accomplish high flexibility when formulating this optimization task, the Particle Swarm Optimization (PSO) framework (Kennedy & Eberhart, 1995) is used as a viable technique of global optimization. Once the linguistic information is made operational, the second step consists of applying a selection process (Cabrerizo, Heradio, Pérez, & Herrera-Viedma, 2010; Herrera-Viedma, Chiclana, Herrera, & Alonso, 2007) that obtains the solution according to the assessments given by the group of agents. In this step, the consistency achieved by each agent is used with the aim of assigning higher significance to the most consistent agents. The main originality of the proposed approach is that both the semantics and the distribution of the linguistic terms are not established
a priori; rather, they are obtained from an optimization task where an optimization criterion, which is based on both consistency and consensus, is maximized by a suitable mapping of the linguistic terms onto the information granules. Therefore, solutions with higher levels of consistency and consensus are obtained using this approach and, as a consequence, better results in group decision making scenarios are derived.

The study is organized as follows. Section 2 is related to the granulation of the linguistic information, which forms a core component of the proposed approach. Then, the PSO framework is described, as it is used as an optimization technique, and some aggregation operators are introduced, as they are utilized in the selection process. We formally introduce a general approach to support the resolution process of a group decision making situation in Section 3. An experimental setting and its results are illustrated in Section 4. Then, the characteristics of the developed proposal are analyzed in Section 5. Finally, Section 6 covers the main conclusions and future studies.

2. Preliminaries

This section introduces a granulation of the linguistic terms that are used in the linguistic preference relations. This leads to the operational version required for the additional processing producing the ranking of alternatives. Next, the PSO framework is described. Lastly, both the Ordered Weighted Averaging (OWA) operator (Yager, 1988) and the Induced Ordered Weighted Averaging (IOWA) operator (Yager & Filev, 1999) are introduced.

2.1. Granulation of linguistic information

The pairwise comparison between alternatives is provided via a linguistic value belonging to a linguistic term set S = {s_1, s_2, ..., s_g}, being g its granularity (Herrera et al., 2009). It is usually assumed that there exists a linear order ≺ among the linguistic terms, such that ∀ s_i, s_j ∈ S, s_i ≺ s_j, with j > i, means that the linguistic term s_j denotes a better evaluation than the linguistic term s_i. As an example, a linguistic term set S, with granularity equal to five, could be composed of the following linguistic terms: MW (Much Worse), W (Worse), E (Equal), B (Better), and MB (Much Better). Each linguistic term itself is not operational, which signifies that no further processing can be carried out. This means that a granulation (Song & Pedrycz, 2011), which is defined as the process of forming something into granules, of the linguistic terms is required. A number of formalisms of information granulation may be considered here, including shadowed sets, fuzzy sets, rough sets, and intervals, just to cite some options (Pedrycz, 2013). A certain optimization task, where a certain performance criterion is optimized, may be formulated with the aim of arriving at the operational realization of the granules of information. For instance, the consistency of individual agents was used as the performance index in Pedrycz and Song (2014) and Cabrerizo et al. (2013).

2.2. PSO framework

Kennedy and Eberhart (1995) proposed the PSO framework, which is a population-based meta-heuristic algorithm. It belongs to the class of swarm intelligence algorithms inspired by the social dynamics and the emergent behavior arising in colonies organized by social norms, such as colonies of ants, swarms of bees, flocks of birds, schools of fish, and even human social behavior (Gou et al., 2017; Zhou, Gao, Liu, Mei, & Liu, 2011). In contrast to other evolutionary methods, PSO is an iterative algorithm starting with an initial population of individuals, called
particles, which are distributed in a random manner over the search space as candidate solutions. The set of particles, also known as a swarm, moves through the D-dimensional search space following typical dynamics in search of a global optimum. Each candidate solution or particle in PSO has a velocity, v_i = [v_{i1} v_{i2} ... v_{iD}], and a position, φ_i = [φ_{i1} φ_{i2} ... φ_{iD}] (solution vector). Unlike most existing meta-heuristic search algorithms, PSO memorizes both the best solution vector found by all candidate solutions and the best solution vector found by each candidate solution in the search process, thereby enabling them to move toward the global optimum (Gou et al., 2017). Here, a fitness function is generally used to determine the best solution vector in the D-dimensional search space. The velocity of the jth dimension of the ith candidate solution, v_{ij}, is updated in generation (iteration) t + 1 as:

v_{ij}(t + 1) = \omega \cdot v_{ij}(t) + c_1 \cdot r_1 \cdot (pbest_{ij} - \varphi_{ij}(t)) + c_2 \cdot r_2 \cdot (gbest_{j} - \varphi_{ij}(t))    (1)

The first part of the right-hand side of (1) corresponds to the inertia of the prior velocity, the cognition component is represented by the second part, and the third part symbolizes the cooperation among candidate solutions and represents the social component. Here, φ_{ij} is the jth dimension of the solution vector of the ith candidate solution, pbest_i is the previous best solution of the ith candidate solution, and gbest is the global best candidate solution discovered by all candidate solutions up to this point. The inertia weight ω and the acceleration constants c_1 and c_2 are established by the user. The inertia weight ω has an important effect on balancing the local search and the global search in the algorithm: a low value of ω means the swarm tends toward local search, whereas a high value of ω favors global search. Its value is typically established between 0 and 1. Also, r_1 and r_2 are random numbers uniformly generated in the unit interval. In the basic PSO, the values of the cognition and social components become small with an increasing number of generations (Shahabinejad & Sohrabpour, 2017). With regard to the solution, the jth variable of the ith candidate solution, φ_{ij}, is updated as follows:

\varphi_{ij}(t + 1) = \varphi_{ij}(t) + v_{ij}(t + 1)    (2)

PSO has the advantages of simple implementation, few control parameters, and good convergence performance, among others (Gou et al., 2017). Therefore, it has attracted the attention of many researchers, and the basic PSO and its variants have been successfully applied to continuous optimization problems as well as to discrete optimization problems (Marinakis, Migdalas, & Sifaleras, 2017).
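To make the update rules (1) and (2) concrete, the following is a minimal sketch of a PSO loop in Python. It is not the implementation used in the study; the fitness function, bounds, and parameter values are illustrative placeholders, and all names are ours.

```python
import random

def pso(fitness, dim, n_particles=30, n_generations=100,
        omega=0.2, c1=2.0, c2=2.0, lower=0.0, upper=1.0):
    """Minimal PSO maximizing `fitness` over [lower, upper]^dim (illustrative sketch)."""
    pos = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # best position found by each particle
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]     # best position found by the whole swarm

    for _ in range(n_generations):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update, Eq. (1): inertia + cognitive + social components.
                vel[i][j] = (omega * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                # Position update, Eq. (2), clipped to the search space.
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lower), upper)
            fit = fitness(pos[i])
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit > gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit
    return gbest

# Toy usage: maximize a simple concave function over [0, 1]^4.
best = pso(lambda x: -sum((v - 0.5) ** 2 for v in x), dim=4)
```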
2.3. The OWA operator

The OWA operator was proposed by Yager (1988). In this construct, the values to be aggregated have to be reordered based on their magnitude.

Definition 1. An OWA operator of dimension n is a function φ_W: R^n → R having a weight vector, W = (w_1, ..., w_n) with w_i ∈ [0, 1] and Σ_i w_i = 1, associated with it. This operator aggregates a list of n values {b_1, ..., b_n} as follows:

\phi_W(b_1, \ldots, b_n) = \sum_{i=1}^{n} w_i \cdot b_{\sigma(i)}    (3)

being σ: {1, ..., n} → {1, ..., n} a permutation such that b_{σ(i)} ≥ b_{σ(i+1)}, ∀ i = 1, ..., n − 1, that is, b_{σ(i)} is the ith highest value in the set {b_1, ..., b_n}.

The determination of the weight vector W used in this operator is an important aspect. In particular, considering a set of n criteria that are modeled as fuzzy subsets of a collection of alternatives, in the process of quantifier-guided aggregation this aggregation operator may be employed in the aggregation step to represent the fuzzy majority concept via fuzzy linguistic quantifiers Q (Zadeh, 1983), showing the ratio of criteria "necessary for a good solution" that are satisfied (Yager, 1996). This representation is carried out via the fuzzy linguistic quantifier Q to obtain the weight vector associated with this aggregation operator. For instance, if a regular increasing monotone (RIM) quantifier Q is used, the approach for evaluating the total satisfaction of Q criteria by the alternative x_j is performed by computing the weight vector associated with the OWA operator according to the following expression:

w_i = Q(i/n) - Q((i - 1)/n), \quad i = 1, \ldots, n    (4)

A widely used membership function for a regular increasing monotone quantifier is:

Q(r) = \begin{cases} 0 & \text{if } r < a \\ \dfrac{r - a}{b - a} & \text{if } a \le r \le b \\ 1 & \text{if } r > b \end{cases}    (5)

with a, b, r ∈ [0, 1]. Some examples of these quantifiers are most, at least half, and as many as possible, whose parameters (a, b) are (0.3, 0.8), (0, 0.5), and (0.5, 1), respectively. If a fuzzy linguistic quantifier Q is utilized to calculate the weight vector associated with the OWA operator, then the operator is denoted by φ_Q.

2.4. The IOWA operator

The IOWA operator (Yager & Filev, 1999) was defined as a variant of the OWA operator allowing for a distinct rearrangement of the arguments that are aggregated.

Definition 2. An IOWA operator of dimension n is a function Φ_W: (R × R)^n → R having a weight vector, W = (w_1, ..., w_n) with w_i ∈ [0, 1] and Σ_i w_i = 1, associated with it. This operator aggregates the set of second arguments of a list of n two-tuples {⟨u_1, b_1⟩, ..., ⟨u_n, b_n⟩} as follows:

\Phi_W(\langle u_1, b_1 \rangle, \ldots, \langle u_n, b_n \rangle) = \sum_{i=1}^{n} w_i \cdot b_{\sigma(i)}    (6)

being σ a permutation of {1, ..., n} such that u_{σ(i)} ≥ u_{σ(i+1)}, ∀ i = 1, ..., n − 1, that is, ⟨u_{σ(i)}, b_{σ(i)}⟩ is the two-tuple with u_{σ(i)} the ith highest value in the set {u_1, ..., u_n}. In (6), the reordering of the collection of first arguments, {u_1, ..., u_n}, based on their values, induces the reordering of the collection of the arguments to be aggregated, {b_1, ..., b_n}, related to them. For this reason, {b_1, ..., b_n} are called the values of the argument variable and {u_1, ..., u_n} the values of the order inducing variable (Yager & Filev, 1999). In the case of the IOWA operator, the same method used to obtain the weight vector associated with the OWA operator may also be applied. Then, if the weight vector associated with the IOWA operator is calculated via a fuzzy linguistic quantifier Q, the operator is denoted by Φ_Q.
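The following sketch, written under the assumption of the RIM quantifier of Eq. (5), illustrates how the quantifier-derived weights of Eq. (4) are computed and how the OWA and IOWA operators of Definitions 1 and 2 reorder their arguments. The function and variable names are ours, not the paper's.

```python
def rim_quantifier(a, b):
    """Regular increasing monotone quantifier Q(r) of Eq. (5)."""
    def Q(r):
        if r < a:
            return 0.0
        if r > b:
            return 1.0
        return (r - a) / (b - a)
    return Q

def quantifier_weights(Q, n):
    """Weights w_i = Q(i/n) - Q((i-1)/n) of Eq. (4)."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    """OWA operator, Eq. (3): arguments reordered by their own magnitude."""
    ordered = sorted(values, reverse=True)
    return sum(w * b for w, b in zip(weights, ordered))

def iowa(pairs, weights):
    """IOWA operator, Eq. (6): arguments reordered by the order-inducing values."""
    ordered = sorted(pairs, key=lambda ub: ub[0], reverse=True)
    return sum(w * b for w, (_, b) in zip(weights, ordered))

# Example with the quantifier "most", defined by (a, b) = (0.3, 0.8).
Q_most = rim_quantifier(0.3, 0.8)
w = quantifier_weights(Q_most, 4)
print(owa([0.9, 0.2, 0.7, 0.5], w))
print(iowa([(0.89, 0.9), (0.83, 0.2), (0.86, 0.7), (0.81, 0.5)], w))
```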
3. Group decision making approach

Given a collection of alternatives, X = {x_1, x_2, ..., x_n}, to be taken into consideration in a certain decision making problem by a group of agents, A = {a_1, a_2, ..., a_m}, the goal is to classify the alternatives by assigning them some preference degrees. In particular, linguistic pairwise comparisons between the alternatives are assumed in this study.
Then, in the estimation process of the ranking of the alternatives, the starting point is given by the entries of the linguistic preference relations, which are acquired by collecting the pairwise assessments provided by the agents. A linguistic preference relation, PR ⊂ X × X, is characterized by a mapping μ_PR: X × X → S, with μ_PR(x_i, x_j) = pr_{ij} understood as the degree to which the alternative x_i is preferred to the alternative x_j. This preference degree is provided by using a linguistic term belonging to a linguistic term set S (Herrera et al., 2009). Prior to making any evaluation, the linguistic term set containing the possible linguistic terms is provided to the group of agents. When each agent, a_i, has offered her/his linguistic preference relation, PR^i, the resolution process is applied. First, the linguistic terms are made operational through a granulation and, second, a selection process is employed to produce the ranking of the alternatives. In what follows, both steps are described in detail.

3.1. Making the linguistic information operational

To deal with the linguistic terms, the first step of the proposed approach consists of mapping the linguistic terms to the corresponding information granules. In this study, we consider a granulation of the linguistic terms expressed in the language of intervals. In light of the interval-valued form of the information granules, each linguistic term is abstracted to the form of a certain interval [p_j, p_{j+1}] situated in the closed interval [0, 1]. Therefore, given a linguistic term set whose granularity is equal to g, a family of intervals, I_1, I_2, ..., I_g, is formed and completely defined by the vector of cutoff points, p = [p_1 p_2 ... p_{g−1}], where 0 < p_1 < p_2 < ... < p_{g−1} < 1 and I_1 = [0, p_1), I_2 = [p_1, p_2), ..., I_j = [p_{j−1}, p_j), ..., I_g = [p_{g−1}, 1]. The formation of the linguistic terms as intervals is performed by the optimization of a certain performance index, which is composed of two complementary criteria:

1. Consensus achieved among the group of agents, O_1. Consensus is considered because, sometimes, solutions that are not well accepted by some agents could be obtained and, in these cases, the agents could think that their assessments have not been appropriately considered. As a consequence, the agents might refuse the solution obtained (Saint & Lawson, 1994).
2. Consistency of each individual agent, O_2. Consistent opinions (Cutello & Montero, 1994), that is, opinions not exhibiting any contradiction, are more relevant than opinions containing contradictions. Therefore, to avoid misleading solutions in this kind of decision problem, the study of consistency is very important.

The solution obtained is regarded as better if both the consensus and the individual consistency are high. Therefore, the quality of a given vector of cutoff points is assessed via a performance criterion O, which is composed of the weighted averaging of the consensus and the consistency criteria:
O = δ · O1 + ( 1 − δ ) · O2
(7)
being δ ∈ [0, 1], a parameter used to establish a trade-off between the consistency level obtained at the individual agent and the consensus achieved within the group (Herrera-Viedma, Alonso, Chiclana, & Herrera, 2007). A low value of δ means that higher importance is given to the consistency obtained at the level of individual agent. In particular, when δ = 0, we are only interested in the consistency level obtained at the individual agent. However, δ is usually set up with a value higher than 0.5 to pay more attention to the consensus at the group level. The objective is to maximize this performance criterion that is utilized as optimization criterion. To maximize the optimization criterion O, the PSO framework is employed. What is essential in this setting is the determination of an appropriate association between the particle’s representation and the problem space. Here, each particle is represented
by a vector of cutoff points placed in the closed interval [0, 1]. As an example, considering the linguistic term set S provided in Section 2.1, the following mapping is formed: MW: [0, p_1), W: [p_1, p_2), E: [p_2, p_3), B: [p_3, p_4), and MB: [p_4, 1], being p_1, p_2, p_3, and p_4 the cutoff points. In particular, if g words or linguistic terms are considered to be used by the group of agents to express their assessments, each particle is constituted by g − 1 cutoff points. Therefore, in this example, a particle is represented as [p_1 p_2 p_3 p_4]. The particle's quality, that is, the ability of the particle to solve the problem, is measured via a fitness function during the movement of the particle. In this study, the objective is the maximization of the optimization criterion O by modifying the location of the cutoff points in the [0, 1] interval. With regard to the determination of the fitness function, we have to bear in mind that a linguistic preference relation, PR, is utilized by each agent to provide her/his judgments. Due to the fact that each information granule comes in the form of an interval, we have to consider that preference relations composed of interval-valued entries should return single numeric values of the fitness function. Therefore, a collection of preference relations, R^1, ..., R^m, is formed by sampling the linguistic preference relations, PR^1, ..., PR^m, provided by the agents. In R^1, ..., R^m, each entry is represented by a single numerical value generated from the continuous uniform distribution defined over the sub-interval of [0, 1] corresponding to the linguistic term of the entry in the original linguistic preference relation. For example, let us suppose that, in the linguistic preference relation provided by the agent a_1, the entry pr^1_{12} is equal to Better, and that the interval associated with the linguistic term Better is [0.72, 0.85). Then, r^1_{12} is computed by the continuous uniform distribution on [0.72, 0.85). The fitness function, f, which is applied to assess the quality of each particle, is determined as the arithmetic mean of the values of the optimization criterion O computed over each set of N samples:
f = \frac{1}{N} \sum_{i=1}^{N} O_i    (8)
Here, the arithmetic mean has been used. However, other compensative aggregation functions could also be used. Given the nature of each entry of the preference relations, R^1, ..., R^m, which is a single numeric value placed in the closed interval [0, 1], the method proposed by Herrera-Viedma et al. (2007) to obtain the consensus when fuzzy preference relations are used by the group of agents is employed to calculate O_1. In particular, the consensus degree on the relation, cr, is used as O_1 (see Appendix A for more details). On the other hand, to obtain the consistency value of each individual agent, the methodology presented by Herrera-Viedma et al. (2007) is utilized (see Appendix B). In particular, O_2 is computed as the averaging of the consistency levels associated with the individual agents, that is,

O_2 = \frac{1}{m} \sum_{i=1}^{m} cl^i

being cl^i the consistency level associated with the agent a_i.
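As a rough sketch of how a particle (a vector of cutoff points) could be evaluated, the fragment below maps the linguistic terms to intervals, samples numeric preference relations from them, and averages the performance index over N samples, following Eqs. (7) and (8). The consensus and consistency routines are assumed to exist elsewhere (e.g., implementations of Appendices A and B); their names, and all other identifiers, are placeholders of ours.

```python
import random

TERMS = ["MW", "W", "E", "B", "MB"]  # linguistic term set of Section 2.1

def intervals_from_cutoffs(cutoffs):
    """Map the g terms to intervals defined by the g-1 cutoff points in [0, 1]."""
    points = [0.0] + list(cutoffs) + [1.0]
    return {t: (points[i], points[i + 1]) for i, t in enumerate(TERMS)}

def sample_relation(linguistic_pr, intervals):
    """Replace each linguistic entry by a value drawn uniformly from its interval (diagonal left as None)."""
    return [[None if e is None else random.uniform(*intervals[e]) for e in row]
            for row in linguistic_pr]

def fitness(cutoffs, linguistic_prs, consensus_fn, consistency_fn, delta=0.75, n_samples=500):
    """Average of O = delta * O1 + (1 - delta) * O2 over n_samples samplings."""
    intervals = intervals_from_cutoffs(cutoffs)
    total = 0.0
    for _ in range(n_samples):
        sampled = [sample_relation(pr, intervals) for pr in linguistic_prs]
        o1 = consensus_fn(sampled)                   # consensus degree cr (Appendix A)
        cls = [consistency_fn(r) for r in sampled]   # consistency level cl of each agent (Appendix B)
        o2 = sum(cls) / len(cls)
        total += delta * o1 + (1 - delta) * o2
    return total / n_samples
```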
3.2. Selection process

When each agent has expressed her/his linguistic preference relation and the linguistic information has been made operational, the ranking of the alternatives can be obtained by carrying out a selection process (Cabrerizo et al., 2010; Herrera-Viedma et al., 2007) composed of two consecutive steps:

• First, a collective preference relation is defined to represent the overall degree of preference between the alternatives. This step is referred to as aggregation.
• Second, the overall degrees of preference between every pair of alternatives are transformed to classify the alternatives from the best to the worst. This step is called exploitation.

Both steps are described in detail below.
3.2.1. Aggregation

A collective preference relation is calculated in this step via the aggregation of the linguistic preference relations given by the group of agents. Again, because the linguistic terms are formed as intervals, each entry of the linguistic preference relation PR^k = (pr^k_{ij}) given by the agent a_k is sampled N times, with the arithmetic mean utilized as the value of the corresponding entry in the preference relation R̄^k = (r̄^k_{ij}). In this aggregation, the consistency at the level of the individual agent should be considered in the sense that more importance should be associated with the opinions given by the most consistent agents. This can be modeled by using the consistency achieved by each agent as the order inducing value of the IOWA operator used to aggregate the preferences (Yager & Filev, 1999). Therefore, the collective preference relation, R̄^c = (r̄^c_{ij}), is computed via an IOWA operator in which the ordering of the agents from the most consistent to the least consistent one induces the ordering of the preference degrees aggregated:

\bar{r}^c_{ij} = \Phi_Q(\langle cl^1, \bar{r}^1_{ij} \rangle, \ldots, \langle cl^m, \bar{r}^m_{ij} \rangle)    (9)
3.2.2. Exploitation

The alternatives are ranked in this step from best to worst. This is done by using two choice degrees of alternatives calculated via the OWA operator and the fuzzy majority concept: the quantifier-guided non-dominance degree (QGNDD) and the quantifier-guided dominance degree (QGDD) (Cabrerizo et al., 2010; Herrera-Viedma et al., 2007), which act over R̄^c to arrange the alternatives from the best to the worst. From this classification, the solution of the decision problem under consideration is obtained.

• QGNDD_i: It measures the degree to which an alternative is not dominated by a fuzzy majority of the rest of the alternatives. The following expression is used to compute it:

QGNDD_i = \phi_Q(1 - \bar{r}^s_{1i}, 1 - \bar{r}^s_{2i}, \ldots, 1 - \bar{r}^s_{(i-1)i}, 1 - \bar{r}^s_{(i+1)i}, \ldots, 1 - \bar{r}^s_{ni})    (10)

where the degree to which the alternative x_i is strictly dominated by the alternative x_j is represented by r̄^s_{ji} = max{r̄^c_{ji} − r̄^c_{ij}, 0}.

• QGDD_i: It measures the dominance that one alternative has over the remaining ones in a fuzzy majority sense. The next expression is used to compute it:

QGDD_i = \phi_Q(\bar{r}^c_{i1}, \bar{r}^c_{i2}, \ldots, \bar{r}^c_{i(i-1)}, \bar{r}^c_{i(i+1)}, \ldots, \bar{r}^c_{in})    (11)

Two different policies can be carried out to apply the QGNDD and the QGDD over the collection of alternatives X:

• A conjunctive policy, in which both the QGNDD and the QGDD are applied to the collection of alternatives. It obtains two sets of alternatives, the intersection of these two sets being the final solution set of alternatives.
• A sequential policy, in which one of the choice degrees, QGNDD or QGDD, is chosen and then applied to the collection of alternatives to get the solution set of alternatives. If this solution set is composed of two or more alternatives, the other choice degree is applied to this set of alternatives to choose the alternative with the best second choice degree.
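A minimal sketch of the exploitation step, under the assumption that the collective relation is given as an n×n list of numbers (diagonal ignored) and that the weights are derived from a quantifier as in Eq. (4); the helper names are ours.

```python
def owa(values, weights):
    """OWA aggregation, Eq. (3); weights are quantifier-derived and of length len(values)."""
    return sum(w * b for w, b in zip(weights, sorted(values, reverse=True)))

def qgdd(collective, weights):
    """Quantifier-guided dominance degree of each alternative, Eq. (11)."""
    n = len(collective)
    return [owa([collective[i][j] for j in range(n) if j != i], weights) for i in range(n)]

def qgndd(collective, weights):
    """Quantifier-guided non-dominance degree of each alternative, Eq. (10)."""
    n = len(collective)
    degrees = []
    for i in range(n):
        # 1 - max(r_ji - r_ij, 0) over all j != i: degree to which x_i is not strictly dominated by x_j.
        non_dominance = [1.0 - max(collective[j][i] - collective[i][j], 0.0)
                         for j in range(n) if j != i]
        degrees.append(owa(non_dominance, weights))
    return degrees
```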
4. Experimental studies

The resolution of a decision problem using the developed approach is illustrated in this section. First, the parameter settings of the PSO framework are provided.

4.1. Parameter settings

As a consequence of intense experimentation, the following values were assigned to the parameters involved in the PSO framework:

• The swarm was formed by 200 candidate solutions. This value was selected because identical or very similar results were obtained by the PSO in subsequent runs.
• The PSO was run for 400 generations. After this number of generations, it was observed that the values returned by the fitness function did not change further.
• The acceleration constants in (1) were set to 2, while the inertia weight was set to 0.2. These values were used as they are commonly considered in the existing approaches.
• The parameter δ in the performance index was set to 0.75, as we wish to make the consensus criterion the most important one.
• N was set to 500 in (8) because similar results were obtained with higher values of N, so further increases of N are not beneficial.
On the other hand, the decision problem is composed of four agents, A = {a_1, a_2, a_3, a_4}, and a collection of five alternatives, X = {x_1, x_2, x_3, x_4, x_5}. In addition, the agents provide their assessments over the alternatives via linguistic preference relations utilizing the linguistic term set given in Section 2.1.

4.2. Example

Suppose that the agents provide the following linguistic preference relations:
PR^1 = \begin{pmatrix} - & W & E & E & B \\ MB & - & MB & MB & E \\ E & W & - & MB & W \\ E & W & W & - & MW \\ MW & E & B & MB & - \end{pmatrix}

PR^2 = \begin{pmatrix} - & E & W & MB & E \\ E & - & MB & MB & MW \\ B & MW & - & E & MB \\ MW & MW & E & - & MW \\ E & B & MW & MB & - \end{pmatrix}

PR^3 = \begin{pmatrix} - & MW & B & W & MB \\ B & - & E & W & W \\ W & E & - & E & MW \\ B & B & E & - & B \\ MW & B & MB & W & - \end{pmatrix}

PR^4 = \begin{pmatrix} - & W & MB & MB & B \\ B & - & MW & MW & E \\ MW & MB & - & B & W \\ MW & MB & W & - & MB \\ W & E & B & MW & - \end{pmatrix}
4.2.1. Making the linguistic information operational

Once all the agents have expressed their linguistic preference relations, the developed approach to make the linguistic information operational is carried out. The values returned by the fitness function in consecutive generations of the PSO are reported in Fig. 1a. The most notable improvement of the performance criterion occurs during the first generations of the PSO; afterwards, it continues to be maximized slowly until the last generation. The best value of the performance criterion returned by the PSO is equal to 0.806, with a standard deviation of 0.023. The optimal cutoff points reported by the PSO for the set S are 0.59, 0.66, 0.74, and 0.82. This means that the intervals corresponding to the linguistic terms are: MW: [0, 0.59), W: [0.59, 0.66), E: [0.66, 0.74), B: [0.74, 0.82), and MB: [0.82, 1]. The progression obtained in successive generations of the optimization, measured in terms of both the consensus and the consistency at the level of the individual agent, is shown in Fig. 1b. Given that a joint processing of the linguistic values utilized by the agents to provide their evaluations is considered, when new optimal cutoff points are returned by the PSO for the linguistic terms, both the consensus and the consistency levels change in such a way that some of them decrease and others increase. However, the values returned by the fitness function always increase (see Fig. 1a).

[Fig. 1. Plots of the values returned by f and the criteria in consecutive PSO generations: (a) fitness function f obtained in consecutive PSO generations (δ = 0.75); (b) criteria cr, cl1, cl2, cl3, and cl4 in consecutive PSO generations (δ = 0.75).]
4.2.2. Selection process

Once the linguistic terms are made operational, the ranking of the alternatives from best to worst can be obtained by applying the selection process. First, the preference relations R̄^1, R̄^2, R̄^3, and R̄^4 are obtained:
\bar{R}^1 = \begin{pmatrix} - & 0.62 & 0.70 & 0.70 & 0.78 \\ 0.91 & - & 0.91 & 0.91 & 0.70 \\ 0.70 & 0.62 & - & 0.91 & 0.62 \\ 0.70 & 0.62 & 0.62 & - & 0.29 \\ 0.30 & 0.70 & 0.78 & 0.90 & - \end{pmatrix}

\bar{R}^2 = \begin{pmatrix} - & 0.70 & 0.62 & 0.91 & 0.70 \\ 0.70 & - & 0.91 & 0.91 & 0.30 \\ 0.79 & 0.31 & - & 0.70 & 0.91 \\ 0.29 & 0.29 & 0.70 & - & 0.30 \\ 0.70 & 0.78 & 0.29 & 0.91 & - \end{pmatrix}

\bar{R}^3 = \begin{pmatrix} - & 0.30 & 0.78 & 0.62 & 0.91 \\ 0.78 & - & 0.70 & 0.62 & 0.62 \\ 0.62 & 0.70 & - & 0.70 & 0.29 \\ 0.78 & 0.78 & 0.70 & - & 0.78 \\ 0.30 & 0.78 & 0.91 & 0.63 & - \end{pmatrix}

\bar{R}^4 = \begin{pmatrix} - & 0.63 & 0.91 & 0.91 & 0.78 \\ 0.78 & - & 0.30 & 0.28 & 0.70 \\ 0.29 & 0.91 & - & 0.78 & 0.62 \\ 0.31 & 0.91 & 0.62 & - & 0.91 \\ 0.62 & 0.70 & 0.78 & 0.30 & - \end{pmatrix}
On the one hand, we aggregate the preference relations R̄^1, R̄^2, R̄^3, and R̄^4 via the IOWA operator, making use of the fuzzy linguistic quantifier "most", modeled by the RIM quantifier Q(r) = r^{1/2}. Using (4), this fuzzy linguistic quantifier produces a weight vector composed of four values to compute each r̄^c_{ij} value. For instance, the r̄^c_{21} value is calculated as follows:

w_1 = Q(1/4) - Q(0) = 0.50 - 0 = 0.50
w_2 = Q(2/4) - Q(1/4) = 0.71 - 0.50 = 0.21
w_3 = Q(3/4) - Q(2/4) = 0.87 - 0.71 = 0.16
w_4 = Q(1) - Q(3/4) = 1 - 0.87 = 0.13

cl^1 = 0.89, \quad cl^2 = 0.83, \quad cl^3 = 0.86, \quad cl^4 = 0.81

\sigma(1) = 1, \quad \sigma(2) = 3, \quad \sigma(3) = 2, \quad \sigma(4) = 4

\bar{r}^c_{21} = w_1 \cdot \bar{r}^1_{21} + w_2 \cdot \bar{r}^3_{21} + w_3 \cdot \bar{r}^2_{21} + w_4 \cdot \bar{r}^4_{21} = 0.83
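As a quick numeric check of this aggregation (a sketch; the variable names are ours), the ordering induced by the consistency levels (cl^1 ≥ cl^3 ≥ cl^2 ≥ cl^4) together with the quantifier weights reproduces the value 0.83:

```python
weights = [0.50, 0.21, 0.16, 0.13]   # w1..w4 from Q(r) = r**0.5
pairs = [(0.89, 0.91),               # (cl^1, r̄_21 of agent 1)
         (0.83, 0.70),               # (cl^2, r̄_21 of agent 2)
         (0.86, 0.78),               # (cl^3, r̄_21 of agent 3)
         (0.81, 0.78)]               # (cl^4, r̄_21 of agent 4)
ordered = sorted(pairs, key=lambda p: p[0], reverse=True)   # induced by consistency
r21_c = sum(w * b for w, (_, b) in zip(weights, ordered))
print(round(r21_c, 2))   # 0.83
```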
Then, the collective fuzzy preference relation becomes:

PR^c = \begin{pmatrix} - & 0.57 & 0.73 & 0.78 & 0.79 \\ 0.83 & - & 0.79 & 0.77 & 0.62 \\ 0.64 & 0.62 & - & 0.81 & 0.60 \\ 0.60 & 0.64 & 0.65 & - & 0.47 \\ 0.40 & 0.73 & 0.73 & 0.77 & - \end{pmatrix}
Using, for example, the QGDD and the weight vector W = {0.50, 0.21, 0.16, 0.13} obtained previously, the following values are produced:

QGDD_1 = 0.75, \quad QGDD_2 = 0.78, \quad QGDD_3 = 0.72, \quad QGDD_4 = 0.62, \quad QGDD_5 = 0.71

Finally, applying, for example, the sequential policy, the ranking of alternatives obtained is: x2 ≻ x1 ≻ x3 ≻ x5 ≻ x4.

5. Discussion

We analyze the performance and some possible advantages and drawbacks of the developed approach.

5.1. Uniform distribution of cutoff points

A vector of cutoff points uniformly distributed over the unit interval, i.e., [0.20 0.40 0.60 0.80], is considered with the purpose of comparing the results achieved by the proposed approach. Using these cutoff points, the performance index assumes an average value of 0.720, with a standard deviation of 0.027. If we compare this value with the value returned by the proposed approach, which is equal to 0.806, we observe that the performance index now achieves a lower value. Fig. 2 shows the histogram of the values assumed by the performance index. It can be observed that, in the case of the uniform distribution of the cutoff points, a longer tail of the distribution extends toward lower values of O. In addition, the following ranking of alternatives is obtained by using a uniform distribution of cutoff points: x1 ≻ x2 ≻ x5 ≻ x3 ≻ x4, which is different from the ranking obtained by our proposed approach (see Section 4.2.2).

[Fig. 2. Histogram of performance values (frequency vs. performance index) for the optimized and the uniform distributions of cutoff points.]
5.2. Performance of the approach for chosen values of δ

The impact of the values of the parameter δ appearing in the composite criterion O on the performance of the developed approach is depicted in Fig. 3. On the one hand, for δ = 0, the optimization concerns exclusively the consistency achieved at the individual agent level (criterion O2) and, therefore, a higher value of the criterion O2 is obtained (see Fig. 3b). On the other hand, when δ assumes nonzero values, the optimization criterion O2 reaches lower values. This is expected because the performance criterion maximized by the PSO is not O2 itself but O, which also incorporates the effect of the consensus level achieved among the agents. In particular, for δ = 1, the optimization concerns only the consensus level achieved among the agents (criterion O1) and, therefore, higher values of the criterion O1 are obtained. Fig. 3a includes the values obtained by O1 for different values of the parameter δ. In contrast to the criterion O2, the higher the value of δ, the higher the value of O1, as more importance is given to the consensus criterion. On the other hand, Table 1 shows the ranking of alternatives obtained for different values of δ. As we can see, different values of δ provide different rankings of alternatives.

[Fig. 3. Plots of O1 and O2 for chosen values of δ: (a) O1 for chosen values of δ; (b) O2 for chosen values of δ.]

Table 1
Ranking of alternatives obtained by different values of δ.

Value of δ    Ranking of alternatives
0.0           x3 ≻ x1 ≻ x2 ≻ x4 ≻ x5
0.25          x1 ≻ x2 ≻ x3 ≻ x4 ≻ x5
0.50          x1 ≻ x2 ≻ x3 ≻ x5 ≻ x4
0.75          x2 ≻ x1 ≻ x3 ≻ x5 ≻ x4
1.0           x2 ≻ x1 ≻ x5 ≻ x3 ≻ x4

5.3. Consensus reaching processes

The developed approach generates a final ranking of alternatives with the highest consensus and consistency possible, according to the linguistic preference relations provided by the agents. It has been observed that the proposed approach produces a higher value of the performance index than in the case of using a vector of predefined cutoff points. However, there exists a limit beyond which both the consensus and the consistency values cannot be increased. To overcome this limitation, the agents should modify their initial preferences. Therefore, a consensus reaching process should be added to the approach (Butler & Rothstein, 1987; Herrera-Viedma et al., 2014). It is defined as a negotiation process composed of several discussion rounds in which the agents agree to change their judgments according to the advice expressed by a moderator, who is aware of both the consistency and the consensus levels achieved in each discussion round of the process via the computation of both consistency and consensus measures.

5.4. Computational linguistic models

We compare here some classical linguistic computational models with the proposed approach.

• The model based on membership functions. The Extension Principle is used by this approach (Degani & Bortolan, 1988). In such a way, the result of an aggregation function over a set of linguistic terms is a fuzzy number that usually does not have an associated linguistic term in the linguistic term set. Hence, we need to apply an approximation function to associate it with a particular linguistic term in the linguistic term set (the retranslation problem). On the one hand, the use of the approximation function introduces a loss of information. On the other hand, we need to establish a priori both the semantics and the distribution of the linguistic terms. In such a way, solutions with lower levels of consensus and consistency could be obtained.
• The model based on type-2 fuzzy sets. Type-2 fuzzy sets are used by this model to represent the linguistic terms (Mendel, 2002). This model also suffers from the retranslation problem and, therefore, the type-2 fuzzy set obtained from an aggregation operator must be mapped into a linguistic term. In addition, this model needs to define a priori both the semantics and the distribution of the linguistic terms.
• The model based on ordinal scales. This model uses a convex combination of linguistic terms that acts directly over the indexes of the linguistic terms (Herrera, Herrera-Viedma, & Verdegay, 1997). In this model, it is usually assumed that the cardinality of the set is odd and that the linguistic terms are symmetrically placed around a middle term. As a consequence, the distribution of the linguistic terms is known and uniform. As we have seen in the above example, when the distribution of the linguistic terms is uniform, a solution with lower levels of consensus and consistency is usually achieved. Furthermore, as the result of the aggregation is not usually an integer, that is, it does not correspond to one of the linguistic terms in the set, it also requires an approximation function. Therefore, this model suffers from loss of information.
• The 2-tuple linguistic computational model. This model is a symbolic model carrying out processes of computing with words without loss of information (Herrera & Martínez, 2000). To do so, the results are expressed in the initial linguistic domain, extended to a pair of values including the linguistic term and additional information. However, in this model, the distribution of the linguistic term set is uniform and it must be defined a priori.
6. Concluding remarks

We have presented a resolution approach for a group decision making problem in which linguistic pairwise comparisons between the alternatives have been used to represent the agents' assessments. It is important to emphasize that, when linguistic evaluations are used, the linguistic terms have to be made operational and, therefore, the step of information granulation is essential. In particular, we have shown the algorithmic framework and the methodology for associating the linguistic terms with the corresponding intervals (information granules) in such a way that a final solution with the highest consensus and consistency is determined. We conclude with some suggestions for future studies:

• To test the proposed approach, an example composed of four agents and five alternatives has been shown. However, it would be interesting to scale up the number of agents and alternatives.
• The proposed approach assumes static sets of alternatives. However, it should be adapted to deal with dynamic decision frameworks (Pérez, Cabrerizo, & Herrera-Viedma, 2010), that is, decision processes in which, throughout the decision making time, some new alternatives might appear and others disappear. This could happen because better alternatives to solve the problem are found, some alternatives are evaluated poorly, or, while the agents are discussing the solution, the availability of some alternatives changes.
• For the clarity and conciseness of the presentation, we have focused on the realization of information granules modeled as intervals; however, the underlying conceptual framework is equally suitable to cope with other formal realizations of information granules (say, rough sets, fuzzy sets, and others).
Acknowledgment The authors would like to acknowledge FEDER financial support from the Projects TIN2013-40658-P and TIN2016-75850-P. Appendix A. Consensus measures The consensus level achieved among the group of agents is obtained by measuring the similarity among the assessments that the agents have provided. To do so, the coincidence concept is usually utilize to calculate soft consensus measures (Herrera et al., 1997). In particular, if preference relations are employed, soft consensus measures are provided at the three different levels of a preference relation: pairs of alternatives, alternatives, and relation. Then, soft consensus measures are obtained in the following way (Cabrerizo et al., 2010): 1. For each pair of agents, (ak , al ) (k = 1, . . . , m − 1, l = k + 1, . . . , m ), a similarity matrix, SMkl = (smkl ), is determined as: ij k l smkl i j = 1 − D ( ri j , ri j )
(A.1)
with D being a distance function (Deza & Deza, 2009). In this study, as we assume that the entries of the preference relations are single numeric values in the [0, 1] interval, the following distance function may be utilized:

D(r^k_{ij}, r^l_{ij}) = |r^k_{ij} - r^l_{ij}|    (A.2)
viz. D: [0, 1] × [0, 1] → [0, 1].

2. An aggregation function φ is utilized to aggregate all the similarity matrices. For instance, the average can be utilized, although, in accordance with the specific properties to implement, other aggregation functions may be applied. As a result, a consensus matrix CM = (cm_{ij}) is computed as follows:

cm_{ij} = \phi(sm^{kl}_{ij}), \quad k = 1, \ldots, m - 1, \; l = k + 1, \ldots, m    (A.3)
3. Finally, the soft consensus measures are computed at the three different levels of a preference relation:
(a) Consensus degree on the pairs of alternatives, cp_{ij}. It evaluates the consensus degree among all the agents on a particular pair of alternatives (x_i, x_j). It is determined by the consensus matrix CM:

cp_{ij} = cm_{ij}    (A.4)

(b) Consensus degree on the alternatives, ca_i. It evaluates the consensus degree among all the agents on an alternative (x_i). It is determined via the aggregation of the consensus degrees of all the pairs of alternatives:

ca_i = \phi(cp_{ij}), \quad j = 1, \ldots, n \wedge j \neq i    (A.5)

As before, the arithmetic mean can be used as the aggregation function φ.

(c) Consensus degree on the relation, cr. It gives the global consensus degree among the evaluations conveyed by all the agents. It is determined via the aggregation of all the consensus degrees at the level of alternatives:

cr = \phi(ca_i), \quad i = 1, \ldots, n    (A.6)

As before, the arithmetic mean can also be utilized as the aggregation function φ. To measure the consensus achieved among all the agents, cr is used. In particular, the closer cr is to 1, the greater the consensus among all the agents' evaluations.
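A compact sketch of these three steps (pairwise similarity matrices, arithmetic-mean aggregation into a consensus matrix, and consensus degrees at the three levels). The use of the arithmetic mean as φ follows the text above; the function name, the representation of the relations as n×n lists of numbers, and the handling of the (ignored) diagonal are our assumptions.

```python
def consensus_degree(relations):
    """Consensus degree cr of Eq. (A.6) from a list of numeric preference relations."""
    m, n = len(relations), len(relations[0])
    pairs = [(k, l) for k in range(m - 1) for l in range(k + 1, m)]
    # Steps 1-2: consensus matrix as the average of the pairwise similarity matrices (A.1)-(A.3).
    cm = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            sims = [1.0 - abs(relations[k][i][j] - relations[l][i][j]) for k, l in pairs]
            cm[i][j] = sum(sims) / len(sims)
    # Step 3: consensus on the alternatives (A.5) and on the relation (A.4)/(A.6).
    ca = [sum(cm[i][j] for j in range(n) if j != i) / (n - 1) for i in range(n)]
    return sum(ca) / n
```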
Appendix B. Consistency level

A number of properties to be satisfied by the preference relations have been proposed to make a reasonable decision (Herrera-Viedma et al., 2004). Among them, the additive transitivity property has been widely utilized, as the verification of consistency can be easily completed. In particular, in the case of fuzzy preference relations, the additive transitivity was shown to be the parallel concept of Saaty's consistency property for multiplicative preference relations (Herrera-Viedma et al., 2004; Saaty, 1994). Tanino (1984) presented a formulation of the additive transitivity:

(r_{ij} - 0.5) + (r_{jk} - 0.5) = (r_{ik} - 0.5), \quad \forall i, j, k \in \{1, \ldots, n\}    (B.1)

Additive transitivity implies additive reciprocity (r_{ij} + r_{ji} = 1, ∀ i, j). Therefore, we can rewrite it in the form:

r_{ik} = r_{ij} + r_{jk} - 0.5, \quad \forall i, j, k \in \{1, \ldots, n\}    (B.2)

If, for every three alternatives of the decision problem, say x_i, x_j, x_k ∈ X, their related degrees of preference r_{ij}, r_{jk}, r_{ik} fulfill (B.2), the fuzzy preference relation is considered "additive consistent". An estimated value of a preference degree can be obtained from other degrees of preference using (B.2). In particular, the estimated value of r_{ik} (i ≠ k) may be obtained using an intermediate alternative x_j as follows (Chiclana, Mata, Martinez, Herrera-Viedma, & Alonso, 2008; Herrera-Viedma et al., 2007; Herrera-Viedma et al., 2004):

er^j_{ik} = r_{ij} + r_{jk} - 0.5    (B.3)

The overall estimated value er_{ik} of the preference degree r_{ik} is computed as the arithmetic mean of all possible values er^j_{ik}:

er_{ik} = \frac{1}{n - 2} \sum_{j=1;\, j \neq i,k}^{n} er^j_{ik}    (B.4)

The value |er_{ik} − r_{ik}| is utilized as a measure of the error between a degree of preference and its estimated value (Herrera-Viedma et al., 2007). If er^j_{ik} = r_{ik} ∀ j, then the information given is completely consistent. Nevertheless, the agents are not always fully consistent and, therefore, the evaluation expressed by an agent may not verify (B.2), and some of the estimated values er^j_{ik} may not lie in the [0, 1] interval. From (B.3), it can be observed that the minimum value of any of the degrees of preference er^j_{ik} is −0.5, while the maximum one is 1.5. With the aim of normalizing the expression domains, the final estimated value of r_{ik} (i ≠ k), cp_{ik}, is defined as follows:

cp_{ik} = \mathrm{med}\{0, 1, er_{ik}\}    (B.5)

where med is the median function. The error, assuming values in [0, 1], between a degree of preference, r_{ik}, and its final estimated value, cp_{ik}, is:

\varepsilon r_{ik} = |cp_{ik} - r_{ik}|    (B.6)

Reciprocity of R = (r_{ik}) implies reciprocity of CP = (cp_{ik}). Hence, εr_{ik} = εr_{ki}. A value εr_{ik} = 0 corresponds to a situation of total consistency between r_{ik} and the rest of the entries of R. Of course, the higher the value of εr_{ik}, the more inconsistent r_{ik} is with respect to the remaining entries of R. This observation allows us to measure the consistency level associated with a fuzzy preference relation R (Chiclana et al., 2008):

cl = 1 - \frac{\sum_{i,k=1;\, i \neq k}^{n} \varepsilon r_{ik}}{n^2 - n}    (B.7)

The fuzzy preference relation R is fully consistent if cl = 1. Otherwise, the lower cl, the more inconsistent R is.
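A minimal sketch of the consistency level computation described above, for a reciprocal fuzzy preference relation given as an n×n list of numbers (the diagonal is never accessed); the function name is ours.

```python
def consistency_level(r):
    """Consistency level cl of Eq. (B.7) for a fuzzy preference relation r."""
    n = len(r)
    total_error = 0.0
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            # Estimated value of r[i][k] via all intermediate alternatives j, Eqs. (B.3)-(B.4).
            estimates = [r[i][j] + r[j][k] - 0.5 for j in range(n) if j != i and j != k]
            er = sum(estimates) / (n - 2)
            cp = min(1.0, max(0.0, er))          # normalization of Eq. (B.5)
            total_error += abs(cp - r[i][k])     # error of Eq. (B.6)
    return 1.0 - total_error / (n ** 2 - n)
```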
References Butler, C. T., & Rothstein, A. (1987). On conflict and consensus: A handbook on formal consensus decision making. Portland, Maine: Food Not Bombs Publishing. Cabrerizo, F. J., Al-Hmouz, R., Morfeq, A., Balamash, A. S., Martínez, M. A., & Herrera-Viedma, E. (2017). Soft consensus measures in group decision making using unbalanced fuzzy linguistic information. Soft Computing, 21(11), 3037–3050. Cabrerizo, F. J., Alonso, S., & Herrera-Viedma, E. (2009). A consensus model for group decision making problems with unbalanced fuzzy linguistic information. International Journal of Information Technology & Decision Making, 8(1), 109–131. Cabrerizo, F. J., Chiclana, F., Al-Hmouz, R., Morfeq, A., Balamash, A. S., & Herrera-Viedma, E. (2015). Fuzzy decision making and consensus: Challenges. Journal of Intelligent & Fuzzy Systems, 29(3), 1109–1118. Cabrerizo, F. J., Heradio, R., Pérez, I. J., & Herrera-Viedma, E. (2010). A selection process based on additive consistency to deal with incomplete fuzzy linguistic information. Journal of Universal Computer Science, 16(1), 62–81. Cabrerizo, F. J., Herrera-Viedma, E., & Pedrycz, W. (2013). A method based on PSO and granular computing of linguistic information to solve group decision making problems defined in heterogeneous contexts. European Journal of Operational Research, 230(3), 624–633. Cabrerizo, F. J., Moreno, J. M., Pérez, I. J., & Herrera-Viedma, E. (2010). Analyzing consensus approaches in fuzzy group decision making: Advantages and drawbacks. Soft Computing, 14(5), 451–463. Cabrerizo, F. J., Morente-Molinera, J. A., Pérez, I., López-Gijón, J., & Herrera-Viedma, E. (2015). A decision support system to develop a quality management in academic digital libraries. Information Sciences, 323, 48–58. Cabrerizo, F. J., Ureña, M. R., Pedrycz, W., & Herrera-Viedma, E. (2014). Building consensus in group decision making with an allocation of information granularity. Fuzzy Sets and Systems, 255, 115–127. Capuano, N., Chiclana, F., Fujita, H., Herrera-Viedma, E., & Loia, V. (2017). Fuzzy group decision making with incomplete information guided by social influence. IEEE Transactions on Fuzzy Systems.. doi:10.1109/TFUZZ.2017.2744605. Chiclana, F., Mata, F., Martinez, L., Herrera-Viedma, E., & Alonso, S. (2008). Integration of a consistency control module within a consensus model. International Journal of Uncertaingy, Fuzziness and Knowledge-Based Systems, 16(1), 35–53. Cutello, V., & Montero, J. (1994). Fuzzy rationality measures. Fuzzy Sets and Systems, 62(1), 39–54. Degani, R., & Bortolan, G. (1988). The problem of linguistic approximation in clinical decision making. International Journal of Approximate Reasoning, 2(2), 143–162. Deza, M. M., & Deza, E. (2009). Encyclopedia of distances. Berling, Heidelberg: Springer-Verlag. Evangelos, T. (20 0 0). Multi-criteria decision making methods: A comparative study. Dordrecht: Kluwer Academic Publishers. Golunska, D., & Kacprzyk, J. (2016). A consensus reaching support system for multi-criteria decision makings problems. In G. de Tré, P. Grzegorzewski, J. Kacprzyk, J. W. Owsinski, W. Penczek, & S. Zadrozny (Eds.), Challenging problems and solutions in intelligent systems. In Studies in Computational Intelligence: 634 (pp. 219–235). Springer International Publishing.
Gou, J., Lei, Y.-X., Guo, W.-P., Wang, C., Cai, Y.-Q., & Luo, W. (2017). A novel improved particle swarm optimization algorithm based on individual difference evolution. Applied Soft Computing, 57, 468–481. Herrera, F., Alonso, S., Chiclana, F., & Herrera-Viedma, E. (2009). Computing with words in decision making: Foundations, trends and prospects. Fuzzy Optimization and Decision Making, 8(4), 337–364. Herrera, F., & Herrera-Viedma, E. (20 0 0). Linguistic decision analysis: Steps for solving decision problems under linguistic information. Fuzzy Sets and Systems, 115(1), 67–82. Herrera, F., Herrera-Viedma, E., & Verdegay, J. L. (1997). Linguistic measures based on fuzzy coincidence for reaching consensus in group decision making. International Journal of Approximate Reasoning, 16(3–4), 309–334. Herrera, F., & Martínez, L. (20 0 0). A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Transactions on Fuzzy Systems, 8(6), 746– 752. Herrera-Viedma, E., Alonso, S., Chiclana, F., & Herrera, F. (2007). A consensus model for group decision making with incomplete fuzzy preference relations. IEEE Transactions on Fuzzy Systems, 15(5), 863–877. Herrera-Viedma, E., Cabrerizo, F. J., Kacprzyk, J., & Pedrycz, W. (2014). A review of soft consensus models in a fuzzy environment. Information Fusion, 17, 4–13. Herrera-Viedma, E., Chiclana, F., Herrera, F., & Alonso, S. (2007). Group decision– making model with incomplete fuzzy preference relations based on additive consistency. IEEE Transactions on Systems, Man, and Cybernetics – Part B, Cybernetics, 37(1), 176–189. Herrera-Viedma, E., Herrera, F., Chiclana, F., & Luque, M. (2004). Some issues on consistency of fuzzy preference relations. European Journal of Operational Research, 154(1), 98–109. Hwang, C.-L., & Lin, M.-J. (1987). Group decision making under multiple criteria: methods and applications. Springer-Verlag Berlin Heidelberg. Kacprzyk, J., & Zadrozny, S. (2010). Soft computing and web intelligence for supporting consensus reaching. Soft Computing, 14(8), 833–846. Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks: 4 (pp. 1942–1948). IEEE Press, NJ. Liu, W., Dong, Y., Chiclana, F., Cabrerizo, F., & Herrera-Viedma, E. (2017). Group decision-making based on heterogeneous preference relations with self-confidence. Fuzzy Optimization and Decision Making, 16(4), 429–447. Marinakis, Y., Migdalas, A., & Sifaleras, A. (2017). A hybrid particle swarm optimization – variable neighborhood search algorithm for constrained shortest path problems. European Journal of Operational Research, 261, 819–834. Martínez, L., Ruan, D., & Herrera, F. (2010). Computing with words in decision support systems: An overview on models and applications. International Journal of Computational Intelligence Systems, 3(4), 382–395. Massanet, S., Riera, J. V., Torrens, J., & Herrera-Viedma, E. (2014). A new linguistic computational model based on discrete fuzzy numbers for computing with words. Information Sciences, 258, 277–290. Mendel, J. (2002). An architecture for making judgement using computing with words. International Journal of Applied Mathematics and Computer Sciences, 12(3), 325–335. Millet, I. (1997). The effectiveness of alternative preference elicitation methods in the analytic hierarchy process. Journal of Multi-Criteria Decision Analysis, 6(1), 41–51. Montero, J. (2009). Fuzzy logic and science. In R. 
Seising (Ed.), Views on fuzzy sets and systems from different perspectives (pp. 93–101). Springer-Verlag.
del Moral, M., Chiclana, F., Tapia, J., & Herrera-Viedma, E. (2018). A comparative study on consensus measures in group decision making. International Journal of Intelligent Systems, (in press). Morente-Molinera, J. A., Mezei, J., Carlsson, C., & Herrera-Viedma, E. (2017). Improving supervised learning classification methods using multi-granular linguistic modelling and fuzzy entropy. IEEE Transactions On Fuzzy Systems, 25(5), 1078–1089. Palomares, I., Estrella, F. J., Martínez, L., & Herrera, F. (2014). Consensus under a fuzzy context: Taxonomy, analysis framework AFRYCA and experimental case of study. Information Fusion, 20, 252–271. Pedrycz, W. (2013). Granular computing: Analysis and design of intelligent systems. CRC Press. Pedrycz, W., & Song, M. (2014). A granulation of linguistic information in AHP decision-making problems. Information Fusion, 17, 93–101. Pérez, I. J., Cabrerizo, F. J., & Herrera-Viedma, E. (2010). A mobile decision support system for dynamic group decision-making problems. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 40(6), 1244–1256. Saaty, T. L. (1994). Fundamentals of decision making and priority theory with the AHP. Pittsburg, PA: RWS Publications. Saint, S., & Lawson, J. R. (1994). Rules for reaching consensus: A modern approach to decision making. San Francisco: Jossey-Bass. Shahabinejad, H., & Sohrabpour, M. (2017). A novel neutro energy spectrum unfolding code using swarm optimization. Radiation Physics and Chemistry, 136, 9–16. Song, M., & Pedrycz, W. (2011). From local neural networks to granular neural networks: A study in information granulation. Neurocomputing, 74(18), 3931–3940. Tanino, T. (1984). Fuzzy preference orderings in group decision making. Fuzzy Sets and Systems, 12(2), 117–131. Wang, X., Pedrycz, W., Gacek, A., & Liu, X. (2016). From numeric data to information granules: A design through clustering and the principle of justifiable granularity. Knowledge-Based Systems, 101, 100–113. Xu, Y., Cabrerizo, F. J., & Herrera-Viedma, E. (2017). A consensus model for hesitant fuzzy preference relations and its application in water allocation management. Applied Soft Computing, 58, 265–284. Yager, R. R. (1988). On ordered weighted averaging aggregation operators in multi-criteria decision making. IEEE Transactions on Systems, Man, and Cybernetics, 18(1), 183–190. Yager, R. R. (1996). Quantifier guided aggregation using OWA operators. International Journal of Intelligent Systems, 11(1), 49–73. Yager, R. R., & Filev, D. P. (1999). Induced ordered weighted averaging operators. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cibernetycs, 29(2), 141–150. Zadeh, L. A. (1973). Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(1), 28–44. Zadeh, L. A. (1975). The concept of a linguistic variable and its applications to approximate reasoning. part i. Information Sciences, 8(3), 199–249. Zadeh, L. A. (1983). A computational approach to fuzzy quantifiers in natural languages. Computers & Mathematics with Applications, 9(1), 149–184. Zhang, H., Dong, Y., & Herrera-Viedma, E. (2018). Consensus building for the heterogeneous large-scale GDM with the individual concerns and satisfactions. IEEE Transaction on Fuzzy Systems. doi:10.1109/TFUZZ.2017.2697403. Zhou, D., Gao, X., Liu, G., Mei, C., & Liu, Y. (2011). Randomization in particle swarm optimization for global search ability. 
Expert Systems With Applications, 38(12), 15356–15364.