Quantitative orness for lattice OWA operators




Information Fusion 30 (2016) 27–35



D. Paternain a, G. Ochoa c,d, I. Lizasoain c,d, H. Bustince a,b,∗, R. Mesiar e,f

a Departamento de Automática y Computación, Universidad Pública de Navarra, Campus Arrosadia s/n, Pamplona 31006, Spain
b Institute of Smart Cities, Universidad Pública de Navarra, Campus Arrosadia s/n, Pamplona 31006, Spain
c Departamento de Matemáticas, Universidad Pública de Navarra, Campus Arrosadia s/n, Pamplona 31006, Spain
d Institute for Advanced Materials INAMAT, Universidad Pública de Navarra, Campus Arrosadia s/n, Pamplona 31006, Spain
e Department of Mathematics and Descriptive Geometry, Faculty of Civil Engineering, Slovak University of Technology, Radlinského 11, Bratislava 813 68, Slovakia
f Institute for Research and Applications of Fuzzy Modelling, University of Ostrava, 30. dubna 22, 701 03 Ostrava 1, Czech Republic

∗ Corresponding author at: Departamento de Automática y Computación, Universidad Pública de Navarra, Campus Arrosadia s/n, 31006 Pamplona, Spain. Tel.: +34948169254. E-mail addresses: [email protected] (D. Paternain), [email protected] (G. Ochoa), [email protected] (I. Lizasoain), [email protected] (H. Bustince), [email protected] (R. Mesiar).

Article history: Received 18 May 2015; Revised 16 September 2015; Accepted 23 November 2015; Available online 30 November 2015.

http://dx.doi.org/10.1016/j.inffus.2015.11.007
1566-2535/© 2015 Published by Elsevier B.V.

Abstract

This paper deals with OWA (ordered weighted average) operators defined on any complete lattice endowed with a t-norm and a t-conorm and satisfying a certain local finiteness condition. A parametrization of these operators is suggested by introducing a quantitative orness measure for each OWA operator, based on its proximity to the OR operator. The meaning of this measure is analyzed for some concrete OWA operators used in color image reduction, as well as for some OWA operators used in a medical decision making process.

Keywords: OWA operator; Lattice-valued fuzzy sets; Orness; Image processing; Decision making

1. Introduction

Ordered weighted average (OWA) operators were introduced by Yager in [1] in order to obtain a global value by aggregating several data on the real interval [0, 1]. Unlike for other weighted average operators, the weight associated with each datum in an OWA operator depends only on the position it takes in the descending arrangement of the data. Hence Yager's OWA operators are symmetric, i.e., the global value they provide does not depend on the order in which the data are considered. In addition, they form a wide family of aggregation functions situated between the AND operator, which provides the minimum of the given values, and the OR operator, which gives the maximum of them. For this reason, OWA operators are commonly used in data fusion or multicriteria decision making [2–7].

The orness of an OWA operator was proposed by Yager in [8] as a measure of its proximity to the OR operator. This way, the orness provides a classification of all the OWA operators defined on the real interval [0, 1], giving the OR operator the maximum value 1 and the AND operator the minimum value 0. This classification is of great help in order to choose the weighting vector in each practical application.



In other words, the orness of each OWA operator provides information about the influence that each concrete choice of weighting vector will have on the aggregation result (see also [9–11]).

In some practical applications, the data to aggregate are neither numerical nor linearly ordered [12–16]. This is the case, for example, of fuzzy sets and some of their extensions [17]. Moreover, some medical decision making problems require merging opinions from different experts about options, such as aggressive surgery, conservative surgery, radiotherapy or chemotherapy, which are not easily arranged. The lattice structure is a suitable way to model the interrelations of these options. A lattice structure also occurs in image processing in the RGB system, where each pixel is represented by a triple consisting of three numerical components. In spite of its numerical nature, the set of all of these triples, not totally ordered, forms a complete lattice.

As had been done with other aggregation functions [18], OWA operators were generalized in [19] from the real unit interval to a general complete lattice endowed with a t-norm T and a t-conorm S, whenever the weighting vector satisfied a distributivity condition with respect to T and S. Following the way paved by Yager, a qualitative parametrization of OWA operators defined on an arbitrary finite lattice, based on their proximity to the OR operator, was given in [20]. In addition, the semantic meaning of this qualitative measure was explained by means of its application to some examples taken from both image processing and medical decision making.



In this paper, a quantitative orness measure is proposed in order to get a new parametrization of OWA operators defined on any complete lattice L endowed with a t-norm T and a t-conorm S and satisfying some local finiteness condition. If we defined it as an aggregation of the weights (α1, ..., αn) ∈ Ln as Yager does in the real case, where

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) αi,

we would obtain an element of the lattice L as a result instead of a number. So, we propose in this paper to define firstly a qualitative quantifier Qα : {0, 1, ..., n} → L by means of Qα(0) = 0L and Qα(k) = S[α1, ..., αk] for 1 ≤ k ≤ n, and then to aggregate certain numerical distances M(k) between Qα(k) and Qα(k − 1) to get a quantitative orness measure:

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) M(i).

In the case that L = [0, 1], the distance between each Qα(k) = α1 + ··· + αk and Qα(k − 1) = α1 + ··· + αk−1 is exactly αk. So, our concept of orness is a generalization of Yager's. We prove that this concept satisfies the properties imposed in [11] on any orness measure in an axiomatic framework. In the particular case of a distributive complete lattice, it is shown that the OWA operator can be retrieved from the quantifier associated with its weighting vector.

The advantage of a quantitative orness over a qualitative one is the possibility of dividing OWA operators into OR-like ones, those with an orness greater than or equal to 0.5, and AND-like ones, those whose orness is less than 0.5. In contrast, Example 5.1 shows that a qualitative orness gives more information than the quantitative one about the meaning of choosing one weighting vector or another in a decision making process.

Finally, the new parametrization of OWA operators is applied to analyze the same examples that were studied in [20] by means of the qualitative orness measure. The first one deals with a decision making problem about the kind of medical treatment to use with a breast cancer patient. The second one consists of an image reduction algorithm in the RGB color scheme performed by means of several lattice OWA operators. Of course, different OWA operators provide different solutions to each problem, which are analyzed on the basis of the orness of the OWA operators. Moreover, we also analyze the effect of OWA operators on images altered by a certain type of impulsive noise.

The paper is organized as follows. Section 2 is devoted to bringing together some preliminary concepts and results about OWA operators defined on a complete lattice. Section 3 suggests a quantitative orness for OWA operators defined on complete lattices. Section 4 focuses on the case of a complete distributive lattice, showing that the OWA operator can be retrieved in this case from the quantifier associated with its weighting vector. Finally, Section 5 applies the orness measure to analyze a decision making problem and Section 6 studies the meaning of the orness in an application to a problem of color image reduction. A final section of conclusions and further research closes the paper.

2. Preliminaries

Throughout this paper (L, ≤L) will denote a complete lattice, i.e., a partially ordered set in which all subsets have both a supremum and an infimum. 0L and 1L will respectively stand for the least and the greatest elements of L. A lattice L is said to be complemented if for each a ∈ L there exists some b ∈ L such that a ∧ b = 0L and a ∨ b = 1L. A subset M of L is called a sublattice of (L, ≤L) if whenever a, b ∈ M, then both a ∧ b and a ∨ b belong to M.

Definition 2.1 (see [21]). A map T: L × L → L is said to be a t-norm [resp. t-conorm] on (L, ≤L) if it is commutative, associative, increasing in each component and has a neutral element 1L [resp. 0L].

Notation: For any n > 2, S(a1, ..., an) will denote S(... (S(S(a1, a2), a3), ..., an−1), an). Note that, for any permutation σ of the elements 1, ..., n,

S(a1, ..., an) = S(aσ(1), ..., aσ(n)).

Throughout this paper (L, ≤L, T, S) will denote a complete lattice endowed with a t-norm T and a t-conorm S. As usual, Ln will denote the cartesian product L × ··· × L and LIn will stand for the set of all the n-ary lattice intervals [a1, ..., an] with a1 ≤L ··· ≤L an contained in L.

Recall that an n-ary aggregation function is a function M: Ln → L such that:

(i) M(a1, ..., an) ≤L M(a'1, ..., a'n) whenever ai ≤L a'i for 1 ≤ i ≤ n.
(ii) M(0L, ..., 0L) = 0L and M(1L, ..., 1L) = 1L.

It is said to be idempotent if M(a, ..., a) = a for every a ∈ L and it is called symmetric if, for every permutation σ of the set {1, ..., n}, M(a1, ..., an) = M(aσ(1), ..., aσ(n)).

A wide family of both symmetric and idempotent aggregation functions was introduced by Yager in [1] on the lattice L = [0, 1], the real unit interval:

Definition 2.2 (Yager [1]). Let α = (α1, ..., αn) ∈ [0, 1]n be a weighting vector with α1 + ··· + αn = 1. An n-ary ordered weighted average operator or OWA operator is a map Fα: [0, 1]n → [0, 1] given by

Fα(a1, ..., an) = α1 b1 + ··· + αn bn,

where (b1, ..., bn) is a rearrangement of (a1, ..., an) satisfying that b1 ≥ ··· ≥ bn.

It is easy to check that OWA operators form a family of aggregation functions bounded between the AND-operator or minimum, given by the weighting vector α = (0, ..., 0, 1),

Fα (a1 , . . ., an ) = a1 ∧ · · · ∧ an for any (a1 , . . . , an ) ∈ [0, 1]n and the OR-operator or maximum, given by the weighting vector α = (1, 0, . . . , 0 ),

Fα (a1 , . . ., an ) = a1 ∨ · · · ∨ an for any (a1 , . . . , an ) ∈ [0, 1]n . With the purpose of classifying these operators, Yager introduced in [8] an orness measure for each OWA operator Fα , which depends only on the weighting vector α = (α1 , . . . , αn ), in the following way:

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) αi = Fα(1, (n−2)/(n−1), ..., 1/(n−1), 0).   (1)

It is easy to check that the orness of each operator is a real value situated between 0, corresponding to the AND-operator, and 1, corresponding to the OR-operator. In general, the orness is a measure of the proximity of each OWA operator to the OR-operator. For instance, the orness of the arithmetic mean, provided by the weighting vector (1/n, ..., 1/n), is equal to 1/2. In addition, Yager defines, for any weighting vector α = (α1, ..., αn) ∈ [0, 1]n, a function Qα: {0, 1, ..., n} → [0, 1], called a quantifier, by means of:


Qα(k) = 0 if k = 0, and Qα(k) = α1 + ··· + αk otherwise.   (2)

Notice that Qα is a monotonically increasing function. Moreover, given a monotonically increasing function Q: {0, 1, ..., n} → [0, 1] with Q(0) = 0 and Q(n) = 1, there exists a unique weighting vector α = (α1, ..., αn) ∈ [0, 1]n with Qα = Q. Indeed, for any k = 1, ..., n, put αk = Q(k) − Q(k − 1) and check that Qα = Q.

In [19], n-ary ordered weighted average (OWA) operators are extended to any complete lattice endowed with a t-norm T and a t-conorm S whenever the weighting vector satisfies some distributivity condition.
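Before moving to the lattice setting, the classical constructions recalled so far (Definition 2.2 and Eqs. (1)–(2)) can be summarised in a short Python sketch. The code is ours, added only for illustration; the function names are not part of the original formulation.

```python
# Sketch (ours) of Yager's OWA operator on [0, 1], its orness (Eq. (1))
# and the quantifier Q_alpha (Eq. (2)).

def owa(weights, values):
    """Yager's OWA: apply the weights to the values sorted in descending order."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must add up to 1"
    b = sorted(values, reverse=True)                 # b_1 >= ... >= b_n
    return sum(w * x for w, x in zip(weights, b))

def orness(weights):
    """orness(F_alpha) = (1/(n-1)) * sum_{i=1..n} (n - i) * alpha_i   (Eq. (1))."""
    n = len(weights)
    return sum((n - i) * w for i, w in enumerate(weights, start=1)) / (n - 1)

def quantifier(weights, k):
    """Q_alpha(k) = alpha_1 + ... + alpha_k, with Q_alpha(0) = 0   (Eq. (2))."""
    return sum(weights[:k])

alpha = (0.5, 0.3, 0.2)
print(owa(alpha, (0.4, 0.9, 0.1)))                # 0.59
print(orness(alpha))                              # 0.65 (the arithmetic mean gives 0.5)
print([quantifier(alpha, k) for k in range(4)])   # [0, 0.5, 0.8, 1.0] up to rounding
```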



Definition 2.3 ([19]). Consider any complete lattice (L, ≤L, T, S). A lattice vector α = (α1, ..., αn) ∈ Ln is said to be a weighting vector in (L, ≤L, T, S) if S(α1, ..., αn) = 1L, and it is called a distributive weighting vector in (L, ≤L, T, S) if it also satisfies that for any a ∈ L,

S(T(a, α1), ..., T(a, αn)) = T(a, S(α1, ..., αn))

and consequently equal to a.

Remark 2.4. If L is the real interval [0, 1] with the usual order ≤, T(a, b) = ab for every a, b ∈ [0, 1] and S(a, b) = min{a + b, 1} for every a, b ∈ [0, 1], then α = (α1, ..., αn) ∈ [0, 1]n with S(α1, ..., αn) = 1 is not necessarily a distributive weighting vector in [0, 1]n. In fact, α is a distributive weighting vector if and only if α1 + ··· + αn = 1 (see [19]).

The main difficulty in extending OWA operators from [0, 1] to a more general complete lattice is to get a rearrangement (b1, ..., bn) with b1 ≥L ··· ≥L bn from any given vector (a1, ..., an) ∈ Ln, which may be non totally ordered. To solve this problem, the following construction was introduced.

Definition 2.5 ([19]). Let (L, ≤L, T, S) be a complete lattice. For any vector (a1, ..., an) ∈ Ln, an n-dimensional lattice interval [bn, ..., b1] is defined by

• b1 = a1 ∨ ··· ∨ an ∈ L.
• b2 = [(a1 ∧ a2) ∨ ··· ∨ (a1 ∧ an)] ∨ [(a2 ∧ a3) ∨ ··· ∨ (a2 ∧ an)] ∨ ··· ∨ [an−1 ∧ an] ∈ L.
• bk = ⋁ {aj1 ∧ ··· ∧ ajk | j1 < ··· < jk ∈ {1, ..., n}} ∈ L.
• bn = a1 ∧ ··· ∧ an ∈ L.

Remark 2.6. Let (L, ≤L, T, S) be a complete lattice, (a1, ..., an) ∈ Ln and [bn, ..., b1] as defined above.

(i) a1 ∧ ··· ∧ an = bn ≤L bn−1 ≤L ··· ≤L b2 ≤L b1 = a1 ∨ ··· ∨ an, i.e., [bn, ..., b1] is an n-dimensional lattice interval as defined in Definition 2.3 (i).
(ii) If n is odd and k = (n+1)/2, then bk agrees with the n-ary median function of (a1, ..., an).
(iii) If the set {a1, ..., an} is totally ordered, then [bn, ..., b1] agrees with [aσ(1), ..., aσ(n)] for some permutation σ of {1, ..., n}. Moreover, in this case, bk is the k-th order statistic of the vector (a1, ..., an) for each 1 ≤ k ≤ n, which means that each order statistic of (a1, ..., an) can be calculated by means of a lattice polynomial function.
(iv) An alternative n-dimensional lattice interval [c1, ..., cn] can be constructed from each (a1, ..., an) ∈ Ln by means of

ck = ⋀ {aj1 ∨ ··· ∨ ajk | {j1, ..., jk} ⊆ {1, ..., n}}.

(v) If (L, ≤L) is a complete distributive lattice, then for each vector (a1, ..., an) ∈ Ln, the corresponding n-ary intervals [bn, ..., b1] and [c1, ..., cn] agree.

Definition 2.7 ([19]). Let (L, ≤L, T, S) be a complete lattice.

(i) Define τL: Ln → LIn by assigning to any vector (a1, ..., an) ∈ Ln the n-dimensional lattice interval [bn, ..., b1] defined in Definition 2.5.
(ii) For each distributive weighting vector α = (α1, ..., αn) ∈ Ln, the function Fα: Ln → L given by

Fα(a1, ..., an) = S(T(α1, b1), ..., T(αn, bn)), (a1, ..., an) ∈ Ln,

is called an n-ary OWA operator.

Remark 2.8. Notice that, unlike in the classical case, there can be different distributive weighting vectors in (L, ≤L, T, S) providing the same OWA operator. For instance, if T is the t-norm given by the meet and S is the t-conorm given by the join, then all the weighting vectors α(k) = (1L, ..., 1L, 0L, ..., 0L), with 1L in the first k positions and k = 1, ..., n, give the same OWA operator, the OR-one.

The important properties of [0, 1]-valued OWA operators remain on any complete lattice (L, ≤L, T, S). Indeed, for each distributive weighting vector α, Fα is an idempotent symmetric n-ary aggregation function lying between the operators given by the meet and the join on L which agrees with a particular case of the discrete Sugeno integral in some cases (see [19] for details).

In addition, Definition 2.7 covers some ordered operators given in the literature. If L = [0, 1] with the t-norm and the t-conorm described in Remark 2.4, then, for each distributive weighting vector α = (α1, ..., αn) ∈ [0, 1]n, the OWA operator Fα: Ln → L agrees with that given by Yager, as shown in [19]. In addition, if the t-norm and the t-conorm considered in L = [0, 1] are respectively the meet and the join, Definition 2.7 covers the ordered weighted maximum (OWMax) operator introduced by Dubois and Prade in [23], given by

Fα(a1, ..., an) = (α1 ∧ b1) ∨ ··· ∨ (αn ∧ bn), a1, ..., an ∈ [0, 1],

where, in this case, (b1, ..., bn) is a rearrangement of (a1, ..., an) satisfying that b1 ≥ ··· ≥ bn.

In [20], a qualitative orness of each n-ary OWA operator Fα is defined on any finite bounded lattice L = {a1, ..., al} by

orness(Fα) = Fα(d1, ..., dn),

where d1 ≥L ··· ≥L dn is a descending chain which is built starting from τL[a1, ..., al] (see [20]). The next section is devoted to finding a quantitative parametrization of lattice OWA operators.
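As an illustration of Definitions 2.5 and 2.7, the following Python sketch (ours, not the authors' code) computes the lattice interval τL and the operator Fα under the assumption T = meet and S = join, on a small toy lattice encoded by its order relation.

```python
from itertools import combinations

# Sketch (ours) of Definitions 2.5 and 2.7 for the special case T = meet, S = join.
# Example lattice: the "diamond" {0, a, b, 1} with 0 < a < 1, 0 < b < 1, a and b incomparable.
ELEMS = ["0", "a", "b", "1"]
LEQ = {("0", "0"), ("a", "a"), ("b", "b"), ("1", "1"),
       ("0", "a"), ("0", "b"), ("0", "1"), ("a", "1"), ("b", "1")}

def leq(x, y):
    return (x, y) in LEQ

def join(xs):
    """Least upper bound of a non-empty collection of elements."""
    ubs = [u for u in ELEMS if all(leq(x, u) for x in xs)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def meet(xs):
    """Greatest lower bound of a non-empty collection of elements."""
    lbs = [l for l in ELEMS if all(leq(l, x) for x in xs)]
    return next(l for l in lbs if all(leq(v, l) for v in lbs))

def tau(a):
    """Definition 2.5: b_k is the join of the meets of all k-element choices of inputs."""
    n = len(a)
    return [join([meet(sub) for sub in combinations(a, k)]) for k in range(1, n + 1)]

def lattice_owa(alpha, a):
    """Definition 2.7 with T = meet, S = join: F_alpha(a) = join_k (alpha_k meet b_k)."""
    return join([meet([w, bk]) for w, bk in zip(alpha, tau(a))])

alpha = ["a", "a", "1"]                      # weighting vector: the join of its entries is 1
print(tau(["a", "b", "b"]))                  # ['1', 'b', '0']
print(lattice_owa(alpha, ["a", "b", "b"]))   # 'a'
```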

3. A quantitative orness for lattice operators

The advantage of a quantitative orness over a qualitative one is the possibility of dividing OWA operators into OR-like ones, those with an orness greater than or equal to 0.5, and AND-like ones, those whose orness is less than 0.5. In contrast, Example 5.1 shows that a qualitative orness gives more information than the quantitative one about the meaning of choosing one weighting vector or another in a decision making process.

In this section, (L, ≤L, T, S) will be a complete lattice satisfying the following condition:

(MFC) For any a, b ∈ L with a ≤L b, there exists some maximal chain of finite length l between a and b,

a = a0 <L a1 <L ··· <L al = b.


Definition 3.1. Let (L, ≤L, T, S) be a complete lattice satisfying condition (MFC). For any distributive weighting vector α = (α1, ..., αn) ∈ Ln, define the qualitative quantifier Qα: {0, 1, ..., n} → L by means of

Qα(0) = 0L,  Qα(k) = S(α1, ..., αk) for k = 1, ..., n.

Remark 3.2. Let α = (α1, ..., αn) ∈ Ln be a distributive weighting vector in (L, ≤L, T, S). Then

(i) Qα is a monotonically increasing function. Indeed, for any k = 1, ..., n, Qα(k) = S(Qα(k − 1), αk) ≥L Qα(k − 1).
(ii) For any k = 1, ..., n, Qα(k) = Fα(1L, ..., 1L, 0L, ..., 0L), with 1L in the first k positions.
(iii) Consequently, if β = (β1, ..., βn) ∈ Ln is a distributive weighting vector in (L, ≤L, T, S) with Fα = Fβ (see Remark 2.8), then Qα = Qβ.

This qualitative quantifier associated to any distributive weighting vector in (L, ≤L, T, S) allows us to give a definition of an orness measure of any OWA operator defined on L whenever condition (MFC) is satisfied.

Definition 3.3. Let (L, ≤L, T, S) be a complete lattice satisfying condition (MFC). For any distributive weighting vector in (L, ≤L, T, S), α = (α1, ..., αn) ∈ Ln, consider the qualitative quantifier Qα: {0, 1, ..., n} → L defined in Definition 3.1. For each k = 1, ..., n, call m(k) = d(Qα(k − 1), Qα(k)). If m = m(1) + ··· + m(n), then define

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) m(i)/m.   (3)
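The following Python sketch (ours, not the authors' implementation) illustrates Definition 3.3 on a small finite lattice, assuming T = meet, S = join, and reading d(a, b) as the length of a shortest chain of covering steps from a up to b, in line with Remark 3.4 below; the toy lattice and all names are illustrative.

```python
from collections import deque

# Toy lattice: the diamond {0, a, b, 1}, encoded by its covering (Hasse) relation.
COVERS = {"0": ["a", "b"], "a": ["1"], "b": ["1"], "1": []}
ELEMS = list(COVERS)

def leq(x, y):
    """x <= y iff y is reachable from x along covering edges."""
    seen, stack = set(), [x]
    while stack:
        z = stack.pop()
        if z == y:
            return True
        if z not in seen:
            seen.add(z)
            stack.extend(COVERS[z])
    return False

def join(xs):
    ubs = [u for u in ELEMS if all(leq(x, u) for x in xs)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def dist(a, b):
    """Length of a shortest chain of covers from a up to b (assumes a <= b)."""
    queue = deque([(a, 0)])
    while queue:
        z, k = queue.popleft()
        if z == b:
            return k
        queue.extend((w, k + 1) for w in COVERS[z])
    raise ValueError("a is not below b")

def quantitative_orness(alpha):
    """Formula (3): (1/(n-1)) * sum_i (n-i) * m(i)/m with m(i) = d(Q(i-1), Q(i))."""
    n = len(alpha)
    Q = ["0"] + [join(alpha[:k]) for k in range(1, n + 1)]     # Q(0) = 0_L
    m = [dist(Q[k - 1], Q[k]) for k in range(1, n + 1)]
    return sum((n - i) * m[i - 1] for i in range(1, n + 1)) / ((n - 1) * sum(m))

print(quantitative_orness(["1", "0", "0"]))   # 1.0   (OR-like extreme)
print(quantitative_orness(["a", "b", "1"]))   # 0.75  (m = (1, 1, 0))
```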

An example of this concept can be seen in Example 5.1.

Remark 3.4. Unlike the classical orness, which can achieve any value from [0, 1], Definition 3.3 can only produce rational numbers with special denominators related to n and the lattice L. However, if we assign any non-negative real number (weight) to each edge of the lattice, as in graph theory, the distance between two elements in L will be the sum of the weights occurring in the shortest maximal chain between them. This way, the orness given in Definition 3.3 can achieve any value from [0, 1] too.

Notice that the definition of orness(Fα) is associated to the OWA operator itself and does not depend on the weighting vector α chosen:

Theorem 3.5. Let (L, ≤L, T, S) be a complete lattice satisfying condition (MFC). If α = (α1, ..., αn) ∈ Ln and β = (β1, ..., βn) ∈ Ln are distributive weighting vectors in (L, ≤L, T, S) with Fα = Fβ, then orness(Fα) = orness(Fβ).

Proof. Since Fα = Fβ, Remark 3.2 (iii) assures that Qα = Qβ. Therefore, the distances {m(k) = d(Qα(k − 1), Qα(k)) | 1 ≤ k ≤ n} are the same as those given by Qβ and consequently orness(Fα) = orness(Fβ). □

In [11] an axiomatic framework is given for an orness measure. The authors say that a numerical function of the weighting vectors will be called an orness measure if it satisfies the following axioms:

(O1) The orness of the weighting vector (1, 0, ..., 0), corresponding to the maximum aggregation, is equal to 1.
(O2) The orness of the weighting vector (0, ..., 0, 1), corresponding to the minimum aggregation, is equal to 0.
(O3) The orness of the weighting vector (1/n, ..., 1/n), corresponding to the arithmetic mean, is equal to 1/2.
(O4) If α and β are two weighting vectors with β = (α1, ..., αi + δ, ..., αj − δ, ..., αn), with δ > 0 and i < j, then orness(β) > orness(α).

The next results show that Definition 3.3 does satisfy all of these properties in some sense:

Proposition 3.6. Let (L, ≤L, S, T) be a complete lattice satisfying condition (MFC) and Fα: Ln → L an arbitrary OWA operator.

(i) If α = (1L, 0L, ..., 0L) ∈ Ln, then orness(Fα) = 1.
(ii) orness(Fα) = 0 if and only if Fα is the AND-operator.
(iii) If α = (0L, ..., 0L, 1L, 0L, ..., 0L) ∈ Ln, with 1L in the k-th position and 1 < k < n, then orness(Fα) = (n−k)/(n−1), as in the case of standard orness.

Proof. (i) If α = (1L, 0L, ..., 0L) ∈ Ln, then Qα(0) = 0L and Qα(k) = 1L for k = 1, ..., n. In this case, m(2) = ··· = m(n) = 0. Therefore

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) m(i)/m = ((n−1)/(n−1)) · (m(1)/m(1)) = 1.

(ii) The only way of getting 0 in formula (3), with some m(k) ≠ 0, is making m(k) = 0 for 1 ≤ k ≤ n − 1 and m(n) ≠ 0, or equivalently, 0L = Q(0) = Q(1) = ··· = Q(n − 1), i.e., α1 = ··· = αn−1 = 0L and αn = S(0L, αn) = S(α1, ..., αn) = 1L. In this case, for any (a1, ..., an) ∈ Ln with τL(a1, ..., an) = [bn, ..., b1],

Fα(a1, ..., an) = S[T(0L, b1), ..., T(0L, bn−1), T(1L, bn)] = bn = a1 ∧ ··· ∧ an.

(iii) If α = (0L , . . . 0L , 1L , 0L , . . . , 0L ) ∈ Ln with 1 < k < n, then Qα (0 ) = · · · = Qα (k − 1 ) = 0L and Qα (k ) = · · · = Qα (n ) = 1L . In this case, m(1 ) = · · · = m(k − 1 ) = m(k + 1 ) = · · · = m(n ) = 0. Therefore

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) m(i)/m = ((n−k)/(n−1)) · (m(k)/m(k)) = (n−k)/(n−1),

as in the case of standard orness. □

Remark 3.7. In the case that the t-conorm S is that given by the join, item (i) in Proposition 3.6 improves for an arbitrary complete lattice (L, ≤L, ∨, T) satisfying condition (MFC): orness(Fα) = 1 if and only if Fα is the OR-operator. Indeed, the only way of getting 1 in formula (3) is making m(k) = 0 for k ≥ 2, or equivalently, Q(1) = ··· = Q(n), i.e.

α1 = S(α1, ..., αn) = 1L. In this case, for any (a1, ..., an) ∈ Ln with τL(a1, ..., an) = [bn, ..., b1],

Fα (a1 , . . . , an ) = T (1L , b1 ) ∨ · · · ∨ T (αn−1 , bn−1 ) ∨ T (αn , bn ) = b1 = a1 ∨ · · · ∨ an because T(α k , bk ) ≤ L bk ≤ L b1 for k ≥ 2. However, if we consider for instance the lattice L = {0L , a, b, 1L } with 0L < a < b < 1L , the t-norm given by the meet and the t-conorm S defined by

S(0, x ) = x for all x ∈ L, S(a, a ) = a and S(x, y ) = 1L otherwise, then the weighting vector α = (1L , a, a ) satisfies that Q (0 ) = 0L , Q (1 ) = Q (2 ) = Q (3 ) = 1L and hence orness(Fα ) = 1, but

Fα (b, a, a ) = S(1L ∧ b, a ∧ a, a ∧ a ) = 1L , whereas b ∨ a ∨ a = b. The following results are easy to prove. Proposition 3.8. Let (L, ≤L , S, T) be a complete lattice satisfying condition (MFC). If α ∈ Ln is a distributive weighting vector satisfying that m(1 ) = · · · = m(n ), as defined in Definition 3.3, then

orness(Fα) = 1/2.
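This is a direct computation from formula (3) (the verification is ours): if m(1) = ··· = m(n), then m(i)/m = 1/n for every i, so

orness(Fα) = (1/(n−1)) Σ_{i=1}^{n} (n − i) · (1/n) = (1/(n(n−1))) · (n(n−1)/2) = 1/2.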


Theorem 3.9. Let (L, ≤L, S, T) be a complete lattice satisfying condition (MFC) and α = (α1, ..., αn) ∈ Ln a distributive weighting vector. If β = (β1, ..., βn) is another weighting vector with mβ(i) = mα(i) + δ for some natural number δ > 0, mβ(j) = mα(j) − δ for some j > i and mβ(k) = mα(k) for all k ≠ i, j, then

orness(Fβ) > orness(Fα).

4. The distributive case

If a monotonically increasing function Q: {0, 1, ..., n} → L, with Q(0) = 0L and Q(n) = 1L, is defined on any complete lattice (L, ≤L, T, S), then there is not always some weighting vector α = (α1, ..., αn) ∈ Ln with Qα = Q.

Example 4.1. Let L = {0L, a, b, 1L} be a finite lattice with 0L < a < b < 1L. Consider the t-norm given by the meet and the t-conorm S defined by

S(0, x) = x for all x ∈ L, S(a, a) = a and S(x, y) = 1L otherwise. The function Q: {0, 1, 2, 3} → L given by Q(0) = 0L, Q(1) = a, Q(2) = b and Q(3) = 1L satisfies the above conditions, but Q ≠ Qα for any weighting vector α = (α1, α2, α3) in (L, ≤L, ∧, S).

Things are different (see Theorem 4.5) if (L, ≤L, T, S) satisfies the following distributive property:

(D) T(a, S(b, c)) = S(T(a, b), T(a, c)) for any a, b, c ∈ L.

For that reason, this section is devoted to complete lattices (L, ≤L, T, S) satisfying the distributive property (D).

Remark 4.2 ([24], Propositions 3.5–3.7). If (L, ≤L, T, S) is a complete lattice satisfying the distributive property (D), then S is the t-conorm given by the join on (L, ≤L).

If the t-norm and the t-conorm considered are respectively the meet (greatest lower bound) and the join (least upper bound), we will write (L, ≤L, ∧, ∨). If one of the following equivalent properties (see [25]) is satisfied on a complete lattice (L, ≤L, ∧, ∨), it will be called a complete distributive lattice:

(i) a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) for all a, b, c ∈ L.
(ii) a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) for all a, b, c ∈ L.

Any complete distributive lattice is modular, which means that, for every a, b, c ∈ L with c ≤L a, then (a ∧ b) ∨ c = a ∧ (b ∨ c).

In order to calculate the orness of a lattice OWA operator, the following result, a generalization of that due to Jordan–Dedekind, is of interest.

Theorem 4.3 (see [26]). Let (L, ≤L, ∧, ∨) be a complete modular lattice satisfying condition (MFC). Then, for any a, b ∈ L with a ≤L b, all the maximal chains between a and b have the same length.

Lemma 4.4. Let (L, ≤L, T, ∨) be a complete lattice satisfying distributive property (D). Obviously, any weighting vector α = (α1, ..., αn) ∈ Ln is distributive. In addition, for any k = 1, ..., n:

(i) Qα(k) = α1 ∨ ··· ∨ αk.
(ii) If we call γk = α1 ∨ ··· ∨ αk for k = 1, ..., n, then the OWA operators Fα and Fγ agree.

Proof. It is an easy check. □

Theorem 4.5. Let (L, ≤L, T, ∨) be a complete lattice satisfying distributive property (D). For any monotonically increasing function Q: {0, 1, ..., n} → L with Q(0) = 0L and Q(n) = 1L:

(i) There exists some weighting vector α = (α1, ..., αn) ∈ Ln with Qα = Q.


(ii) The weighting vector α occurring in (i) is not necessarily unique. However, if both α = (α1 , . . . , αn ) and β = (β1 , . . . , βn ) are weighting vectors in Ln with Qα = Qβ , then the OWA operators Fα and Fβ agree on Ln . Proof. (i) For each k = 1, . . . , n, put αk = Q (k ). We will see inductively that Qα = Q. Firstly, Qα (1 ) = α1 = Q (1 ). Then, for each k = 2, . . . , n,

Qα(k) = Qα(k − 1) ∨ αk = Q(k − 1) ∨ Q(k) = Q(k)

by using the inductive hypothesis in the next-to-last equality and the monotonicity of Q in the last one.

(ii) Let α and β be weighting vectors in Ln with Qα = Qβ. For each k = 1, ..., n, put γk = α1 ∨ ··· ∨ αk = Qα(k). Since β1 ∨ ··· ∨ βk = Qβ(k) = Qα(k) by the hypothesis, Lemma 4.4 asserts that, for each (a1, ..., an) ∈ Ln,

Fα(a1, ..., an) = Fγ(a1, ..., an) = Fβ(a1, ..., an). □

Example 4.6. Consider the lattice (L, ≤L) given in Example 5.1 and the increasing map Q: {0, 1, ..., 5} → L given by

Q(0) = 0L, Q(1) = C1, Q(2) = C1, Q(3) = C1R, Q(4) = C2R and Q(5) = 1L.

The weighting vector α ∈ L5 given by

α1 = C1 , α2 = C1 , α3 = C1 R, α4 = C2 R and α5 = 1L satisfies that Qα = Q. In addition, the weighting vector β ∈ L5 given by

β1 = C1, β2 = W, β3 = R, β4 = C2 and β5 = C2Q

satisfies that Qβ = Q = Qα. Then, Theorem 4.5 (ii) asserts that the OWA operators Fα and Fβ agree on L5.

5. Using OWA operators in decision making problems

It is not easy to know a priori the influence that a given choice of weighting vector will have on the final data aggregation. In these two sections we show that the orness is able to provide an idea of this influence. In [20], several OWA operators were applied to a decision making problem and the results obtained were analyzed on the basis of the qualitative orness of each OWA operator applied. Next, we will calculate the new quantitative orness of each of them.

Example 5.1 (see [20]). The different kinds of treatment that any breast cancer patient can receive in a hospital cannot be arranged, according to their aggressiveness, in a linear way. The following lattice allows us to model the relations between them.


N = 0L → No tumour
W → Waiting for further revision
C1 → Breast-conserving surgery
C2 → Aggressive surgery
R → Radiation therapy
Q → Chemotherapy
C1R → C1 and R
C2R → C2 and R
C1Q → C1 and Q
C2Q → C2 and Q
CQR = 1L → C2, R and Q

Consider now that the decision on the kind of treatment that each breast cancer patient will receive in a certain hospital is made by means of an OWA operator defined on the previous lattice L with all the possible options. After the initial tests, each of three medical teams proposes separately a kind of treatment. The hospital has an established weighting vector in order to aggregate the three proposals by means of the corresponding OWA operator, which will give the treatment to use. In this case, the quantitative orness of this operator is a measure of the hospital preference for a certain aggressiveness degree of the treatment to use. However, unlike the qualitative orness, it does not measure the kind of treatment that is preferred by the hospital. We consider the t-norm given by the meet, the t-conorm given by the join and the ternary OWA operator Fα obtained for the weighting vector α = (C2 , R, Q ) ∈ L3 . The qualitative orness of Fα was shown in [20] to be C2 . In order to calculate the quantitative one, notice that Qα (0 ) = 0L , Qα (1 ) = C2 , Qα (2 ) = C2 R and Qα (3 ) = 1L . Hence mα (1 ) = 3, mα (2 ) = mα (3 ) = 1 and m = 3 + 1 + 1 = 5, whence

orness(Fα) = (1/2) Σ_{i=1}^{3} (3 − i) m(i)/m = (1/2) (2 · (3/5) + 1/5) = 0.7.

If we use this OWA operator to merge the following examples of medical proposal aggregations, we obtain:

Fα(W, C1, R) = C1
Fα(C2Q, C2R, 1L) = C2R
Fα(W, W, R) = W
Fα(R, C1, Q) = C2
Fα(W, C1, Q) = C1
Fα(1L, 1L, C2Q) = 1L
Fα(C1R, C1Q, C1) = C2

If the hospital chooses a different weighting vector, β = (R, C2, Q) ∈ L3, whose qualitative orness, calculated in [20], is C1R, then Qβ(0) = 0L, Qβ(1) = R, Qβ(2) = C2R, Qβ(3) = 1L. Hence mβ(1) = 2, mβ(2) = 2, mβ(3) = 1 and m = 2 + 2 + 1 = 5, whence

orness(Fβ) = (1/2) Σ_{i=1}^{3} (3 − i) m(i)/m = (1/2) (2 · (2/5) + 2/5) = 0.6,

which means that the hospital prefers a less aggressive treatment than in the case of the vector α, as is shown by aggregating the same examples as before by means of the new OWA operator Fβ:

Fβ(W, C1, R) = R
Fβ(C2Q, C2R, 1L) = C2R
Fβ(W, W, R) = R
Fβ(R, C1, Q) = R
Fβ(W, C1, Q) = W
Fβ(1L, 1L, C2Q) = 1L
Fβ(C1R, C1Q, C1) = C1R

6. Using OWA operators in image processing

Aggregation operators, in general, and OWA operators, in particular, have been used in many areas of image processing. Some examples of recent applications in this topic are image segmentation [27], noise removal or filtering [28], edge detection [29] or image reduction [30,31], among others. In some of these applications, OWA operators are used to fuse information coming from the image, generally pixel intensities.

If we focus on algorithms for processing color images, we need to adapt the definition of OWA operators in order to deal with color images and, therefore, with different color spaces. In this work we consider images in the RGB color scheme, where each pixel is represented by three integer numbers (between 0 and 255). They represent the amount of red, green and blue color, respectively. Then, if we call L = {0, 1, ..., 255}, we can see an image of M rows and N columns as a mapping {1, ..., M} × {1, ..., N} → L × L × L. It is now clear that if we want to fuse several pixel intensities into a single value by means of OWA operators, we can use OWA operators defined on the product lattice L3 (see also [22]).

The objective of this section is to study the use of several weighting vectors in the field of image reduction, following the ideas in [31]. The definition of orness proposed in this paper enables us to analyze the behavior of each weighting vector according to its orness. From now on, we will consider as S and T the t-conorm and the t-norm given by the join and the meet, respectively.

6.1. On reducing color images

Reducing an image consists in compacting the visual representation of the image while trying to preserve its original properties. In other words, we reduce the spatial resolution of an image, so that the number of rows and columns of the resulting image is lower than in the original. Suppose we start from an image Q of R × C pixels and we want to obtain a new image Q′ of R′ × C′ pixels, where R = k1R′, C = k2C′ and k1, k2 ∈ {1, 2, ...}. One of the simplest and most effective algorithms consists in dividing the original image into non-overlapping blocks of k1 × k2 pixels and fusing the k1 × k2 pixels of each block into a single one. This new pixel, which represents the group of pixels in the original block, will be located in the new image Q′ following the same order as in the original. In this way, the image Q′ will be a reduced representation of the original.

In this application we will use OWA operators for fusing the k1 × k2 pixels of each block. Our objective is to apply the same OWA operator to every block and then to compare the reduced images obtained by each considered OWA operator. Notice that the quality of the reduced image is completely determined by the weighting vector of the OWA operator. In order to analyze the goodness of each OWA operator, we will reconstruct each reduced image to its original size. This process will be done by means of bilinear interpolation, one of the most common methods used to resize (enlarge) images. Concretely, we will use the imresize function of Matlab R2013b. Finally, we will compare the reconstructed image with the original, establishing an error or difference between the images. As the reconstruction method is fixed for every image, the error is dependent on the reduction procedure. This will allow us to establish the quality of each OWA operator.

Consider the original RGB color images (321 × 480 pixels) shown in Fig. 1 and let k1 = k2 = 3. This means that we will divide the images into 3 × 3 blocks and aggregate every 9 pixels within each block. The reduced images will be of dimension 107 × 160. Consider now 8 different weighting vectors given by

α1 = ((255, 192, 128), (192, 255, 192), (128, 192, 255), (64, 128, 192), (0, 64, 128), (0, 0, 64), (0, 0, 0), (0, 0, 0), (0, 0, 0))
α2 = ((192, 128, 64), (255, 192, 128), (192, 255, 192), (128, 192, 255), (64, 128, 192), (0, 64, 128), (0, 0, 64), (0, 0, 0), (0, 0, 0))
α3 = ((128, 64, 0), (192, 128, 64), (255, 192, 128), (192, 255, 192), (128, 192, 255), (64, 128, 192), (0, 64, 128), (0, 0, 64), (0, 0, 0))
α4 = ((64, 0, 0), (128, 64, 0), (192, 128, 64), (255, 192, 128), (192, 255, 192), (128, 192, 255), (64, 128, 192), (0, 64, 128), (0, 0, 64))
α5 = ((0, 0, 0), (64, 0, 0), (128, 64, 0), (192, 128, 64), (255, 192, 128), (192, 255, 192), (128, 192, 255), (64, 128, 192), (0, 64, 128))
α6 = ((0, 0, 0), (0, 0, 0), (64, 0, 0), (128, 64, 0), (192, 128, 64), (255, 192, 128), (192, 255, 192), (128, 192, 255), (64, 128, 192))
α7 = ((0, 0, 0), (0, 0, 0), (0, 0, 0), (64, 0, 0), (128, 64, 0), (192, 128, 64), (255, 192, 128), (192, 255, 192), (128, 192, 255))
α8 = ((0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (64, 0, 0), (128, 64, 0), (192, 128, 64), (255, 192, 128), (192, 255, 255))

Fig. 1. Original RGB color images.

Using the quantitative orness measure proposed in this work, we can classify the OWA operators according to their orness values, which are shown in Table 1.

Table 1. Orness value for each OWA operator associated with weighting vectors α1, ..., α8.

W. vectors:  α1      α2      α3      α4      α5      α6      α7      α8
Orness:      0.9735  0.9202  0.8400  0.7329  0.5632  0.4382  0.3132  0.1985

Then, since we have 8 different weighting vectors, we will obtain 8 different reduced images for each original image in Fig. 1 (in total, 4 × 8 = 32 reduced images). The sets of reduced images obtained by the different weighting vectors are shown in Figs. 2–5, where each figure corresponds to one original image. Observe that the behavior of each OWA operator is similar for every test image, since it is determined by its orness value: while OWA operators associated with higher orness values produce brighter images, OWA operators with lower orness darken the obtained images. This can be easily observed, for example, in the images of Fig. 4, where the white stripes in the first images (higher orness values) are much wider than the black stripes. The same can be observed in Fig. 5, where the brightness of the snow decreases as the orness value decreases.

In order to establish an error for each OWA operator, the next step consists in reconstructing each reduced image to its original size and measuring the error between each reconstructed image and the corresponding original.
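As an illustration of how such a reduction can be implemented, the following Python sketch (ours, not the authors' code) applies the block fusion under the choices made here: T and S are the componentwise meet and join on the product lattice L3, so that on each colour channel the operator reduces to maxk min(αk, bk), where bk is the k-th largest value of the block (cf. Remark 2.6 (iii)).

```python
import numpy as np

def channel_owa(weights, values):
    """OWA with T = min and S = max on one totally ordered channel."""
    b = np.sort(values)[::-1]                   # b_1 >= ... >= b_n
    return np.max(np.minimum(weights, b))

def reduce_image(img, weights, block=3):
    """Fuse every non-overlapping block x block window of an RGB image (H x W x 3,
    values in 0..255) with the lattice OWA operator given by `weights`, a list of
    block*block RGB triples."""
    h, w, _ = img.shape
    h, w = (h // block) * block, (w // block) * block        # crop to full blocks
    weights = np.asarray(weights)                            # shape (block*block, 3)
    out = np.empty((h // block, w // block, 3), dtype=img.dtype)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block].reshape(-1, 3)
            for c in range(3):                               # product lattice: channel by channel
                out[i // block, j // block, c] = channel_owa(weights[:, c], patch[:, c])
    return out

# The weighting vector alpha_5 listed above (its componentwise join is (255, 255, 255)).
alpha5 = [(0, 0, 0), (64, 0, 0), (128, 64, 0), (192, 128, 64), (255, 192, 128),
          (192, 255, 192), (128, 192, 255), (64, 128, 192), (0, 64, 128)]
# img = ...                        # an H x W x 3 uint8 array loaded with any image library
# small = reduce_image(img, alpha5)
```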

Fig. 2. Reduced version of image (1) of Fig. 1 using the weighting vectors α 1 , α 2 and α 3 (first row), α 4 , α 5 and α 6 (second row) and α 7 and α 8 (third row).

Fig. 3. Reduced version of image (2) of Fig. 1 using the weighting vectors α 1 , α 2 and α 3 (first row), α 4 , α 5 and α 6 (second row) and α 7 and α 8 (third row).

We will use the PSNR measure, which is given by

PSNR(A, B) = 10 log10 (255² / MSE(A, B)),

where

MSE(A, B) = (1 / (321 × 480)) Σ_{i=1}^{321} Σ_{j=1}^{480} (A(i, j) − B(i, j))².

Notice that the lower the PSNR value, the more different A and B are. The PSNR values of each reconstructed image are shown in Table 2. In general, we can see that, in order to reduce images, the worst OWA operators are those associated with very high or very low orness values. On the contrary, OWA operators associated with intermediate orness values obtain better results, since they are able to capture in a better way all the intensities within each block.

Fig. 4. Reduced version of image (3) of Fig. 1 using the weighting vectors α1, α2 and α3 (first row), α4, α5 and α6 (second row) and α7 and α8 (third row).

Fig. 5. Reduced version of image (4) of Fig. 1 using the weighting vectors α1, α2 and α3 (first row), α4, α5 and α6 (second row) and α7 and α8 (third row).

Table 2. PSNR of reconstructed images (columns) with respect to each weighting vector (rows).

       Im. (1)   Im. (2)   Im. (3)   Im. (4)
α1     22.2691   20.6228   16.6117   20.2663
α2     23.2132   21.6160   17.6506   21.4769
α3     24.4057   23.2483   18.9866   22.7619
α4     25.5936   24.6771   20.1902   23.6756
α5     26.2842   25.4379   20.7756   23.8470
α6     26.1440   25.3410   20.5869   23.2322
α7     25.4117   24.5049   19.7464   22.0494
α8     24.3591   23.3226   18.6338   20.6969
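A short sketch (ours, not the Matlab implementation used for the experiments) of the error measure just defined; it assumes A and B are same-shaped arrays with values in 0..255 and takes the mean squared error over all entries.

```python
import numpy as np

def psnr(a, b):
    """PSNR(A, B) = 10 * log10(255^2 / MSE(A, B)); higher PSNR means more similar images."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```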

6.2. On reducing images altered with noise

Although noise reduction is not a typical step in image reduction algorithms, in this subsection we are interested in analyzing the effect of each OWA operator when the original images are corrupted by impulsive noise (salt and pepper noise). In order to do this, we have selected only images (3) and (4) of Fig. 1 and we have corrupted them with impulsive salt and pepper noise (20% probability). This type of noise randomly changes the intensity of a pixel to either 0 (black) or 255 (white). The corrupted images can be seen in Fig. 6, while the reduced ones obtained using the same 8 weighting vectors are shown in Figs. 7 and 8.

Fig. 6. Original RGB color images.

Fig. 7. Reduced version of corrupted image (3) of Fig. 6 using the weighting vectors α1, α2 and α3 (first row), α4, α5 and α6 (second row) and α7 and α8 (third row).

Fig. 8. Reduced version of corrupted image (4) of Fig. 6 using the weighting vectors α1, α2 and α3 (first row), α4, α5 and α6 (second row) and α7 and α8 (third row).

Observe that in the reduced images obtained with OWA operators associated with higher orness values the white pixels have been highlighted, while the black pixels are filtered out. On the contrary, with low orness values, black pixels remain in the reduced image, while white pixels disappear. If we analyze OWA operators with intermediate orness values, we observe that both white and black pixels have been filtered.


For this type of noise we can conclude that the best weighting vectors are those with low weights (close to 0) both at the beginning and at the end of the vector. This means that the central values are more important in the result, since these values are probably not contaminated by noise. However, notice that in other applications we may be more interested in OWA operators with either high or low orness values. For example, if our application needs to detect or highlight bright pixels, then our OWA operator must be associated with a high orness.

7. Conclusions

A quantitative orness measure can be defined for OWA operators with values on a complete lattice endowed with a t-norm T and a t-conorm S whenever it satisfies some local finiteness condition. This concept is introduced by means of a qualitative quantifier built starting from the weighting vector associated with the OWA operator. In the particular case of a complete distributive lattice, the OWA operator can be retrieved from the quantifier associated with it.

The orness measure gives some idea of the proximity of each OWA operator to the OR-operator, in the line of the orness given by Yager for OWA operators defined on the real interval [0, 1], and allows us to classify all the OWA operators defined on such a lattice.

When applying OWA operators defined on a finite distributive lattice to an image reduction problem, it can be seen that the orness of a given operator influences the results obtained (reduced images). The choice of the suitable OWA operator for a concrete application can be made attending to this orness value. In the same way, the orness of the weighting vector used to solve a decision making problem gives an idea of the preferences chosen a priori, as is shown in an example of the kind of treatment to apply to a breast cancer patient.

Regarding future lines of research, it seems interesting to compare the use of lattice OWA operators with other methods used in the contexts of the given applications. Moreover, it is also interesting to study how to construct the appropriate weighting vector taking into account the data of the problem.

Acknowledgments

D. Paternain and H. Bustince have been partially supported by grant TIN2013-40765-P. R. Mesiar was supported by the grant VEGA 1/0420/15 and by the European Regional Development Fund in IT4 Innovations Centre of Excellence project reg. no. CZ.1.05/1.1.00/02.0070.

References

[1] R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision-making, IEEE Trans. Syst. Man Cybern. 18 (1988) 183–190.
[2] D. Dubois, H. Prade, On the use of aggregation operations in information fusion processes, Fuzzy Sets Syst. 142 (2004) 143–161.
[3] S.-M. Zhou, F. Chiclana, R.I. John, J.M. Garibaldi, Type-1 OWA operators for aggregating uncertain information with uncertain weights induced by type-2 linguistic quantifiers, Fuzzy Sets Syst. 159 (2008) 3281–3296.


[4] R.R. Yager, G. Gumrah, M. Reformat, Using a web personal evaluation tool-PET for lexicographic multi-criteria service selection, Knowl.-Based Syst. 24 (2011) 929–942.
[5] R.R. Yager, N. Alajlan, A generalized framework for mean aggregation: toward the modeling of cognitive aspects, Inf. Fusion 17 (2014) 65–73.
[6] E. Barrenechea, J. Fernandez, M. Pagola, F. Chiclana, H. Bustince, Construction of interval-valued fuzzy preference relations from ignorance functions and fuzzy preference relations. Application to decision making, Knowl.-Based Syst. 58 (2014) 33–44.
[7] Z.C. Chen, P.H. Liu, Z. Pei, An approach to multiple attribute group decision making based on linguistic intuitionistic fuzzy numbers, Int. J. Comput. Intell. Syst. 8 (2015) 747–760.
[8] R.R. Yager, Families of OWA operators, Fuzzy Sets Syst. 59 (1993) 125–148.
[9] J.J. Dujmović, A generalization of some functions in continuous mathematical logic - evaluation functions and its applications (in Serbo-Croatian), in: Proceedings of the Informatica Conference, Paper d27, Bled, Yugoslavia, 1973.
[10] X. Liu, S. Han, Orness and parameterized RIM quantifier aggregation with OWA operators: a summary, Int. J. Approx. Reason. 48 (2008) 77–97.
[11] A. Kishor, A.K. Singh, N.R. Pal, Orness measure of OWA operators: a new approach, IEEE Trans. Fuzzy Syst. 22 (2014) 1039–1045.
[12] X.B. Li, D. Ruan, J. Liu, Y. Xu, A linguistic-valued weighted aggregation operator to multiple attribute group decision making with quantitative and qualitative information, Int. J. Comput. Intell. Syst. 1 (2008) 274–284.
[13] H. Bustince, Interval-valued fuzzy sets in soft computing, Int. J. Comput. Intell. Syst. 3 (2010) 215–222.
[14] H. Bustince, E. Barrenechea, T. Calvo, S. James, G. Beliakov, Consensus in multi-expert decision making problems using penalty functions defined over a Cartesian product of lattices, Inf. Fusion 17 (2014) 56–64.
[15] L. Zou, Y.X. Zhang, X. Liu, Linguistic-valued approximate reasoning with lattice ordered linguistic-valued credibility, Int. J. Comput. Intell. Syst. 8 (2015) 53–61.
[16] N. Agell, M. Sánchez, F. Prats, L-fuzzy sets for group linguistic preference modeling: an application to assess a firm's performance, in: Proceedings of IEEE International Conference on Fuzzy Systems, 2015.
[17] H. Bustince, E. Barrenechea, M. Pagola, J. Fernandez, Z. Xu, B. Bedregal, J. Montero, H. Hagras, F. Herrera, B. De Baets, A historical account of types of fuzzy sets and their relationships, IEEE Trans. Fuzzy Syst., in press. doi:10.1109/TFUZZ.2015.2451692.
[18] M. Komorníková, R. Mesiar, Aggregation functions on bounded partially ordered sets and their classification, Fuzzy Sets Syst. 175 (2011) 48–56.
[19] I. Lizasoain, C. Moreno, OWA operators defined on complete lattices, Fuzzy Sets Syst. 224 (2013) 36–52.
[20] G. Ochoa, I. Lizasoain, D. Paternain, H. Bustince, N.R. Pal, Qualitative orness for OWA operators, Inf. Fusion, submitted.
[21] B. De Baets, R. Mesiar, Triangular norms on product lattices, Fuzzy Sets Syst. 104 (1999) 61–75.
[22] I. Lizasoain, G. Ochoa, Generalized Atanassov's operators defined on lattice multisets, Inf. Sci. 278 (2014) 408–422.
[23] D. Dubois, H. Prade, A review of fuzzy set aggregation connectives, Inf. Sci. 36 (1985) 85–121.
[24] G. De Cooman, E.E. Kerre, Order norms on bounded partially ordered sets, J. Fuzzy Math. 2 (1993) 281–310.
[25] G. Grätzer, General Lattice Theory, Birkhäuser Verlag, Basel, 1978.
[26] G. Grätzer, E.T. Schmidt, On the Jordan–Dedekind chain condition, Acta Sci. Math. (Szeged) 18 (1957) 52–56.
[27] N. Alajlan, Y. Bazi, H.S. AlHichri, F. Melgani, R.R. Yager, Using OWA fusion operators for the classification of hyperspectral images, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 6 (2013) 602–614.
[28] L.G. Jaime, E.E. Kerre, M. Nachtegael, H. Bustince, Consensus image method for unknown noise removal, Knowl.-Based Syst. 70 (2014) 64–77.
[29] C. Guerra, A. Jurio, H. Bustince, C. Lopez-Molina, Multichannel generalization of the upper-lower edge detector using ordered weighted averaging operators, J. Intell. Fuzzy Syst. 27 (2014) 1433–1443.
[30] G. Beliakov, H. Bustince, D. Paternain, Image reduction using means on discrete product lattice, IEEE Trans. Image Process. 21 (2012) 1070–1083.
[31] D. Paternain, J. Fernandez, H. Bustince, R. Mesiar, G. Beliakov, Construction of image reduction operators using averaging aggregation functions, Fuzzy Sets Syst. 261 (2015) 87–111.