Study of some algebraical properties of adaptive combination rules


Fuzzy Sets and Systems 114 (2000) 391–409

www.elsevier.com/locate/fss

Mourad Oussalah
CEMIF Complex Systems Group, 40 rue du Pelvoux, 91020 Evry, France
Received January 1998; received in revised form July 1998

Abstract

Dubois and Prade [Data Fusion in Robotics and Machine Intelligence, Academic Press, New York, 1992] have proposed an adaptive combination rule that moves gradually from the conjunctive mode to the disjunctive mode as the conflict between sources increases. The authors also proposed [Control Eng. Practice 2 (1994) 812–823] a generalization of the previous rule to more than two sources, based upon restricting the number of reliable sources between an optimistic case and a pessimistic case. In this paper, we are particularly interested in studying these two rules in the light of some appealing algebraical properties. In particular, we study the link of these rules with commutativity, associativity, compromise, convexity, autoduality, ignorance and impossible cases. We also investigate their link with the MICA operators introduced by Yager [Fuzzy Sets and Systems 67 (1994) 129–146]. Finally, we propose some extensions of the rules. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Adaptive combination rule; Conjunctive mode; Disjunctive mode; Algebraical properties; Possibility theory

1. Introduction

Basically, the combination of information in the framework of possibility theory [15] is supported by three different modes:
• Conjunctive mode: It is applied when we are searching for a redundancy between sources, or a common zone. It assumes that the sources are reliable. A weaker assumption would be equal reliability (even if the sources are not fully reliable). This mode is modeled by the t-norm operators, which are a generalization of ordinary set intersection. Let π_i(s), where i stands for the i-th source (i = 1, …, N), be the possibility distribution supporting the information provided by source i. Then the conjunctive combination gives the distribution π_∧: ∀s ∈ S,

π_∧(s) = min_{i=1,…,n} π_i(s).  (1)

0165-0114/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0165-0114(98)00371-6


M. Oussalah / Fuzzy Sets and Systems 114 (2000) 391–409

• Disjunctive mode: It is usually used in the case of differing source reliability, where use of the first combination mode could lead to a loss of information. This mode is modeled by t-conorm operators, i.e. a generalization of the set union operation. Namely, the distribution π_∨ is such that ∀s ∈ S,

π_∨(s) = max_{i=1,…,n} π_i(s).  (2)

• Compromise mode: It is an intermediate case between the two above modes, indicating an average conflict. It is modeled by averaging operators. That is, T(a, b) is a compromise combination between a and b if the following inequality holds:

min(a, b) < T(a, b) < max(a, b).  (3)

Many averaging operators can be found in the literature [5], like mean operators, symmetrical sum operators, etc.

The combination process can also be seen in the light of the conflict between the pieces of information provided by the sources. In fact, the conflict indicates the degree of contradiction between sources. Namely, when the conflict is small, i.e., there is some agreement between sources, the conjunctive mode seems more appropriate. In contrast, when the conflict is larger, the sources disagree, so the disjunctive mode is more appropriate. However, in the case of the conjunctive or compromise mode the resulting distribution is often unnormalized, i.e. sup π < 1; the lack of normalization 1 − sup π is interpreted as a conflict measure. The conflict may also be modeled by the Jaccard index [1] or any other similarity index.

Dubois and Prade [7] first proposed an adaptive rule allowing a gradual transformation from a conjunctive mode to a disjunctive one as the conflict between sources increases. The rule reads as follows:

π(s) = max( min(π_1(s), π_2(s)) / h(π_1, π_2), min(max(π_1(s), π_2(s)), 1 − h(π_1, π_2)) ),  (4)

where h(π_1, π_2) = sup_s min(π_1(s), π_2(s)) represents the conflict evaluation, min(π_1(s), π_2(s))/h(π_1, π_2) represents the renormalization of the conjunctive distribution, and min(max(π_1(s), π_2(s)), 1 − h(π_1, π_2)) restricts the influence of the conflict to the support of the two initial distributions. The rule means that the combination result is supported by the normalized conjunction, given at a level of certainty at least equal to h(π_1, π_2), where the influence of the latter term is restricted to the support of the initial distributions. (In general form [6], π'(s) = max(π(s), 1 − h) represents the qualification "π is certain at least to a degree equal to h".) Another rule, also proposed by the authors, deals with the generalization of the previous rule to more than two sources.
The idea behind their proposed rule is that, instead of assuming "at least one of the two sources is reliable", we consider that there exists a group of k reliable sources, to be combined conjunctively, without knowing how to identify them; the different groups of k elements are then combined disjunctively. The value of k may be situated between a pessimistic value m and an optimistic one n. The resulting rule is given as follows:

π(x) = max( π_(n)(x) / h(n), min(π_(m)(x), 1 − h(n)) ),  (5)

where

h(n) = max{ h(T) : |T| = n },  (6)

h(T) = sup_x min_{i∈T} π_i(x),  (7)


Fig. 1. Example of the combination by the rule (5).

m = sup{ |T| : h(T) = 1 },  (8)

n = sup{ |T| : h(T) > 0 },  (9)

π_(k)(x) = max_{T⊆N, |T|=k} min_{i∈T} π_i(x).  (10)

h(T) indicates the degree of concordance of the sources that belong to a set T. m represents the number of sources which fully agree, and n the number of sources which may be in agreement (justified by an overlap between the source distributions). Notice that in the case N = 2, the rules (5) and (4) coincide. Fig. 1 illustrates an example of the combination of the distributions π_1, π_2 and π_3 by the rule (5). Even though these rules have been successfully applied in practical applications like vision [4], localization of a mobile robot [10] and robotic processing [3], they have not yet been explored from the viewpoint of their theoretical foundations or their algebraical properties. In the following, we are only interested in the second point of view (Fig. 2).
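For concreteness, the machinery behind rule (5) — the subset agreement heights h(T) of Eq. (7), the bounds m and n of Eqs. (8)–(9), and the partial combinations π_(k) of Eq. (10) — can be sketched in Python over distributions discretized on a common finite grid. The function names and the list-based representation are our own illustration, not part of the paper:

```python
from itertools import combinations

def h_T(dists, T):
    """Agreement height h(T) = sup_x min_{i in T} pi_i(x)  (Eq. (7))."""
    return max(min(dists[i][x] for i in T) for x in range(len(dists[0])))

def pi_k(dists, k):
    """pi_(k)(x): max over size-k subsets T of the pointwise min  (Eq. (10))."""
    subsets = list(combinations(range(len(dists)), k))
    return [max(min(dists[i][x] for i in T) for T in subsets)
            for x in range(len(dists[0]))]

def rule5(dists):
    """Adaptive combination of N discretized possibility distributions (Eq. (5))."""
    N = len(dists)
    # m: largest subset size with full agreement, h(T) = 1  (Eq. (8))
    m = max(k for k in range(1, N + 1)
            if any(h_T(dists, T) == 1 for T in combinations(range(N), k)))
    # n: largest subset size with non-zero agreement, h(T) > 0  (Eq. (9))
    n = max(k for k in range(1, N + 1)
            if any(h_T(dists, T) > 0 for T in combinations(range(N), k)))
    h_n = max(h_T(dists, T) for T in combinations(range(N), n))   # Eq. (6)
    p_n, p_m = pi_k(dists, n), pi_k(dists, m)
    return [max(p_n[x] / h_n, min(p_m[x], 1 - h_n))
            for x in range(len(dists[0]))]
```

On three overlapping distributions over a 5-point grid, e.g. `rule5([[1, 1, 0.5, 0, 0], [0, 0.5, 1, 0.5, 0], [0, 0, 0.5, 1, 1]])`, the sketch returns `[0.5, 0.5, 1, 0.5, 0.5]`: the normalized consensus peak, clipped at the 1 − h(n) = 0.5 level elsewhere.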

2. On some algebraical properties of the rules (4) and (5)

2.1. Commutativity and associativity

Proposition 1. The rules (4) and (5) are commutative but not associative.

Proof. The commutativity follows immediately from the fact that the conflict factor h for the rule (4) (or h(n) for the rule (5)) is obtained independently of the order of combination of the distributions, and from the commutativity of the max and min operators. The non-associativity can be seen from the counterexample of Fig. 3, where the result of the combination of π_1 with π_3 is combined with π_2. The resulting distribution is clearly different from the one obtained by combining π_1 with the result of the combination of π_2 with π_3. Since in the case of pairwise combination the rules (4) and (5) coincide, it turns out that both rules are not associative. It can also be noticed that the quasi-associativity property is not fulfilled. This means a lack of modularity when combining many sources; in other words, we cannot use intermediate results in order to obtain a final result.
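The non-associativity can also be exhibited numerically. Below is a minimal sketch of the pairwise rule (4) on a 3-point grid (the helper `rule4` and the example distributions are our own; the h = 0 branch anticipates the continuity convention discussed in Section 2.6):

```python
def rule4(p1, p2):
    """Pairwise adaptive rule (4) on distributions discretized over a common grid."""
    h = max(min(a, b) for a, b in zip(p1, p2))   # conflict height h(pi1, pi2)
    if h == 0:
        # total conflict: the rule degenerates to the disjunction (max)
        return [max(a, b) for a, b in zip(p1, p2)]
    return [max(min(a, b) / h, min(max(a, b), 1 - h)) for a, b in zip(p1, p2)]

# Three distributions on a 3-point grid
p1, p2, p3 = [1, 0.5, 0], [0, 0.5, 1], [0, 1, 0]

left  = rule4(rule4(p1, p2), p3)   # (p1 combined with p2), then with p3
right = rule4(p1, rule4(p2, p3))   # p1 with (p2 combined with p3)
```

Here `left` evaluates to `[0, 1, 0]` while `right` evaluates to `[0.5, 1, 0.5]`, so the two combination orders disagree.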


Fig. 2. Example of the combination with rule (4).

Fig. 3. Illustration of non-associativity.

2.2. Idempotence

We recall that an operator φ is said to be idempotent if the combination of identical entities (a) provides the same entity (a). Namely, if φ is a pairwise operator, we have φ(a, a) = a.

Proposition 2. The rules (4) and (5) are idempotent.

Proof. For the rule (4), if ∀s, π_1(s) = π_2(s), then h = 1, and π(s) = max(π_1(s), π_2(s)) = π_1(s) = π_2(s). In the same manner, for the rule (5), the condition ∀s, π_1(s) = π_2(s) = · · · = π_N(s) leads to m = n = N (the number of distributions). Thus, π_(n) = π_(m) = π_i and h(n) = 1. The result is then trivial.

Consequently, neither rule can satisfy the Archimedean or nilpotence properties. It also follows that the laws of excluded middle and non-contradiction are not fulfilled.

2.3. Link with the compromise mode

Proposition 3. The rules (4) and (5) are bounded from below by the min operator.

Proof. Because the reasoning is quite similar, the proof is restricted to the first rule (4). We have to prove the following (for N = 2):

∀s, min(π_1(s), π_2(s)) ≤ max( min(π_1(s), π_2(s)) / h, min(max(π_1(s), π_2(s)), 1 − h) ).  (11)

One can distinguish two cases for the second member of the inequality (11).

Case 1:

min(π_1(s), π_2(s)) / h > min(max(π_1(s), π_2(s)), 1 − h).  (12)

The inequality (11) is then equivalent to

min(π_1(s), π_2(s)) ≤ min(π_1(s), π_2(s)) / h.  (13)

Since h ≤ 1, Eq. (13) always holds.

Case 2:

min(π_1(s), π_2(s)) / h ≤ min(max(π_1(s), π_2(s)), 1 − h).  (14)

Here also one can distinguish two cases, depending upon the value of the second member.

(a) If

min(max(π_1(s), π_2(s)), 1 − h) = max(π_1(s), π_2(s)),  (15)

then Eq. (11) is equivalent to

min(π_1(s), π_2(s)) ≤ max(π_1(s), π_2(s))  (16)

(this inequality always holds).

(b) If

min(max(π_1(s), π_2(s)), 1 − h) = 1 − h,  (17)

it comes down to proving the following:

min(π_1(s), π_2(s)) ≤ 1 − h.  (18)

From condition (14), we have

min(π_1(s), π_2(s)) / h ≤ 1 − h.  (19)

Thus, min(π_1(s), π_2(s)) ≤ (1 − h)h ≤ 1 − h. This completes the proof.

Proposition 4. The rules (4) and (5) are upper and lower bounded in the following manner:

min(π_1(s) ∧ π_2(s) ∧ · · · ∧ π_N(s), 1 − h) ≤ π(s) ≤ max( (π_1(s) ∧ π_2(s) ∧ · · · ∧ π_N(s)) / h, 1 − h ).  (20)

Proof. Here also, we restrict the proof to the case N = 2; the generalization is quite similar. The result comes immediately from the monotonicity of the max and min operators. In fact, we have min(π_1(s) ∨ π_2(s), 1 − h) ≤ 1 − h. Then

π(s) ≤ max( (π_1(s) ∧ π_2(s)) / h, 1 − h ).  (21)

In the same manner, we have

min(π_1(s) ∨ π_2(s), 1 − h) ≤ max( (π_1(s) ∧ π_2(s)) / h, min(π_1(s) ∨ π_2(s), 1 − h) ) = π(s).  (22)

Since

π_1(s) ∧ π_2(s) ≤ π_1(s) ∨ π_2(s),  (23)

we then have min(π_1(s) ∧ π_2(s), 1 − h) ≤ min(π_1(s) ∨ π_2(s), 1 − h) ≤ π(s).

The importance of the inequality (20) is manifested in the setting of α-certainty qualification and β-possibility qualification [6,12].
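The two-sided bound (20) can be checked numerically for N = 2 with a small self-contained sketch (the discretized distributions and the helper name are our own illustration):

```python
def rule4(p1, p2):
    """Pairwise adaptive rule (4); distributions are lists over a common grid."""
    h = max(min(a, b) for a, b in zip(p1, p2))
    if h == 0:
        return [max(a, b) for a, b in zip(p1, p2)]
    return [max(min(a, b) / h, min(max(a, b), 1 - h)) for a, b in zip(p1, p2)]

p1, p2 = [1, 0.5, 0], [0, 0.5, 1]
h = max(min(a, b) for a, b in zip(p1, p2))       # here h = 0.5
combined = rule4(p1, p2)

for a, b, c in zip(p1, p2, combined):
    lower = min(min(a, b), 1 - h)                # min(pi1 ∧ pi2, 1 - h)
    upper = max(min(a, b) / h, 1 - h)            # max((pi1 ∧ pi2)/h, 1 - h)
    assert lower <= c <= upper                   # inequality (20) holds pointwise
```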


The lower bound means that the conjunction of the possibility distributions is possible at most to a degree equal to 1 − h, while the upper bound means that the normalized conjunction (the normalization is done by dividing the conjunction result by the factor h) is certain at least to the degree 1 − h. Thus, the inequality can be interpreted as follows: the resulting distribution lies between two bounds. The first one claims that it is possible at most to a degree 1 − h that the information of interest is supported by the conjunction of the source distributions. The second one asserts that it is certain to the same degree that the concern of interest is supported by the normalized conjunction.

Proposition 5. Let π_0 be the possibility distribution such that

∀s, π_0(s) = π_(n)(s) / h(n).  (24)

Then the combination of the initial distributions plus the distribution π_0 leads to a compromise-mode combination.

Proof. The distribution π can be rewritten as

π(s) = max(π_0(s), min(π_(m)(s), 1 − h(n))).  (25)

We need to prove the following inequality:

min(π_1(s), …, π_N(s), π_0(s)) ≤ π(s) ≤ max(π_1(s), …, π_N(s), π_0(s)).  (26)

It follows from Proposition 3 that

min(π_1(s), π_2(s), …, π_N(s)) ≤ π(s),  (27)

and, from the monotonicity of the min operator,

min(π_1(s), π_2(s), …, π_N(s), π_0(s)) ≤ min(π_1(s), π_2(s), …, π_N(s)) ≤ π(s).  (28)

On the other hand, we have

min(π_(m)(s), 1 − h(n)) ≤ π_(m)(s) ≤ max(π_1(s), π_2(s), …, π_N(s))  (29)

and

max(π_0(s), min(π_(m)(s), 1 − h(n))) ≤ max(π_0(s), π_(m)(s)) ≤ max(π_1(s), π_2(s), …, π_N(s), π_0(s)).  (30)

2.4. Monotonicity

Proposition 6. The rule (4) is non-decreasing if and only if the conflict index h is constant.

Proof. It suffices to prove the following implication:

∀s, π'_1(s) ≤ π_1(s) & π'_2(s) ≤ π_2(s) & sup_s min(π'_1(s), π'_2(s)) = sup_s min(π_1(s), π_2(s))
⇒ π'(s) ≤ π(s),  (31)

where π' and π are, respectively, the results of the combination of π'_1 with π'_2 and of π_1 with π_2. Let h be such that

h = sup_s min(π'_1(s), π'_2(s)) = sup_s min(π_1(s), π_2(s)).  (32)

We have

π'_1(s) ≤ π_1(s) and π'_2(s) ≤ π_2(s)
⇒ min(π'_1(s), π'_2(s)) ≤ min(π_1(s), π_2(s)) and max(π'_1(s), π'_2(s)) ≤ max(π_1(s), π_2(s)).  (33)

This also implies

min(π'_1(s), π'_2(s)) / h ≤ min(π_1(s), π_2(s)) / h,  (34)

min(max(π'_1(s), π'_2(s)), 1 − h) ≤ min(max(π_1(s), π_2(s)), 1 − h).  (35)

Thus,

max( min(π'_1(s), π'_2(s)) / h, min(max(π'_1(s), π'_2(s)), 1 − h) )
≤ max( min(π_1(s), π_2(s)) / h, min(max(π_1(s), π_2(s)), 1 − h) ),  (36)

i.e.

π'(s) ≤ π(s).  (37)

From a geometrical viewpoint, the monotonicity condition comes down to choosing the possibility distributions π'_1 and π'_2 inside the supports of the distributions π_1 and π_2 in such a way that their concordance zone, or equivalently their intersection, remains unchanged. However, this result is not sufficient to ensure the monotonicity of the rule (5).

Proposition 7 (Monotonicity of rule (5)). Let (π_1, π_2, …, π_N) and (π'_1, π'_2, …, π'_N) be two sets of possibility distributions such that

∀s, π_1(s) ≥ π'_1(s) & π_2(s) ≥ π'_2(s) & · · · & π_N(s) ≥ π'_N(s).  (38)

Then the monotonicity (non-decreasing) condition ∀s, π(s) ≥ π'(s) is fulfilled if and only if the following three conditions are satisfied:
(i) m' = m;
(ii) n' = n;
(iii) h'(n') = h(n).
We call these the adaptive monotonicity conditions (AMC).

Proof. We can illustrate the proof through the representation of Fig. 4. In order to achieve condition (38) for the distributions plotted in Fig. 4, we have to choose three normalized distributions, each included in one of the distributions of Fig. 4. From the geometrical viewpoint, in the same manner as for the previous rule, it is clear that we must have h'(n') = h(n). In fact, let us assume that h'(n') < h(n) (the other hypothesis, h'(n') > h(n), is in contradiction with Eq. (38)); the peak of the resulting distribution π' must be the same as its counterpart of the distribution π. This entails that the consensus area π_(n') is included in the consensus area of the first distribution π_(n), and both zones are symmetrical with respect to the central axis of the π_(n) area. However, under these conditions, the level 1 − h'(n') is increased with respect to 1 − h(n), which means that there is some element s for which π'(s) > π(s). This clearly leads to the violation of the monotonicity condition. The condition n' = n is also necessary. To see it, assume that n' < n (the condition n' > n cannot be fulfilled because of the constraint (38)); this means to outstrip


the extreme points corresponding to the distribution π'_(n')/h, overflowing outside the zone assigned to π_(n)/h. Consequently, n' = n. For the same reasons, we have m' = m: if m' < m (m' > m is impossible), outstripping the extreme points corresponding to π'_(m') results in overflowing with respect to π_(m).

Fig. 4. Example of combination with rule (5).

2.5. Reaction with respect to the "impossible case" and the "complete ignorance" case

Proposition 8. Adding to the initial distributions a new distribution corresponding to the "impossible case" or to the "total ignorance" case does not modify the result of the combination by means of rules (4) or (5). Both rules fulfill the zero-preservation property. Rule (4) preserves maximal plausibility, while rule (5) does not.

Proof. We first prove the result for the rule (4). Let π_1 be the distribution corresponding to the "impossible case", namely ∀s, π_1(s) = 0. The conflict factor h is then zero, so the rule reduces to the max combination, i.e.

π(s) = max( min(π_1(s), π_2(s)) / h, min(max(π_1(s), π_2(s)), 1 − h) ) = max(π_1(s), π_2(s)) = π_2(s).  (40)

In the same manner, if π_1 stands for the complete ignorance case, i.e. ∀s, π_1(s) = 1, we have h = 1. The rule comes down to

π(s) = min(π_1(s), π_2(s)) = min(1, π_2(s)) = π_2(s).  (41)
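For the rule (4), the neutrality of the two extreme distributions can be checked directly; the sketch below (our own discretization and helper name) verifies Eqs. (40) and (41):

```python
def rule4(p1, p2):
    """Pairwise adaptive rule (4) on a common finite grid."""
    h = max(min(a, b) for a, b in zip(p1, p2))
    if h == 0:                       # total conflict: disjunctive (max) mode
        return [max(a, b) for a, b in zip(p1, p2)]
    return [max(min(a, b) / h, min(max(a, b), 1 - h)) for a, b in zip(p1, p2)]

p2 = [0, 0.5, 1, 0.5, 0]
impossible = [0] * 5                 # "impossible case":  pi_1(s) = 0 everywhere
ignorance  = [1] * 5                 # "total ignorance":  pi_1(s) = 1 everywhere

assert rule4(impossible, p2) == p2   # h = 0: rule degenerates to max, returns p2
assert rule4(ignorance,  p2) == p2   # h = 1: rule degenerates to min, returns p2
```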

For the rule (5): let m and n be the parameters corresponding to the combination of π_1, π_2, …, π_N via the rule (5). If we add a new distribution ∀s, π_0(s) = 0, then the combination of π_1, π_2, …, π_N and π_0 via the rule (5) does not change the values of the parameters m and n. It also follows that the distributions π_(m) and π_(n) remain unchanged (π'_(m) = π_(m) and π'_(n) = π_(n)). Also, we have h'(n) = h(n). Thus, the resulting distribution is the same as the combination of π_1, π_2, …, π_N. If we add a distribution ∀s, π_0(s) = 1, we have m' = m + 1 and n' = n + 1. However, π'_(m') = π_(m) and π'_(n') = π_(n). This comes from the fact that the additional distribution π_0 is neutralized by the min operator that appears in both π'_(m') and π'_(n'). For the same reasons, we have h'(n') = h(n). Thus, the resulting distribution remains unchanged.

We recall that the zero-preservation property was introduced by Cooke [2] in the probabilistic setting. It asserts that, if an element w_o is considered as impossible by all the sources, then it is also impossible for the combination result (a strong interpretation). A weaker interpretation consists in considering the element w_o as


impossible by the combination result if it is considered as impossible by only one of the sources. Thus, for the strong interpretation both rules fulfill zero preservation. However, for the weaker interpretation, neither rule fulfills the property. Namely, when the rules are close to the disjunctive mode, all elements (or some of them in the case of the rule (5)) outside the support of one distribution (considered as impossible by this distribution) are taken into account by the resulting distribution.

Another appealing property is maximal plausibility [4]. It asserts that, if an element w_o is considered as possible by all sources, then it is also possible for the combination result (weak interpretation). A strong interpretation consists in considering w_o as possible even if it is possible for only one source. In terms of the weak interpretation, both rules fulfill the maximal plausibility property, since the common area of the source distributions is always taken into account by the resulting distribution (see, for instance, Proposition 3). In contrast, in terms of the strong interpretation, only rule (4) fulfills maximal plausibility, since in all cases the resulting distribution is restricted by the support of the source distributions (which is the smallest restriction on the universe-of-discourse axis). However, in the case of the rule (5) the restriction is governed by π_(m). This is why some information may be forbidden by the resulting distribution, and sometimes a complete distribution is omitted. Thus, only in the case m = N is the strong interpretation preserved.

2.6. Continuity

Proposition 9. The rule (4) is continuous for non-zero h. In order to be continuous everywhere, rule (4) should be rewritten as

π(s) = max( min(π_1(s), π_2(s)) / h, min(max(π_1(s), π_2(s)), 1 − h) )   if h ≠ 0,
π(s) = max(π_1(s), π_2(s))                                                if h = 0.  (42)

Proof.
The proof is trivial, since the max and min operators are continuous, and the rule remains continuous (and differentiable) wherever the denominator is non-zero.

Proposition 10. The rule (5) is everywhere continuous.

Proof. Obviously, a problem could only come from the assignment h = 0. However, even in this situation we do not get the same result as for the previous rule. Let us assume a total conflict, i.e. no two of the N distributions intersect. It follows that m = n = 1, and thus h(n) = 1 (we of course assume that all the distributions are normalized). Then π_(n)(s) = π_(m)(s) = max(π_1(s), π_2(s), …, π_N(s)). Consequently,

π(s) = max( max(π_1(s), …, π_N(s))/1, min(1 − 1, max(π_1(s), …, π_N(s))) ) = max(π_1(s), …, π_N(s)).  (43)

In what follows, we denote by F(π_1, π_2, …, π_N) the combination of the distributions π_1, π_2, …, π_N through the rule (5) (which is the same as the rule (4) for a pairwise combination, i.e. N = 2).

Proposition 11. If for some value s we have

π_1(s) ≤ λ,  (44)


then

F(π_1, π_2)(s) ≤ max( min(1, λ/h), 1 − h ).  (45)

Proof. Since π_1(s) ≤ λ, using the monotonicity of the max and min operators we have

min(π_1(s), π_2(s)) / h ≤ min(λ, π_2(s)) / h ≤ λ / h.  (46)

On the other hand, we have

min(max(π_1(s), π_2(s)), 1 − h) ≤ 1 − h.  (47)

Thus, from Eqs. (46) and (47),

max( min(π_1(s), π_2(s)) / h, min(max(π_1(s), π_2(s)), 1 − h) ) ≤ max( λ/h, 1 − h ).  (48)

However, the result is always upper bounded by one, which implies the result (45). This result shows that if, for some values of s, there is an upper bound on one of the initial distributions, then the result of the combination can also be bounded independently of the value of the other distribution (for these elements s). The proposition also shows some robustness of the resulting distribution, which depends only on the conflict factor.

2.7. Autoduality

We recall that the autoduality property means that the combination of some entities is the same as the inverse of the combination of the inverses of those entities. For an operator φ, autoduality means φ(a_1, a_2, …, a_n) = 1 − φ(1 − a_1, 1 − a_2, …, 1 − a_n), where the inverse operation is expressed for an element a by 1 − a. We also assume that the inverse distributions 1 − π_i are normalized, i.e. sup_s (1 − π_i(s)) = 1, or equivalently inf_s π_i(s) = 0, for i = 1, …, N.

Proposition 12. (i) The rule (4) is autodual for h = 0.
(ii) The rules (5) and (4) verify

F(1−π_1, 1−π_2, …, 1−π_N)(s) + max(π_1(s), π_2(s), …, π_N(s)) = 1.  (49)

Proof. (i) The result follows from Eq. (49). For N = 2, Eq. (49) can be written as

F(1−π_1, 1−π_2)(s) + max(π_1(s), π_2(s)) = 1.  (50)

For h = 0, it is known that the rule (4) reduces to the max combination, i.e.

F(π_1, π_2)(s) = max(π_1(s), π_2(s)).  (51)

Thus, Eq. (50) can be written as

F(π_1, π_2)(s) = 1 − F(1−π_1, 1−π_2)(s),  (52)

which by definition implies the autoduality of the operator F, or equivalently of the rule (4).

(ii) In order to prove the equality (49), one can notice that the combination of the distributions 1 − π_1, 1 − π_2, …, 1 − π_N leads to m = n = N (since outside the supports of the distributions π_i (i = 1, …, N), all the inverse distributions are equal to one). Thus,

F(1−π_1, 1−π_2, …, 1−π_N)(s) = min(1 − π_1(s), 1 − π_2(s), …, 1 − π_N(s)) = 1 − max(π_1(s), π_2(s), …, π_N(s)).  (53)
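The identity (49) can be checked numerically for N = 2 with rule (4), provided the two supports do not cover the whole universe of discourse (so that the complements fully agree somewhere and their conflict height is 1). The grid and helper below are our own illustration:

```python
def rule4(p1, p2):
    """Pairwise adaptive rule (4) on a common finite grid."""
    h = max(min(a, b) for a, b in zip(p1, p2))
    if h == 0:
        return [max(a, b) for a, b in zip(p1, p2)]
    return [max(min(a, b) / h, min(max(a, b), 1 - h)) for a, b in zip(p1, p2)]

# Supports of p1 and p2 leave the middle of the universe free,
# so the complements 1 - pi_i are both equal to 1 there.
p1 = [1, 0.5, 0, 0, 0]
p2 = [0, 0, 0, 0.5, 1]

c = rule4([1 - a for a in p1], [1 - b for b in p2])   # F(1-pi1, 1-pi2)

# Identity (49): F(1-pi1, 1-pi2)(s) + max(pi1(s), pi2(s)) = 1 at every s
assert all(abs(ci + max(a, b) - 1) < 1e-12 for ci, a, b in zip(c, p1, p2))
```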


Fig. 5. Non-convex result.

This result means that the combination of the complements of the distributions π_1, π_2, …, π_N via the rule (5) (or the rule (4) for N = 2) is completely independent of the conflict factor h.

2.8. Convexity

Proposition 13. The distribution resulting from the rule (4) is not necessarily convex. In order for it to be convex, the following condition has to be fulfilled:

1 − h ≤ min( sup_s min( (π_1(s) ∧ π_2(s))/h, π_1(s) ), sup_s min( (π_1(s) ∧ π_2(s))/h, π_2(s) ) ).  (54)

Proof. Note that convexity is defined in the sense of Zadeh [14] for a fuzzy set, which means that all its α-cut sets are convex. Fig. 5 shows an example of a non-convex result. This lack of convexity is due to the factor min(max(π_1, π_2), 1 − h), which leads to an abrupt switch from the 1 − h level to one of the initial distributions. This comes from the fact that the set-union operation on convex fuzzy sets does not necessarily lead to a convex result (in contrast, the set-intersection of convex fuzzy sets always provides a convex set). The condition (54) is easily seen from the plot of Fig. 5: it asserts that the 1 − h level is situated below the two points corresponding to the intersection of min(π_1(s), π_2(s))/h with each distribution π_1 and π_2, i.e. 1 − h ≤ min(α, β).

We are now concerned with reformulating the rule (4) in order to preserve the convexity property; we propose three methods to do this.

Solution 1: It consists of tolerating the influence of the conflict factor h outside the three distributions π_1, π_2 and (π_1 ∧ π_2)/h, while it remains inside their support. We can rewrite the rule as follows:

π(s) = max( (π_1(s) ∧ π_2(s))/h, 1 − h )   if s ∈ (π_1 ∨ π_2)_{1−h},
π(s) = max(π_1(s), π_2(s))                  if s ∉ (π_1 ∨ π_2)_{1−h}.  (55)

It means that for an element s belonging to the α-cut at level 1 − h, the result is supported by the normalized conjunction, taken as certain at least to the degree 1 − h; outside this α-cut, the result is provided by the disjunction of the two initial distributions (where the conjunction and the disjunction are in the sense of the min and max operators). It can also be noticed that this solution comes down to the rule (4) when the condition (54) is satisfied. This solution is illustrated by the plot of Fig. 6a.

Fig. 6. (a) Solution 1. (b) Solution 2. (c) Solution 3.

Solution 2: Another solution consists in modifying the level 1 − h in the rule (4) to a level corresponding to the intersection of the normalized conjunction with one of the initial distributions. The rule is rewritten as follows:

π(s) = max( (π_1(s) ∧ π_2(s))/h, min(π_1(s) ∨ π_2(s), mo) ),  (56)

mo = min( sup_s min( (π_1(s) ∧ π_2(s))/h, π_1(s) ), sup_s min( (π_1(s) ∧ π_2(s))/h, π_2(s) ) ).  (57)

The above formulation is applied only if the condition (54) is not fulfilled. It can be interpreted as the normalized conjunction distribution being certain to a level equal to the complement of the smallest non-zero intersection of this distribution with the initial ones (i.e., the level β in Fig. 5). This solution is illustrated in Fig. 6b.

Solution 3: In contrast to the previous formulation, here we take into account both levels mo and no (in Fig. 6), corresponding to the greatest non-zero intersections of the normalized conjunction with each source distribution. The rule is formulated as follows:

π(s) = max( (π_1(s) ∧ π_2(s))/h, min(π_1(s) ∨ π_2(s), mo) )   if s ∈ (π_1)_{mo},  (58)

π(s) = max( (π_1(s) ∧ π_2(s))/h, min(π_1(s) ∨ π_2(s), no) )   otherwise,  (59)

mo = sup_s min( (π_1(s) ∧ π_2(s))/h, π_1(s) ),  (60)

no = sup_s min( (π_1(s) ∧ π_2(s))/h, π_2(s) ).  (61)

It assumes that the degree of certainty with respect to the normalized conjunction is situated between two levels mo and no. The result of this solution is plotted in Fig. 6c.
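As an illustration, Solution 1 (Eq. (55)) is straightforward to sketch on a discretized grid. The helper name and the cut convention max(π_1, π_2)(s) ≥ 1 − h for membership in the (1 − h)-cut are our own choices:

```python
def rule4_convex(p1, p2):
    """Convexity-preserving variant of rule (4) from Solution 1 (Eq. (55)):
    inside the (1-h)-cut of the disjunction use max(conjunction/h, 1-h),
    outside it fall back to the plain disjunction."""
    h = max(min(a, b) for a, b in zip(p1, p2))
    if h == 0:                       # no overlap at all: purely disjunctive mode
        return [max(a, b) for a, b in zip(p1, p2)]
    out = []
    for a, b in zip(p1, p2):
        if max(a, b) >= 1 - h:       # s belongs to the (1-h)-cut of p1 ∨ p2
            out.append(max(min(a, b) / h, 1 - h))
        else:                        # outside the cut: plain disjunction
            out.append(max(a, b))
    return out
```

On a 5-point grid with h = 0.25, e.g. `rule4_convex([1, 0.75, 0.25, 0, 0], [0, 0.25, 0.75, 1, 0.2])`, the sketch returns `[0.75, 1, 1, 0.75, 0.2]`, which is unimodal on the grid.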


In the same manner, one may extend the above condition to the case of the rule (5), by requiring that all the intersection points between π_(n)(s)/h(n) and π_(m)(s) be higher than the 1 − h(n) level. Namely, with

A = { s : π_(n)(s)/h(n) = π_(m)(s) and π_(m)(s) > 0 },

we require

1 − h(n) ≤ inf_{s∈A} π_(m)(s).  (54a)

In Eq. (54a), the set A contains all elements of the universe of discourse where the distributions π_(n)(s)/h(n) and π_(m)(s) take the same non-zero value (intersection points). The proposed convexity-preserving rules can be extended in the same manner.

2.9. Link with MICA operators

Definition (Yager [11]) (MICA operators). Let B_[0,1] be a set of collections of elements without order, called bags, taking their values in [0, 1]; for example ⟨a_1, a_2, a_6, a_2, a_10⟩. If A and B belong to B_[0,1], we say that A ≥ B if A and B have the same cardinality and we can order their elements such that ∀i, a_i ≥ b_i. A ⊕ B contains the elements of A and B; for example, if A = ⟨a_1, a_4⟩ and B = ⟨b_3, b_1, b_1⟩ then A ⊕ B = ⟨a_1, a_4, b_3, b_1, b_1⟩. A mapping M from B_[0,1] into [0, 1] is a MICA operator if and only if:
(i) A ≥ B ⇒ M(A) ≥ M(B);
(ii) for every A ∈ B_[0,1], there exists an element g, called the identity element, such that M(A) = M(A ⊕ ⟨g⟩);
(iii) M(A) is independent of the order of the elements in A.
One can distinguish operators having a fixed identity element, called fixed-identity MICA (FIMICA), and those whose identity element is a variable depending upon the result of the mapping M, called SIMICA (self-identity MICA), such that M(A) = M(A ⊕ M(A)).

Proposition 14. The rule (5) fulfills the following property (where F denotes the combination via the rule (5)):

∀s, F(π_1(s), π_2(s), …, π_p(s), F(π_1(s), π_2(s), …, π_p(s))) = F(π_1(s), π_2(s), …, π_p(s)).  (62)

Proof. Let m and n be the parameters corresponding to the combination of the p distributions by means of the rule (5), which corresponds to the second member (A2) of the equality (62). Let m' and n' be their counterparts corresponding to the first member (A1) of Eq. (62). Let us denote by π' and π the possibility distributions resulting from A1 and A2, respectively. It is easy to see that n' = n + 1: if there are n distributions having a non-zero intersection, then adding the resulting distribution of A2, which necessarily contains the consensus zone, increments by one the number of distributions having a non-zero intersection. Also, this consensus remains unchanged, i.e. ∀s, π_(n)(s) = π'_(n')(s). This leads to h'(n') = h(n). For m', it is easy to see that

∀s, π'_(m')(s) = min(π(s), π_(m)(s)).  (63)


This result, even if it seems trivial by construction, means that, since the initial distributions remain unchanged, the distribution π'_(m') differs from the previous result π_(m) only where the latter is in contradiction with the information supported by the final distribution π. Thus, we have

∀s, π'(s) = max( π'_(n')(s)/h', min(π'_(m')(s), 1 − h') )  (64)
          = max( π_(n)(s)/h, min(min(π_(m)(s), π(s)), 1 − h) )  (65)
          = max( π_(n)(s)/h, min(min(π_(m)(s), 1 − h), π(s)) )  (66)
          = min( max( π_(n)(s)/h, min(π_(m)(s), 1 − h) ), max( π_(n)(s)/h, π(s) ) )  (67)
          = min(π(s), π(s)) = π(s).  (68)

The importance of this result is manifested in a combination process where the output is re-injected into the system as an additional input: the result of the combination remains unchanged, even if the operation is repeated many times. It is noticed that this result does not hold for rule (4).

Proposition 15. Let P_[0,1] be a set of collections (bags) whose elements are possibility distributions (π_1, π_2, …, π_N). If we add to the definition of monotonicity the constraints AMC, the resulting distribution π is a self-identity monotonic identity commutative aggregation (SIMICA) operator.

Proof. We refer to the previous definition of a MICA operator, where the elements are possibility distributions and the definition of monotonicity is enlarged to encompass the constraints AMC. Let U and V be two elements of P_[0,1], and let M(U) be the result of combining the distributions contained in the collection U by means of the rule (5). We have:
(i) if U ≥ V and AMC(U, V), then M(U) ≥ M(V), from Proposition 7;
(ii) for every U, M(U) = M(U ⊕ M(U)), from Proposition 14;
(iii) M(U) is independent of the order in which the elements are combined, since the rule (5) is commutative.
This entails by definition that the resulting distribution acts as a SIMICA operator.

3. On some other formulations of rule (5)

We notice that the proposed rules remain in the same philosophy as Dubois' rule; we only emphasize other formulations of the m and n parameters. In other words, the new rules deal with the generalization of rule (4) to more than two sources.

3.1. Relaxation of a constraint on m

We preserve exactly the same notation and semantics as Dubois' rule. Another rational manner to extend rule (4) is to take the certainty level h(n) (or, equivalently, the conflict factor 1 − h(n)) as the greatest

M. Oussalah / Fuzzy Sets and Systems 114 (2000) 391–409

405

Fig. 7. Combination by relaxing m constraint.

non-zero intersection of the largest number of sources. Moreover, we assume that the in uence of the con ict factors being limited by the disjunction of all the sources. In other words, it comes down to relax the m constraint such that m is always equal to one. Consequently, the rule could be rewritten as follows:   (69) h = sup (n) (s) ; s



(s) = max

 (n) (s) ; min(1 (s) ∨ 2 (s) ∨ · · · ∨ N (s); 1 − h) : h

(70)

It is also obvious that the resulting distribution is less specific than the distribution induced by rule (5). This comes from the inequality: for all s,

    \pi^{(m)}(s) \leq \pi_1(s) \vee \pi_2(s) \vee \cdots \vee \pi_N(s).    (71)
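As a sketch (with hypothetical helper names and discrete toy distributions, not taken from the paper), the relaxed rule (69)–(70) keeps the conjunctive part π^(n) but falls back on the plain disjunction of all sources:

```python
from itertools import combinations

def subset_conjunction(dists, k):
    """Pointwise max, over all k-subsets of sources, of the pointwise min."""
    return [max(min(dists[i][s] for i in idx)
                for idx in combinations(range(len(dists)), k))
            for s in range(len(dists[0]))]

def combine_relaxed_m(dists):
    """Rule (69)-(70): the m constraint is relaxed to m = 1, so the
    disjunctive fallback is the union of all the sources."""
    N = len(dists)
    n = max(k for k in range(1, N + 1)
            if max(subset_conjunction(dists, k)) > 0)
    pi_n = subset_conjunction(dists, n)
    h = max(pi_n)                                        # eq. (69)
    union = [max(d[s] for d in dists) for s in range(len(dists[0]))]
    return [max(pi_n[s] / h, min(union[s], 1 - h))       # eq. (70)
            for s in range(len(pi_n))]

p1 = [1.0, 0.8, 0.4, 0.0, 0.0, 0.0]
p2 = [0.6, 1.0, 0.7, 0.2, 0.0, 0.0]
p3 = [0.0, 0.3, 0.9, 1.0, 0.5, 0.0]
pi_relaxed = combine_relaxed_m([p1, p2, p3])
```

Since the union dominates every π^(m), the result is never more specific than that of rule (5), consistent with inequality (71).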

An example of such a combination is given in Fig. 7.

3.2. Restricting the conjunctive combination

In this case, we emphasize the greatest intersection (denoted "h") which may exist between at least two distributions. In other words, if there exists a non-zero intersection between two cores pertaining to two initial distributions, then h takes the value one. Let h′ be the h(n) value, i.e., the greatest non-zero intersection of the largest number of sources. The rule can be rewritten as follows:

    \pi^{(h)}(s) = \max_{\substack{I \subseteq N \\ \sup_x \min_{i \in I} \pi_i(x) \geq h}} \ \min_{i \in I} \pi_i(s),    (72)

    \pi(s) = \max\Big(\frac{\pi^{(h)}(s)}{h},\ \min\big(\pi_1(s) \vee \cdots \vee \pi_N(s),\, 1-h'\big)\Big).    (73)

\pi^{(h)} designates the disjunction of the conjunctions of all source subsets whose greatest intersection is at least equal to h, while the certainty level h′ is restricted by the whole support of the sources. Further, this allows the renormalization step to be omitted as soon as at least two sources fully agree: once the condition m > 1 is fulfilled, there is no need for renormalization. In fact, if we have, for instance, m = 2, then h = 1 and \pi^{(h)} is a normalized distribution. One should notice that a weaker version consists in restricting h′ to the \pi^{(m)} distribution. Fig. 8 gives an example of a combination with the rule (73).
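A sketch of the rule (72)–(73), again on discrete toy distributions with helper names of our own choosing; it assumes that at least two sources overlap, so that h > 0:

```python
from itertools import combinations

def conj(dists, idx):
    """Pointwise min (conjunction) of the selected sources."""
    return [min(dists[i][s] for i in idx) for s in range(len(dists[0]))]

def combine_restricted(dists):
    """Rule (72)-(73): h is the greatest pairwise intersection height;
    pi^(h) is the disjunction of the conjunctions of every subset of
    size >= 2 whose intersection height reaches at least h."""
    N, pts = len(dists), range(len(dists[0]))
    h = max(max(conj(dists, idx)) for idx in combinations(range(N), 2))
    # h' = h(n): intersection height of the largest consistent subset
    n = max(k for k in range(1, N + 1)
            if any(max(conj(dists, idx)) > 0
                   for idx in combinations(range(N), k)))
    h_prime = max(max(conj(dists, idx)) for idx in combinations(range(N), n))
    eligible = [conj(dists, idx)
                for k in range(2, N + 1)
                for idx in combinations(range(N), k)
                if max(conj(dists, idx)) >= h]           # eq. (72)
    pi_h = [max(c[s] for c in eligible) for s in pts]
    union = [max(d[s] for d in dists) for s in pts]
    return [max(pi_h[s] / h, min(union[s], 1 - h_prime))  # eq. (73)
            for s in pts]

p1 = [1.0, 0.8, 0.4, 0.0, 0.0, 0.0]
p2 = [0.6, 1.0, 0.7, 0.2, 0.0, 0.0]
p3 = [0.0, 0.3, 0.9, 1.0, 0.5, 0.0]
pi_restricted = combine_restricted([p1, p2, p3])
```

Here the best pair gives h = 0.8, so only that pair's conjunction enters π^(h); the division by h then normalizes it without a separate renormalization step.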


Fig. 8. Combination by restricting the conjunction.

3.3. Choice of m and n as variables

In this case, we consider that m and n are not crisp values; the idea is to tolerate some errors on m and n by supposing that they are approximate values. Namely, assume that m and n are two variables that we have to characterize, and let m_o and n_o be the values of m and n obtained by applying the rule (5), i.e., the largest numbers of sources having, respectively, one and a non-zero value as the greatest intersection. We propose to take into account the values around m_o and n_o: for instance, to obtain the m value, we take into account the values m_o − 1, m_o and m_o + 1, where the greatest importance is allowed to the central term. The problem comes down to aggregating these three values into a single one or, in other words, to modifying π^(m) and π^(n) so that they take into account the triplets (m_o − 1, m_o, m_o + 1) and (n_o − 1, n_o, n_o + 1). Two solutions [9,10] have already been proposed, where π^(m) and π^(n) in the rule (5) are replaced, respectively, by π′^(m) and π′^(n). The first one uses Yager's OWA (ordered weighted averaging) operators [11] as a setting of aggregation. In the second one, a similarity index is used.

Approach 1 (OWA operators). For instance, for the m parameter, we take into account the distributions π^(m_o+1), π^(m_o), π^(m_o−1), and their aggregation is carried out using OWA operators. We recall that the aggregation F(a_1, a_2, …, a_n) of elements a_i is defined with respect to some weighting vector W such that

    \forall i,\ 0 \leq w_i \leq 1 \quad \text{and} \quad \sum_{i=1}^{n} w_i = 1,    (74)

and the aggregation is given by

    F(a_1, a_2, \ldots, a_n) = \sum_{i=1}^{n} w_i b_i,    (75)

where b_i is the ith greatest element of the family (a_1, a_2, …, a_n). For our case, the weight vector is chosen such that the central term is favored and the same importance is accorded to the extreme terms:

    W = (0.25,\ 0.5,\ 0.25)^T,

i.e., the aggregation becomes F(π^(m_o+1), π^(m_o), π^(m_o−1)); the n parameter is processed in the same manner.


Fig. 9. Resulting distribution for m and n taken as variables.

Approach 2 (Similarity index). Another solution consists in building the result by summing the pairwise comparisons between distributions (for m: π^(m_o) with π^(m_o−1), and π^(m_o) with π^(m_o+1)), where the comparison for each pair is carried out using some similarity index. Again, the aggregation is weighted such that the central term is always favored. The following result was proposed:

    \pi'^{(m)} = \pi^{(m_o-1)} \cdot SM(\pi^{(m_o)}, \pi^{(m_o-1)}) \cdot 0.3 + \pi^{(m_o+1)} \cdot SM(\pi^{(m_o)}, \pi^{(m_o+1)}) \cdot 0.3
               + \pi^{(m_o)} \cdot \big(1 - 0.3 \cdot SM(\pi^{(m_o)}, \pi^{(m_o-1)}) - 0.3 \cdot SM(\pi^{(m_o)}, \pi^{(m_o+1)})\big).    (76)

We use a slightly modified version of the Kaufmann index, which takes into account the separation between the distribution centers. It is given, in the general case for two fuzzy sets A and B, using the α-cut representation, by

    SM(A, B) = \frac{\lambda}{n+1} \sum_{k=0}^{n} E(A_{\alpha_k}, B_{\alpha_k}) + (1-\lambda) \cdot E(A_1, B_1),    (77)

where λ is a parameter that models the importance given to the separation between the centers relative to that given to the overlapping. Fig. 9 provides an example of such a combination, where one can notice that the farthest distribution is lightly taken into account, while it is completely ignored when only the rule (5) is used. This kind of combination is interesting if we want to avoid a forgetting phenomenon, where no information should be lost except when one distribution is quite far from the others. In the latter case, the separation can be modeled by an appropriate choice of the λ parameter, which assigns a zero value to the similarity index when one distribution exceeds some threshold. One may, for example, reformulate the similarity index such that SM(A, B) remains unchanged if

    \max(A_1^1, A_1^2, B_1^1, B_1^2) - \min(A_1^1, A_1^2, B_1^1, B_1^2) < (A_1^2 - A_1^1) + (B_1^2 - B_1^1) + \varepsilon,

while it takes the value zero otherwise, where A_1^1 and A_1^2 stand for the two extreme points of the α-cut representation (with α = 1) of a fuzzy set A, and ε is the threshold beyond which the distribution is not taken into consideration. However, the obtained distribution is always less specific than the one obtained from the rule (5).
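The weighted sum (76) can be sketched as below. Since the index (77) depends on a distance E between α-cuts that is not fully specified in this excerpt, the similarity scores are passed in as plain numbers; the function name and sample values are ours:

```python
def similarity_merge(pi_prev, pi_c, pi_next, sm_prev, sm_next, w=0.3):
    """Eq. (76): the two neighbours of pi^(mo) are weighted by their
    similarity to it; pi^(mo) takes the complementary weight, so the
    three weights sum to one and the result is a pointwise mixture."""
    w_prev, w_next = w * sm_prev, w * sm_next
    w_c = 1.0 - w_prev - w_next
    return [a * w_prev + b * w_next + c * w_c
            for a, c, b in zip(pi_prev, pi_c, pi_next)]

# toy distributions and similarity scores (hypothetical values)
pi_prev = [1.0, 0.5, 0.0]
pi_c    = [0.8, 0.9, 0.3]
pi_next = [0.2, 0.4, 0.6]
merged = similarity_merge(pi_prev, pi_c, pi_next, sm_prev=0.8, sm_next=0.2)
```

A distribution judged completely dissimilar (SM = 0) drops out of the sum entirely, which is exactly the thresholding behavior described above.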


Table 1
Summary of the properties corresponding to the rules (4) and (5)

Property                                    Rule (5)                                            Rule (4)
---------------------------------------------------------------------------------------------------------------------------------
Commutativity                               Yes                                                 Yes
Associativity                               No                                                  No
Quasi-associativity                         Yes                                                 No
Idempotence                                 Yes                                                 Yes
Zero preservation                           Yes                                                 Yes
Preservation of the maximal plausibility    No                                                  Yes
Nilpotence                                  No                                                  No
Compromise                                  Yes, if we add \pi^{(n)}(s)/h^{(n)}                 Yes, if we add \min(\pi_1(s), \pi_2(s))/h
Increasing monotony                         Yes, if the conditions AMC are fulfilled            Yes, if h remains unchanged
Adding the impossible case                  No effect on the result                             No effect on the result
Adding the total ignorance case             No effect on the result                             No effect on the result
Continuity                                  Continuous everywhere                               Continuous if h is non-zero
Autoduality                                 Yes, for m = n = 1                                  Yes, for h = 0
Convexity                                   Yes, if condition (54a) is satisfied                Yes, if condition (54) is satisfied
MICA                                        Yes, if the constraint AMC is added to monotony     No

4. Conclusion and summary

In this paper, we have investigated some appealing properties of the two adaptive combination rules introduced by Dubois and Prade in the context of the data fusion process. We have particularly emphasized the following questions:
• Is it possible to subdivide the combination into a set of sub-results? The response is no, because of the lack of associativity or quasi-associativity.
• Can the rules (4) and (5) act as a compromise combination mode? The response is yes, if we consider an additional possibility distribution corresponding to the normalized conjunction.
• How does the combination result behave if we replace the initial inputs by their complements? The response, which surprisingly is completely independent of the conflict factor, is the complement of the max combination.
• Is the result of the combination monotone? The response is affirmative under some additional conditions, corresponding to a constant conflict index h for the rule (4) and to AMC for the rule (5).
• Does the resulting distribution preserve the convexity property? The response is usually no; however, three reformulations of the rule (4) are given such that convexity is preserved.
• Is there some link between these rules and the MICA class of operators introduced by Yager? The response is yes for the rule (5), under the condition of adding the AMC condition to Yager's definition of the monotony of MICA operators. The resulting class is a particular SIMICA operator.
Table 1 summarizes these properties. Finally, we have proposed some extensions of the rule (5) based upon new concepts for the generalization step. Particularly, we consider the cases of relaxing the constraint on m, of restricting the conjunctive combination so as to avoid the renormalization step as soon as possible, and of quantifying the statement "the values of m and n are approximately m_o and n_o", where the approximation is carried out by taking into account the values around m_o and n_o and aggregating each result in a weighted manner. Two weighting procedures were proposed: the first is based on OWA operators and the second uses a similarity index.


Acknowledgements

The author is grateful to Professor Didier Dubois of IRIT Toulouse for his helpful comments on this work, which helped to improve the ideas behind this paper.

References

[1] H. Bender, W. Nather, Fuzzy Data Analysis, Kluwer Academic Publishers, Dordrecht, 1992.
[2] R.M. Cooke, Experts in Uncertainty, Dept. of Mathematics, Delft University of Technology, Delft, Holland, 1990.
[3] M. Delplanque, A.M. Desoldt, D. Jolly, Fuzzy calculus for robotic data processing: problems linked with implementation, Proc. EUFIT'96, Aachen, Germany, September 1996.
[4] S. Deveughele, Etude d'une methode de combinaison adaptative d'informations incertaines, Ph.D. Thesis, University of Technology, Compiegne, 1993.
[5] D. Dubois, H. Prade, A review of fuzzy set aggregation connectives, Inform. Sci. 36 (1985) 85–121.
[6] D. Dubois, H. Prade, Fuzzy sets in approximate reasoning, Part 1: inference with possibility distributions, Fuzzy Sets and Systems (25th Anniversary Memorial Volume) 40 (1991) 143–202.
[7] D. Dubois, H. Prade, Combination of fuzzy information in the framework of possibility theory, in: M.A. Abidi, R.C. Gonzalez (Eds.), Data Fusion in Robotics and Machine Intelligence, Academic Press, New York, 1992.
[8] D. Dubois, H. Prade, Possibility theory and data fusion in poorly informed environment, Control Eng. Practice 2 (1994) 812–823.
[9] M. Oussalah, H. Maaref, C. Barret, Adaptive combination rule: influence of t-norm operators, Proc. EUFIT'96, Aachen, Germany, September 1996, pp. 100–104.
[10] M. Oussalah, H. Maaref, C. Barret, Positioning of a mobile robot with landmark-based method, Proc. IROS'97, Grenoble, France, September 1997.
[11] R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Trans. Systems Man Cybernet. 18 (1988) 183–190.
[12] R.R. Yager, Expert systems using fuzzy logic, in: R.R. Yager, L.A. Zadeh (Eds.), An Introduction to Fuzzy Logic Applications in Intelligent Systems, Kluwer Academic Publishers, Dordrecht, 1992.
[13] R.R. Yager, Aggregation operators and fuzzy systems modeling, Fuzzy Sets and Systems 67 (1994) 129–146.
[14] L.A. Zadeh, Fuzzy sets, Inform. and Control 8 (1965) 338–353.
[15] L.A. Zadeh, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and Systems 1 (1978) 3–28.