Fuzzy Sets and Systems 19 (1986) 121-132 North-Holland


PATHS OF LEAST RESISTANCE IN POSSIBILISTIC PRODUCTION SYSTEMS

Ronald R. YAGER
Machine Intelligence Institute, Iona College, New Rochelle, NY 10801, USA

Received February 1984
Revised February 1985

We introduce the idea of possibilistic production systems. We provide a general algorithm for finding a path from an initial node to a goal node. We show that the use of a heuristic function can help in the efficient implementation of the search process. We see that a significant role in determining potential optimal paths is played by concentrating on the most difficult activity to accomplish.

Keywords: Search, Artificial intelligence, Possibility theory, Optimal paths.

1. Introduction

In the construction of artificial intelligence based computer models a crucial role is played by the idea of a production system and the related idea of a graph search [1]. The essential problem in these types of situations is that of finding a path from some initial state to any one of a set of goal states. In the application of these models a concern of considerable interest is to minimize the cost or effort in developing the associated search tree. In [1] Nilsson discusses an algorithm, denoted A*, which uses a heuristic method to handle this problem. In this paper we consider the situation in which there exists some uncertainty associated with the applicability of the production rules, and our main concern is with the development of a path from the initial state to a goal state which has the maximum possibility of implementation. In particular, we shall develop a heuristic search method analogous to the A* algorithm to solve this problem. Of particular importance in this search method is the significant role played by the most difficult activity to be accomplished.

2. Possibilistic production systems

A possibilistic production system can be defined in terms of states and possibilistic production rules for traversing between states. There exists one special state called the initial state, denoted S0, and a non-empty set of special states



called the goal states. By a possibilistic production rule, denoted

Si --aij--> Sj,

we mean to indicate that given we are in state Si the possibility or ease with which we can go to state Sj is measured by aij, where aij ∈ [0, 1]. The concept of possibility used here is similar to the type of uncertainty introduced by Zadeh [2]. Furthermore, if P1 and P2 are two production rules such that

P1: S1 --a--> S2,    P2: S2 --b--> S3,

the possibility of going from S1 to S3 via the application of P1 followed by P2 is seen to be Min[a, b] = a ∧ b. More generally, if

Pi: Si --ai--> S(i+1),

then the possibility of going from S1 to S(n+1) via the application of P1, P2, P3, ..., Pn is seen to be

Min_{i=1,2,...,n} [ai].

Possibilistic production systems can be seen to arise in many situations, especially those related to problem solving, planning and robotics. The problem of concern to us is that of starting in the initial state and finding a path, via application of the production rules, which leads to a goal state such that the overall possibility measure on this path is maximal. We shall call such a path a path of least resistance. A crucial computational aspect in the determination of such a path is the combinatorial explosion. We shall try to provide some heuristic methods which reduce this difficulty.
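To make the definitions above concrete, the following small Python sketch (the state names and possibility values are invented purely for illustration) represents the production rules as possibility-weighted links and evaluates the possibility of a path as the minimum over its legs.

    # A minimal sketch of a possibilistic production system; the state names and
    # the possibility values below are hypothetical, chosen only for illustration.
    rules = {
        "S0": {"S1": 0.9, "S2": 0.4},
        "S1": {"S3": 0.7},
        "S2": {"S3": 1.0},
        "S3": {"G": 0.6},        # "G" plays the role of a goal state
    }

    def path_possibility(path, rules):
        # Poss(P) is the minimum of the possibilities of the individual legs of P.
        return min(rules[s][t] for s, t in zip(path, path[1:]))

    print(path_possibility(["S0", "S1", "S3", "G"], rules))   # min(0.9, 0.7, 0.6) = 0.6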

Search algorithm

The following algorithm, which we shall call search, will play a crucial role in our determination of a path of least resistance. The basic elements in this algorithm are:
1. A set called open consisting of the nodes which have not been expanded or searched.
2. A set called closed consisting of all the nodes that have been expanded.
3. An evaluation function f from the set open to the unit interval.
4. An ordered list L of the elements in open based upon the evaluation function f. L is in descending order of f.

The following algorithm is based upon those suggested in [1, 3].
1. Put the start node S0 in the set open, assign it an f value of one and form the ordered list L.
2. Create the set called closed, which is initially empty.



3. If open is empty, exit with failure; else go to step 4.
4. Select the first node on the list L; denote this node n. Remove n from the set open and put it into the set closed.
5. If n is a goal node, exit; if not, go to step 6.
6. Using the production rules of the system, expand node n by creating a set E' of all the states that have non-zero direct links from n.
7. Form the set En = E' − closed, consisting of the states in E' that have not been expanded.
8. Form a new set open consisting of the union of the old set open and the set En: new open = (old open) ∪ En.
9. Calculate an f value for each element in new open according to some scheme.
10. Reorder the elements in open according to these f values, forming a new list L.
11. Go to step 3.
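The sketch below is one possible Python reading of the algorithm just described, under the simplifying assumption that the production rules are given explicitly as the rules dictionary of the earlier sketch; the evaluation scheme of step 9 is taken to be f(n) = Min(g(n), h(n)) as developed in the next section, with the tie-breaking rule mentioned there.

    def search(rules, start, goals, h=lambda n: 1.0):
        """One reading of the algorithm above: find a path of maximal min-possibility.

        rules: dict mapping each state to a dict {successor: possibility}.
        h:     a guess of the possibility from a node to a goal; h(n) = 1 for
               all n is always an admissible choice (see Section 3).
        Returns (path, possibility) on success, or None on failure.
        """
        g = {start: 1.0}              # best established-path value found so far
        parent = {start: None}
        open_set, closed = {start}, set()
        while open_set:
            # Step 4: take the open node with the largest f = min(g, h);
            # ties are broken by the larger max(g, h), as suggested in Section 3.
            n = max(open_set, key=lambda m: (min(g[m], h(m)), max(g[m], h(m))))
            open_set.remove(n)
            closed.add(n)
            if n in goals:            # Step 5: a goal node has been reached
                path, m = [], n
                while m is not None:
                    path.append(m)
                    m = parent[m]
                return list(reversed(path)), g[n]
            for succ, a in rules.get(n, {}).items():   # Steps 6-8: expand n
                if succ in closed:
                    continue
                value = min(g[n], a)                   # established-path value through n
                if value > g.get(succ, 0.0):           # Step 9: keep the best g for succ
                    g[succ] = value
                    parent[succ] = n
                open_set.add(succ)
        return None                                    # Step 3: open exhausted, failure

With the illustrative rules of the earlier sketch, search(rules, "S0", {"G"}) returns (["S0", "S1", "S3", "G"], 0.6), the path of least resistance in that small example.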

3. A heuristic evaluation function

We recall that our problem is to use this algorithm to find a path of least resistance (highest possibility) from the initial state to any goal state. We shall call the function f which is used to determine which node to expand next our evaluation function. A primary property for any function f to be used in this algorithm is that it leads to a path of least resistance from the initial state to a goal state. Following the terminology of Nilsson [1] we shall call any function which satisfies this property an admissible evaluation function. In the following we shall indicate that there exists at least one admissible evaluation function for solving the problem of the path of least resistance using the search algorithm. Before suggesting this function we provide some definitions.

Definition. Let n be any node in open. An established path from the start node S0 to n is any non-repeating sequence of nodes S0, S1, S2, ..., Sn, where Sn = n, with the set {S0, S1, S2, ..., S(n−1)} contained in closed, such that for each pair Si and S(i+1) in this sequence there exists a production rule Si → S(i+1) with non-zero possibility a(Si, S(i+1)).

We note that for any node in open there exists at least one established path from the initial node.

Definition. For any established path P = S0, S1, ..., S(n−1), Sn from S0 to Sn we denote

Poss[P] = a(S0,S1) ∧ a(S1,S2) ∧ ... ∧ a(S(n−1),Sn) = Min_{i=1,2,...,n} [a(S(i−1),Si)]

as the possibility value for path P.

Definition. Assume P(n) is the set of all established paths from S0 to n. Then

f̂(n) = Max_{P ∈ P(n)} [Poss(P)].



Thus we see that f̂(n) is the value of the established path of maximum possibility (the established path of least resistance) from S0 to n. We note that f̂(n) is a monotonically non-decreasing function of the set closed: let C1 and C2 be two manifestations of the set closed, and let f̂1(n) and f̂2(n) be established based upon C1 and C2 respectively; if C1 ⊆ C2 then f̂1(n) ≤ f̂2(n). We shall now state, and prove later, that there exists at least one evaluation function whose use in the above algorithm leads to a path of least resistance from S0 to a goal node.

Theorem 1. Assume there exists a path from the initial state S0 to some goal state. Then the use of the evaluation function f such that f(n) = f̂(n) will always terminate with the path of least resistance.

This theorem says that if we use as our evaluation function in the algorithm search the value of the best established path from the initial node to the node in open, we shall terminate with a path of least resistance. In essence this approach says: expand the node in open which has the maximal established path from the initial node. While we have indicated that there exists at least one admissible evaluation function, in many instances we may be interested in finding a better admissible evaluation function in the sense of reducing the number of nodes expanded in step 6 and hence reducing the problem of computational explosion. In the following we provide for a whole family of admissible functions [1].

In motivating the derivation of this new family of evaluation functions we make certain observations about our previously presented approach. In that approach, in order to determine which node to expand next we essentially decided to use the open node which has the highest established possibility value (path of least resistance) from the start node. In particular we took no account of what is going to happen in the remainder of the search, from the open nodes to the goal node; we just looked at the possibility of getting to a node in open. It appears obvious that a great improvement in the search procedure would be obtained if, in addition to using this information, we used, if available, some information about the resistance of the remainder of the path from the open nodes to a goal node. This idea provides the basis of our approach, which is analogous to that used by Nilsson [1].

Let n be any arbitrary open node. Let g*(n) be the actual optimal value of the path of least resistance (PLR) from the initial node to the node n. Let h*(n) be the actual optimal value of the possibility on the PLR from the node n to any goal node. In this situation,

f*(n) = Min(g*(n), h*(n))

is a measure of the actual value of the PLR for any path constrained to go through node n.
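For instance, with illustrative numbers of our own: if the best path from S0 to n has possibility g*(n) = 0.8 and the best path from n to a goal has possibility h*(n) = 0.5, then any path constrained to pass through n has possibility at most f*(n) = Min(0.8, 0.5) = 0.5; the value of the constrained path is set by its weaker portion.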



Note. The value of the path of least resistance from the initial node S0 to a goal node is f*(S0).

Note. If n' is any node on a path of least resistance from S0 to a goal node, then f*(S0) = f*(n').

Note. If n' is any node not on a path of least resistance, then f*(S0) ≥ f*(n').

Ideally, if we had f* we could use this function as our evaluation function and get a direct path from S0 to a goal node without opening any unnecessary nodes. However, in the expansion of the search algorithm we do not have f*. The following theorem shows that an evaluation function f which is an approximation to f* always provides an admissible evaluation function.

Theorem 2. The evaluation function f, where

f(n) = Min(g(n), h(n)),

with g(n) = f̂(n), the value of the best established path to n, and h(n) a guess of the value of the PLR from n to a goal node such that h(n) ≥ h*(n), always leads to an admissible evaluation function.

Proof. First we show that there always exists in open a node n' that is on the optimal path from S0 to a goal node with f(n') ≥ f*(S0). Let P be an optimal path, where P = n0, n1, n2, ..., nk. In this case n0 = S0 and nk is a goal node. At any time before our algorithm terminates let n' be the first node in this sequence that is in open; f(n') = Min(g(n'), h(n')). Since P is an optimal path to nk, it must be an optimal path to any ni on P. Since all the ancestors of n' are in closed it must follow that g(n') = g*(n') and

f(n') = Min(g*(n'), h(n')).

Furthermore, since we have assumed h(n') ≥ h*(n'), it follows that f(n') = g*(n') ∧ h(n') ≥ g*(n') ∧ h*(n') = f*(n'), thus

f(n') ≥ f*(n').

However, as we have noted, for any n' on the optimal path f*(n') = f*(S0), thus

f(n') ≥ f*(S0).

Now we show that this algorithm using f only terminates by finding a PLR from S0 to a goal node. Suppose we terminate at some goal node t without it being the optimal; that is, f(t) = g(t) < f*(S0). However, from the lemma below, since t was selected for expansion we must have f(t) ≥ f*(S0); hence f(t) cannot be less than f*(S0), a contradiction.

Lemma. For any node n selected for expansion, f(n) ≥ f*(S0).

Proof. Let n be any node selected for expansion. If n is a goal node then f(n) = f*(S0). Suppose n is not a goal node; then, as shown above, there exists a node n' in open on the optimal path such that f(n') ≥ f*(S0). However, since we selected n, it must be the case that f(n) ≥ f(n') ≥ f*(S0).

We make one observation on the application of this algorithm. If at any point in the process there exist two or more nodes in open tied for the highest value of f = Min(g(n), h(n)), we expand the one of these nodes which has the highest Max(g(n), h(n)).

In the application of the above evaluation function, f(n) = g(n) ∧ h(n), we note that since g(n) is the value of the best established path from S0 to the open node n at the time of evaluation, g(n) is always available and hence its determination causes no problem. The determination of h(n) for each node in open generally requires some knowledge about what is going to happen in the future development of the tree. Knowledge about h(n) is called heuristic information and hence h(n) is called the heuristic function. In essence, h(n) is a guess of the value of the path of least resistance from n to a goal node and requires some expert knowledge of the situation. However, since h*(n) is the value of some path between two points it must always have a value less than or equal to one. Since we require h(n) ≥ h*(n), the choice of h(n) = 1 for all n always leads to an admissible evaluation function. We note that when h(n) = 1 we get

f(n) = g(n) ∧ 1 = g(n) = f̂(n),

which is the evaluation function that we used in our first theorem. Thus the proof of Theorem 2 implies the proof of Theorem 1 as a special case.

As we indicated earlier, our interest in finding this more general evaluation function g(n) ∧ h(n) is to help provide a more effective search procedure for finding the path of least resistance. By a more effective search procedure we mean one in which we would have to expand fewer nodes to find the optimal path from S0 to a goal node. It appears natural that the knowledge about h*(n) manifested in h(n) will determine the search effort required to find our desired path. In particular, it would appear that the closer h(n) is to h*(n) the better and more informed the search.

Definition. h(n) is called an admissible heuristic function if h(n) ≥ h*(n) for every open node in the search.
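As a hedged illustration of an admissible heuristic for the earlier sketch (the guessed values below are hypothetical), h(n) may be supplied as a table of values not smaller than h*(n); the trivial choice h(n) = 1 corresponds to the default argument of the search sketch above.

    # Hypothetical admissible guesses h(n) >= h*(n) of the possibility of reaching "G";
    # e.g. the actual optimal value h*("S2") = min(1.0, 0.6) = 0.6, and the guess is 0.6.
    h_guess = {"S0": 0.7, "S1": 0.7, "S2": 0.6, "S3": 0.6, "G": 1.0}

    result = search(rules, "S0", {"G"}, h=lambda n: h_guess.get(n, 1.0))
    # Returns the same path of least resistance, (["S0", "S1", "S3", "G"], 0.6).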



Definition. Assume h1(n) and h2(n) are two admissible heuristic functions. h1(n) is said to be a more informed heuristic about h*(n) than h2(n) if for each non-goal node n, h1(n) is closer to h*(n) than h2(n). This definition is analogous to the one provided by Nilsson [1].

Lemma. If h1(n) is a more informed admissible heuristic about h*(n) than the admissible heuristic h2(n), it follows that h1(n) < h2(n) for each non-goal node n.

Proof. Since h1(n) ≥ h*(n) and h2(n) ≥ h*(n) and |h1(n) − h*(n)| < |h2(n) − h*(n)|, it follows from h1(n) − h*(n) < h2(n) − h*(n) that h1(n) < h2(n).

Definition. We say a heuristic function h is well ordered if for all pairs of nodes n1 and n2 such that h*(n1) > h*(n2) it follows that h(n1) > h(n2), and if h*(n1) = h*(n2) then h(n1) = h(n2).

Theorem 3. Assume h1 and h2 are two admissible heuristics for a given problem. Assume h2 is a more informed heuristic than h1. Furthermore, assume h2 is a well ordered heuristic. Then the search procedure guided by h2 will not expand any node that is not on an optimal path that is not expanded by h1.

Proof. Our proof will be based upon showing that if we open such a node we get a contradiction. Let P = a0, a1, a2, ..., a(k−1), ak be the sequence of all nodes expanded by the search procedure guided by h1 in getting to the goal node ak, with a0 the initial node S0. Let n be the first node not expanded by h1 which h2 attempts to expand. Let P' = b0, b1, b2, ..., bp be the sequence of nodes expanded by h2 prior to expanding n. We note that all the nodes in P' appear in the sequence P, b0 is the initial node S0, none of the nodes in P' are goal nodes, and P' is not empty.

Let us consider the search procedure under h1 just after a(k−1) has been expanded. We first note that n must be in the open set under h1 at this point, since all nodes expanded under h2 up to bp have been expanded under h1 and n is in open under h2. We shall let g1(n) indicate the best established path to n under h1 at the point after a(k−1) is expanded. We shall let g2(n) indicate the best established path to n under h2 at the point after bp is expanded. Since up to these points h1 has expanded at least every node expanded by h2, we have g1(n) ≥ g2(n). From a previous lemma, the fact that we are now expanding n under h2 requires that

f2(n) = g2(n) ∧ h2(n) ≥ f*(S0).

Since g1(n) ≥ g2(n) and h1 > h2, it follows that

f1(n) = g1(n) ∧ h1(n) ≥ g2(n) ∧ h2(n) ≥ f*(S0).



Since under h1 we did not expand n after expanding a(k−1), this requires g1(n) ∧ h1(n) ≤ f*(S0). From this it follows that

g1(n) ∧ h1(n) = f*(S0)   and   g2(n) ∧ h2(n) = f*(S0).

(1) Assume g2(n) > h2(n). This requires that g2(n) ∧ h2(n) = h2(n) = f*(S0). Since g1(n) ≥ g2(n) we have g1(n) > f*(S0). Furthermore, since h1(n) > h2(n) (h2 is more informed), h1(n) > f*(S0). These two conditions require that h1(n) ∧ g1(n) > f*(S0), which contradicts the fact that h1(n) ∧ g1(n) = f*(S0).

(2) Assume g2(n) ≤ h2(n). Then g2(n) ∧ h2(n) = g2(n) = f*(S0). Furthermore, since n is not on the optimal path, g*(n) ∧ h*(n) < f*(S0). As shown earlier, there exists a node n' in open under h2 that is on the optimal path with f2(n') ≥ f*(S0). However, if f2(n') > f*(S0) then we would not expand n; thus f2(n') = f*(S0) and hence g2(n') ∧ h2(n') = f*(S0). Furthermore, since n' is on the optimal path we have g*(n') ∧ h*(n') = f*(S0). From these observations we have the following conditions:

g*(n) ∧ h*(n) < f*(S0),
g*(n') ∧ h*(n') = f*(S0),
g2(n') ∧ h2(n') = f*(S0),
g2(n) ∧ h2(n) = g2(n) = f*(S0).

Since g*(n) ≥ g2(n), we have h*(n) < f*(S0). Furthermore, since g*(n') ∧ h*(n') = f*(S0), it follows that h*(n') ≥ f*(S0) and hence h*(n') > h*(n).

(i) Assume g2(n') ≥ h2(n'). Then h2(n') = f*(S0); but as h2(n) ≥ g2(n) = f*(S0) we get h2(n) ≥ h2(n'), and since h*(n') > h*(n) this contradicts our assumed well order property on h2.

(ii) Assume h2(n') ≥ g2(n'). Then g2(n') = f*(S0). Either h2(n) ≥ h2(n'), which contradicts our well ordered requirement specified by h*(n') > h*(n), or h2(n') > h2(n). However, in this second case, since h2(n) ∧ g2(n) = h2(n') ∧ g2(n') = f*(S0) and g2(n) = g2(n') = f*(S0), we must use our tie-breaking procedure; thus, since h2(n') > h2(n), we would select n' to expand, and hence this contradicts our assumption.

As a result of the previous theorem we see that for the efficient implementation of our search algorithm we require a good guess as to the value of the heuristic h(n). We recall that h(n) is a guess of the possibility of going from node n to some goal node. Of considerable practical importance in the determination of a good guess for h(n) is the following observation. Assume P = n1, n2, ..., nk is any path from n1 to nk; then the possibility value of this path is

Poss(P) = Min_{i=1,2,...,k−1} [a(i, i+1)]

where a(i, i+1) is the possibility of going from ni to n(i+1). As a result we note that Poss(P) is simply equal to the value of the possibility associated with the most difficult leg on this path. Since h*(n) is the possibility value for the best path from n to a goal node, we see that

h*(n) = Max over all paths from n to a goal of (the value of the most difficult leg on that path).

More informally, the above observation can be used to get some insight into how to obtain a good estimate of h(n), as well as some insight into how experts search efficiently for solutions. Consider that we are faced with the performance of some task, in our terminology the process of going from a node to a goal node. In reality, the solution of this task may require the solution of some set of subtasks (A1, A2, ..., Ap). In many instances one of these subtasks is the crucial one, in the sense that it is the most difficult to accomplish. Then, in essence, the value of h(n) is our guess of how difficult it is to accomplish this most difficult subtask. Thus we can associate with each node in a search tree the most difficult thing we have to do from there to reach a goal, and then take as h(n) an estimate of how difficult it is to do this crucial task. We can see from this how an expert may indeed solve a problem: the expert would always be looking at the most difficult thing he has to do, and would eliminate potential solution paths which involve low possibilities of success for this most difficult task.
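A small sketch of this idea, with invented subtask names and possibility values: for each node we record the subtasks an expert believes must still be carried out on any route to a goal, and take as h(n) the possibility of the hardest of them.

    # Hypothetical possibility of accomplishing each remaining subtask (values invented).
    subtask_possibility = {"lift": 0.9, "align": 0.8, "weld": 0.3}

    # For each node, the subtasks believed unavoidable on every remaining route to a goal.
    # Listing only unavoidable subtasks keeps the guess from under-estimating h*(n).
    remaining_subtasks = {
        "S0": ["lift", "align", "weld"],
        "S1": ["align", "weld"],
        "S3": ["weld"],
    }

    def h(n):
        # The guess h(n) is the possibility of the most difficult remaining subtask.
        tasks = remaining_subtasks.get(n)
        if not tasks:
            return 1.0            # nothing known to be hard: the guess stays admissible
        return min(subtask_possibility[t] for t in tasks)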

4. On the measurement of our heuristic

In trying to implement a search procedure of the type described previously, one is faced with the problem of determining the value of the heuristic for each node. Rather than requiring that one supply a precise estimate for the heuristic function, we can allow for less precise estimates by approximating the values for h(n) in terms of fuzzy subsets [4, 5]. We note that in some cases the values for the aij's can also be supplied in terms of fuzzy subsets. Assume h(n) is estimated by the fuzzy number A and g(n) is the fuzzy or precise number B. We recall that a precise number is just a special case of a fuzzy number. Thus in all cases A and B would be fuzzy subsets of the unit interval. Since

f(n) = g(n) ∧ h(n)   (∧ = min),

f(n) is also a fuzzy number, call it C, such that C = A ∧ B, where C is defined via fuzzy arithmetic [5] to be

C(z) = Max over all x ∧ y = z of [A(x) ∧ B(y)]   for all z ∈ [0, 1].
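The following sketch shows this computation on discretized fuzzy numbers; the membership grids are invented, and a dictionary from points of the unit interval to membership grades stands in for a fuzzy subset.

    def fuzzy_min(A, B):
        # Extension principle: C(z) = max over x, y with min(x, y) = z of min(A(x), B(y)).
        C = {}
        for x, ax in A.items():
            for y, by in B.items():
                z = min(x, y)
                C[z] = max(C.get(z, 0.0), min(ax, by))
        return C

    # Invented discretized fuzzy numbers: h(n) roughly "about 0.7", g(n) roughly "about 0.5".
    A = {0.6: 0.5, 0.7: 1.0, 0.8: 0.5}
    B = {0.4: 0.5, 0.5: 1.0, 0.6: 0.5}

    C = fuzzy_min(A, B)    # {0.4: 0.5, 0.5: 1.0, 0.6: 0.5}, a fuzzy number around 0.5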

Having obtained f(n) for each node in open as a fuzzy number in the unit interval, we are then faced with the problem of selecting one of these nodes as the best to open. This step then requires the comparison of fuzzy numbers to find the biggest. Yager [6] has suggested one approach for solving this problem; we shall briefly describe this method. Assume C is a normal fuzzy number and let Cα be the α-level set of C, that is

Cα = {x | C(x) ≥ α}.

Cα is simply the ordinary set of numbers whose membership grade is at least α. Let M(Cα) be the mean value of the numbers in Cα. Then we define

M = ∫₀¹ M(Cα) dα.

Yager [6] has shown that this M has some good properties for comparing fuzzy numbers. Thus, if we have a collection of fuzzy numbers C1, C2, ..., corresponding to the f(n)'s for all the nodes in open, we can calculate M for each of these numbers and then select to expand the node in open with the largest M value. We should note that a number of other methods have been suggested for comparing fuzzy numbers; see [7] for a comparison of these methods.
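One possible discretized reading of this ranking index (the α-grid and the sample fuzzy numbers are assumptions for illustration): approximate M by averaging the means of the α-level sets over a grid of α values, and expand the open node whose f(n) has the largest M.

    def m_index(C, steps=10):
        # Approximate M = integral over alpha of the mean of the alpha-level set,
        # for a normal fuzzy number C given as {value: membership grade}.
        total = 0.0
        for k in range(1, steps + 1):
            alpha = k / steps
            level = [x for x, grade in C.items() if grade >= alpha]   # C_alpha
            total += sum(level) / len(level)
        return total / steps

    # Invented fuzzy f(n) values for two open nodes; expand the node with the largest M.
    f_values = {"S1": {0.6: 0.5, 0.7: 1.0}, "S2": {0.3: 1.0, 0.5: 0.4}}
    best = max(f_values, key=lambda node: m_index(f_values[node]))    # -> "S1"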

A method involving even less informational requirements is available to solve the heuristic estimation problem. Since the only calculation involved in the algorithm is f(n) = g(n) ∧ h(n), we can simply use an ordinal scale to measure the h(n)'s. That is, we could simply express values for the h(n)'s in terms such as high, low, etc. While this approach would require less information with respect to the h(n)'s, we would also have to provide the information for the g(n)'s (i.e. the aij's) on this same scale. This approach would have the drawback of sacrificing the more precise information available in the g(n)'s for the computational ease of working with the ordinal scale. A compromise method may be used: in this approach we initially use the ordinal scale to quickly eliminate bad alternatives and then use the other method to select between the remaining alternatives.

5. Generalized paths of least resistance

In the previous part we indicated that if we have a path consisting of branches with possibility values a(1,2), a(2,3), a(3,4), ..., a(n−1,n), the value of the possibility of the path from 1 to n is

Min_{i=1,2,...,n−1} [a(i, i+1)].

The selection of the min to implement the implied 'anding' is just one choice from among a whole family of possible implementations. The most general choice would be a t-norm [8].



A t-norm T is a mapping T: [0, 1] × [0, 1] → [0, 1] such that
(1) T(a, 1) = a,
(2) T(a, b) = T(b, a),
(3) T(a, b) ≥ T(c, d) if a ≥ c and b ≥ d,
(4) T(a, T(b, c)) = T(T(a, b), c).

Thus in this more general situation the possibility value associated with a path from 1 to n is denoted

Poss_T(P) = T_{i=1,2,...,n−1} [a(i, i+1)].

It can easily be seen that Min is a special case of the t-norm operators. Another special case is the product operation; here

T(a, b) = a · b.

Dubois [9] provides a very comprehensive study of the properties of t-norm operators. It can be easily shown that if T is any arbitrary t-norm then

T(a, b) ≤ Min(a, b)   for all a, b ∈ [0, 1].
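A brief sketch of this generalization, reusing the path representation of the earlier sketches: the aggregation along a path is parameterized by a t-norm T, with Min and the product as two special cases.

    from functools import reduce

    def poss_T(path, rules, T=min):
        # Possibility of a path under t-norm aggregation: T(a(1,2), T(a(2,3), ...)).
        legs = [rules[s][t] for s, t in zip(path, path[1:])]
        return reduce(T, legs)

    def product(a, b):
        return a * b          # another t-norm; note a * b <= min(a, b) on [0, 1]

    # With the illustrative rules of the earlier sketch:
    # poss_T(["S0", "S1", "S3", "G"], rules)            -> 0.6  (min, as in the paper)
    # poss_T(["S0", "S1", "S3", "G"], rules, T=product) -> 0.9 * 0.7 * 0.6, about 0.378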

Definition. A t-norm T is called a strict t-norm if a > c and b > d implies T(a, b) > T(c, d).

In the following we shall generalize the theorems to the situation where we use an arbitrary t-norm instead of the min operator to aggregate the possibilities along a path.

Theorem 4. For any arbitrary t-norm T, the evaluation function fT, where

fT(n) = T[gT(n), hT(n)],

with gT(n) = f̂T(n), the value of the best established path to n under aggregation T, and hT(n) the guess of the value of the PLR from n to a goal node under aggregation T, such that hT(n) ≥ hT*(n), always leads to an admissible evaluation function.

Proof. Let P be any optimal path from the initial node S0 to a goal node n'. At any time before our algorithm terminates, let n be the first node in the sequence P that is in open; fT(n) = T[gT(n), hT(n)]. As in the previous version of the theorem, gT(n) = gT*(n) and hence fT(n) = T[gT*(n), hT(n)]. Since for any t-norm T, T(a, b) ≥ T(a, c) if b ≥ c, it follows from the assumption hT(n) ≥ hT*(n) that fT(n) = T(gT*(n), hT(n)) ≥ T(gT*(n), hT*(n)) = fT*(n), and hence

fT(n) ≥ fT*(S0),

since for any node n on the optimal path fT*(S0) = fT*(n). Suppose we terminate at some goal node t without it being optimal, that is, fT(t) = gT(t) < fT*(S0); as in the min case, this contradicts the fact that any node selected for expansion satisfies fT(n) ≥ fT*(S0).

An analogue of Theorem 3 also holds: for two admissible heuristics h1 and h2 for a given problem in which h2 is a well ordered heuristic which is more informed than h1, the search procedure guided by h2 will not expand any node that is not on an optimal path that is not expanded by h1. The proof of this theorem is very similar to the one with the min operator.

6. Conclusion

We have investigated the problem of finding the path of least resistance in possibilistic production systems. We have shown that a heuristic function based upon the difficulty of going from any point in the tree to a goal node plays a significant role in the reduction of the computational explosion.

References

[1] N. Nilsson, Principles of Artificial Intelligence (Tioga Press, Palo Alto, CA, 1980).
[2] L.A. Zadeh, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and Systems 1 (1978) 3-28.
[3] G. Williams, Tree searching, Part 1: Basic techniques, Byte Magazine 6 (9) (1981) 72.
[4] L.A. Zadeh, Fuzzy sets, Inform. and Control 8 (1965) 338-353.
[5] D. Dubois and H. Prade, Fuzzy Sets and Systems: Theory and Applications (Academic Press, New York, 1980).
[6] R.R. Yager, A procedure for ordering fuzzy subsets of the unit interval, Inform. Sci. 34 (1981) 143-161.
[7] R.T. Degani and G. Bortolan, Ranking of fuzzy alternatives in electrocardiography, Proc. IFAC Symp. on Fuzzy Information, Marseille (1983) 397-402.
[8] E.P. Klement, Some remarks on t-norms, fuzzy sigma-algebras and fuzzy measures, Proc. of the Second Int. Seminar on Fuzzy Set Theory, Linz (1981) 125-142.
[9] D. Dubois, Triangular norms for fuzzy sets, Proc. of the Second Int. Seminar on Fuzzy Set Theory, Linz (1981) 39-68.