Generalized And/Or Graphs

Giorgio Levi and Franco Sirovich
Istituto di Elaborazione della Informazione, Pisa, Italy

Recommended by Nils Nilsson

ABSTRACT
A generalization of AND/OR graphs is introduced as a problem solving model, in which subproblem interdependence in problem reduction can be explicitly accounted for. An ordered-search algorithm is given to find a solution. The algorithm is proven to be admissible and optimal. Examples are given which show the application of the formalism to problems which cannot be modelled by AND/OR graphs. Generalized AND/OR graphs are finally shown to be equivalent to type 0 grammars. Finding a solution of a generalized AND/OR graph is shown to be equivalent to deriving a sentence in the corresponding type 0 grammar.

1. Introduction

AND/OR graphs are widely used as a model of problem solving through problem reduction [11]. The assumption underlying both the formulation and the existing search algorithms is that subproblems can be solved independently. The independence assumption yields a nice and clean, yet not sufficiently general, model. Examples are known of problems for which the AND/OR graph model is inadequate because the independence assumption is not valid.

One possible source of problem interdependence is the presence of variables in problem descriptions. If two problems share a variable, then the way of solving one problem may affect the solution to the second one. Actually, any problem is solved with respect to a given environment, which is defined by the variable bindings. One particular solution to a problem can bind variables, thus modifying the environment of other problems. Problems in which interdependence is crucial are also those whose solution involves the expenditure of bounded available resources. The amount of currently available resources on one hand defines the environment in which the solution of a problem is attempted, and on the other hand can be modified by such a solution. Finally, in robot planning tasks the current environment consists of the current states of the world description components. A solution to a state transformation subproblem depends on the current environment and obviously modifies it.

Difficulties arising from subproblem interdependence, and the resulting AND/OR graph inadequacy, can occasionally be circumvented by means of problem-dependent tricks which lack any systematic basis. Subproblem interdependence can instead

Artificial Intelligence 7 (1976), 243-259

Copyright © 1976 by North-Holland Publishing Company


be modelled by the formalism that will be introduced in the next section. Examples of applications to the above described class of problems will be shown in Section 3.

2. Generalized AND/OR Graphs

A Generalized AND/OR Graph (GAG) is defined as γ = (O, T, A, E, S), where O is a set of nonterminal OR nodes, T is a set of terminal OR nodes, A is a set of AND nodes, E is a subset of (O ∪ T) × A ∪ A × (O ∪ T) whose elements are directed arcs, and S belongs to O and is called the start node.

FIG. 1. A generalized AND/OR graph.

The graph in Fig. 1 is a GAG. Nodes labelled by capital letters are nonterminal OR nodes (problems), while nodes labelled by lower case letters are terminal OR nodes (solved problems). Finally, nodes labelled by digits are AND nodes, and denote problem reduction operators. (Note that our convention for AND/OR node denotation is different from the one given in [11].) A reduction operator (AND node) a_i applies to a set of problems (OR nodes), called input problems, and reduces them simultaneously to a set of problems called output problems. Note that in standard AND/OR graphs, reduction operators are only allowed to apply to a single input problem. We will show in the next section that problem interdependence can be handled exactly because of this generalization. GAG's are searched in order to find a solution graph which shows that the start node is solved. Solution graphs are defined over the (generally infinite) cycle-free GAG γ' which is obtained by unfolding the OR nodes of the original GAG γ.


The recursive definition of solved node is the following. (i) Terminal nodes are solved nodes. (ii) An AND node is solved iff all of its successors are solved. (iii) A nonterminal OR node is solved iff at least one of its successors is solved.
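The recursive definition can be turned into a straightforward labelling procedure. The sketch below is ours, not the paper's: the dict-based graph encoding and the function name are illustrative assumptions, and the graph is assumed acyclic (the unfolded GAG γ').

```python
def solved(node, successors, terminals, and_nodes, memo=None):
    """Return True iff `node` is solved, per clauses (i)-(iii) above.

    successors: dict mapping each node to its list of successor nodes
    terminals:  set of terminal OR nodes          -- clause (i)
    and_nodes:  set of AND nodes                  -- clause (ii)
    all remaining nodes are nonterminal OR nodes  -- clause (iii)
    """
    if memo is None:
        memo = {}
    if node not in memo:
        if node in terminals:
            memo[node] = True                      # (i) terminals are solved
        elif node in and_nodes:
            memo[node] = all(solved(m, successors, terminals, and_nodes, memo)
                             for m in successors.get(node, ()))
        else:
            memo[node] = any(solved(m, successors, terminals, and_nodes, memo)
                             for m in successors.get(node, ()))
    return memo[node]
```

For instance, with successors {"S": ["1", "2"], "1": ["t"], "2": ["P"]}, terminals {"t"} and AND nodes {"1", "2"}, the OR node S is labelled solved through operator 1, even though operator 2 leads to the unsolved problem P.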

FIG. 2. Psg's generated by Ao for the GAG in Fig. 1.



A potential solution graph (psg) is any subgraph s of γ' such that
(i) The start node S is in s.
(ii) If node n is in s, then
(a) if n is a nonterminal OR node, then exactly one successor n' of n is in s, and all input problems of n' are in s;
(b) if n is an AND node, then all successors of n are in s.

A psg s in which node S is labelled solved is a solution graph. According to the definition of solved node, node S can be labelled solved only if all of s's tip nodes are terminal, where a tip node is either a terminal node or an OR node having no successors in s.

A GAG γ is implicitly defined by a start problem S, a set of reduction operators and a set of terminal problems. A GAG search algorithm generates the unfolded graph γ' by searching the space of psg's of γ for a solution. The basic operation of such an algorithm is the expansion of a psg s by applying all applicable operators. A reduction operator a_i is applicable to a psg s iff the set of a_i-input problems is contained in the set of tip nodes of s. A successor psg is obtained from s by connecting the set of a_i-input problem nodes to the set of nodes consisting of a_i-output problems, through an AND node a_i. The expansion of a psg yields the set of all its successor psg's. If the set of successors of a psg s is empty, s is called a failure.

Consider as an example the GAG shown in Fig. 1. The space of its psg's is shown in Fig. 2. For instance, the operators numbered 2 and 3 are the only reduction operators applicable to the psg numbered 2. Hence, the expansion of psg 2 yields the psg's numbered 4 and 3.

Because of problem interdependence, a search algorithm must be able to find different solutions to the same subproblem.¹ An algorithm which searches the space of nodes γ' (as in [8, 10]) must then be capable of undoing a subproblem solution, in order to restore the previous solution state, and of searching for a different subproblem solution. Undoing is a quite complex process which can be problem dependent.
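Abbreviating a psg by the set of its tip problems (a simplification of ours that ignores duplicate tips and the recorded AND-node structure), applicability and expansion can be sketched as:

```python
def applicable(op_inputs, tips):
    """An operator applies iff its input problems all appear among the tips."""
    return set(op_inputs) <= tips

def expand(tips, operators):
    """Return the tip sets of all successor psg's of the psg with tips `tips`.

    tips:      frozenset of tip problems
    operators: iterable of (inputs, outputs) pairs of problem sets
    """
    successors = []
    for ins, outs in operators:
        if applicable(ins, tips):
            # the input problems stop being tips; the outputs become tips
            successors.append(frozenset((tips - set(ins)) | set(outs)))
    return successors
```

For example, expanding frozenset({"A", "B"}) with the operators ({"A"}, {"C", "D"}) and ({"A", "B"}, {"T"}) yields the two tip sets {"B", "C", "D"} and {"T"}; an empty result marks the psg a failure.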
Nondeterminism can be more conveniently coped with in a systematic way by state saving, thus obtaining a class of algorithms which search the space of solution states. Within this class there are Chang and Slagle's algorithm [1] and the algorithm given in Section 4, which searches the space of psg's. It is worth noting that the space of psg's is analogous to the space of contexts introduced by QA4 (see [12]) and ND-LISP (see [9]) to handle nondeterminism.

3. GAG's and Problem Solving

We will consider in this section the applications of GAG's to a class of problem solving tasks that cannot be modelled by standard AND/OR graphs. As we mentioned in the Introduction, we will first consider the case of problem descriptions that contain variables. In the sequel, a reduction operator a_i is given in the

¹ This capability is not needed when only independent subproblems are generated, since a particular solution to a subproblem can never prevent finding a solution to another subproblem.


form of a production, whose left-hand side denotes the set of a_i-input problems and whose right-hand side denotes the set of a_i-output problems. The first example is the following trivial and well-known problem [5].
(1) S → Fallible(x), Greek(x),
(2) Fallible(x) → Human(x),
(3) Human(Turing) → T,

FIG. 3. The tree in (a) is not a solution, while the tree in (b) does not imply the unsolvability of the start problem.


(4) Human(Socrates) → T,
(5) Greek(Socrates) → T.

The only terminal problem is denoted by T and x denotes the universally quantified variable x. A production applies to a problem description iff its left-hand side matches (can be unified with) the problem description. Matching may result in an instantiation of the production obtained by applying the unifying substitution. Substitutions are thus associated to the AND nodes representing reduction operators. The subproblems thus generated are generally not independent. For instance, the reduction operator 1 reduces the start problem S to the subproblems Fallible(x) and Greek(x), which are not independent because they share the variable x. The solution to each subproblem requires a substitution, i.e. a binding of variable x to some term. Problem S is solved only if subproblems Fallible(x) and Greek(x) are solved with exactly the same substitution. If problems are solved independently by a standard AND/OR graph search algorithm, then the tree in Fig. 3(a) would be erroneously considered a solution.

One could modify a standard AND/OR graph search algorithm by letting it back up the substitutions associated to subproblem solutions. Even such an algorithm could fail to find a solution. Consider for instance Fig. 3(b). The solution to the problem Human(x) by production 3 and substitution θ = {(x, Turing)} implies the solution to the problem Fallible(x) with substitution θ. θ is now backed up, i.e. associated to the node labelled 1. The subproblem Greek(x) is now "Greek(x) with x = Turing", which is unsolvable (no reduction operator is applicable). Such an unsolvability would imply the unsolvability of the start problem S. Note, however, that there is one alternative way of solving Human(x). The algorithm should therefore have the already mentioned ability to "undo" a subproblem solution, in order to select alternative subgraphs even if rooted at already solved nodes.
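The matching step discussed above can be illustrated by a deliberately tiny matcher. It handles only flat descriptions such as Human(x); the tuple encoding, the function name and the variable set are our illustrative assumptions, and full unification of nested terms is needed in general.

```python
def match(pattern, term, variables=frozenset({"x"})):
    """Match a production's left-hand side against a problem description.

    pattern, term: tuples such as ("Human", "x") and ("Human", "Turing").
    Returns the unifying substitution as a dict, or None if matching fails.
    """
    if len(pattern) != len(term) or pattern[0] != term[0]:
        return None                     # different predicate: no match
    subst = {}
    for p, t in zip(pattern[1:], term[1:]):
        if p in variables:
            if subst.setdefault(p, t) != t:
                return None             # variable already bound differently
        elif p != t:
            return None                 # distinct constants do not match
    return subst
```

For instance, match(("Human", "x"), ("Human", "Turing")) yields {"x": "Turing"}, the substitution θ of Fig. 3(b), while match(("Greek", "x"), ("Human", "Turing")) fails.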
On the other hand, the above problem is trivially solved when modelled by a GAG. Fig. 4 shows the space generated by a depth-first algorithm searching the GAG that models the above problem. The psg numbered 4 is the failure corresponding to the situation shown in Fig. 3(b). However, because of the state saving mechanism inherent in GAG searching, an alternative psg (numbered 5) is available and leads to a solution.

In this example, reduction operators still apply to a single problem, and the source of subproblem interdependence (i.e. variable bindings) is handled by an implicit and ad-hoc mechanism for backing up variable bindings. This mechanism is not needed when the variable bindings are explicitly represented by the productions, in the form of suitable input and output problems. The Turing-Socrates example can then be represented as follows.
(1) S → Fallible(x), Greek(x), x = ?,
(2) Fallible(x) → Human(x),
(3) Human(x), x = ? → x = Turing,
(4) Human(x), x = ? → x = Socrates,
(5) Human(x), x = Turing → x = Turing,
(6) Human(x), x = Socrates → x = Socrates,


FIG. 4. The psg space generated by a GAG depth-first search algorithm for the Turing-Socrates problem.



(7) Greek(x), x = ? → x = Socrates,
(8) Greek(x), x = Socrates → x = Socrates.

Variable bindings are represented by problem descriptions containing the assignment symbol "=" (the constant symbol "?" denotes the undefined value), which are terminal problems and provide the environment for problem reduction. The explicit representation of the environment obviously leads to multi-input reduction operators. Fig. 5 shows the solution psg.
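Productions (1)-(8) can be run mechanically. In the sketch below (ours: problems are encoded as plain strings, and a psg is abbreviated by the frozenset of its tip problems), a breadth-first search over tip sets finds a psg whose tips are all terminal, i.e. a solution graph:

```python
from collections import deque

OPS = [  # productions (1)-(8) above, as (input problems, output problems)
    ({"S"}, {"Fallible(x)", "Greek(x)", "x=?"}),
    ({"Fallible(x)"}, {"Human(x)"}),
    ({"Human(x)", "x=?"}, {"x=Turing"}),
    ({"Human(x)", "x=?"}, {"x=Socrates"}),
    ({"Human(x)", "x=Turing"}, {"x=Turing"}),
    ({"Human(x)", "x=Socrates"}, {"x=Socrates"}),
    ({"Greek(x)", "x=?"}, {"x=Socrates"}),
    ({"Greek(x)", "x=Socrates"}, {"x=Socrates"}),
]
TERMINALS = {"x=?", "x=Turing", "x=Socrates"}  # the assignment problems

def solve(start=frozenset({"S"})):
    """Breadth-first search of the psg space; returns the terminal tip set
    of a solution graph, or None if the start problem is unsolvable."""
    seen, frontier = {start}, deque([start])
    while frontier:
        tips = frontier.popleft()
        if tips <= TERMINALS:
            return tips
        for ins, outs in OPS:
            if ins <= tips:
                nxt = frozenset((tips - ins) | outs)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return None
```

The branch that binds x to Turing dead-ends (the failure psg of Fig. 4), but state saving keeps the alternatives alive, and the search returns the tip set {"x=Socrates"}: the environment is threaded through the productions, so no undoing machinery is needed.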

FIG. 5. The solution to the Turing-Socrates problem with explicit environment representation.

We will now consider another example of problems in which the explicit representation of the environment is crucial to finding correct solutions, namely problems whose solution requires the expenditure of (bounded) resources. In this case, the environment is the set of available resources (represented by subproblems) and it is modified by those reduction operators which require resource expenditure. Let us discuss the following example.
(1) S → P1, P2, R,

(2) P1, R → T,
(3) P1 → P3,
(4) P2, R → P3,
(5) P3 → T,
(6) R → T.

The start problem consists here in solving problems P1 and P2, possibly making use of resource R. Productions 2 and 4 show that both P1 and P2 can be reduced


if R is available. The problem has a unique solution, in which only P2 makes use of (and consumes) resource R. Problems of this class cannot be correctly modelled by AND/OR graphs. Since the application of a reduction operator cannot explicitly depend on the environment, i.e. on the state of resources, it would be necessary to control the correct

FIG. 6. The solution to the robot planning task.



use of resources by an ad-hoc mechanism to ensure that each resource is used only once in a solution.

As a final example of GAG applications we will now consider the class of robot planning problems, which deal with environment states that are modified by reduction operators. Each environment state component [13], which is represented by a subproblem, describes a property of the environment and may act as a "context" for, and be modified by, the application of a reduction operator. The start problem is the description of the desired goal environment state, while the description of the initial environment is treated as a terminal problem. This representation turns out to be similar to the state space representation proposed in [13], and similarly it is not subject to the frame problem. On the other hand, our representation does not constrain an operator to have a single input problem.

FIG. 7. The plan obtained from the solution in Fig. 6.

Consider the following robot planning task.
stop: S → SCREWED(s1, a), SCREWED(s2, b), AT(h1, c), AT(h2, c),
go(h, w, x): AT(h, x) → AT(h, w),
screw(s, x): SCREWED(s, x), AT(h2, x) → IN(s, x), AT(h2, x),
convey(s, z, x): IN(s, x), AT(h1, x) → IN(s, z), AT(h1, z),
start: AT(h1, c), AT(h2, c), IN(s1, d), IN(s2, d) → T.

The initial environment state is represented by production start and consists of


two mechanical hands h1 and h2 at location c, and of two screws s1 and s2 at location d. Both hands can move around, while h1 can convey screws and h2 can screw them. The reduction operator go(h, w, x) describes the action of hand h moving from location w to location x. (Note that since the task is solved working backwards, the reduction operators are inverted descriptions of the actual actions.) The reduction operator screw(s, x) describes h2's action of screwing screw s at location x. Reduction operator convey(s, z, x) describes h1's action of conveying screw s from location z to location x. Finally, production stop describes the goal environment state in which s1 and s2 are screwed in a and b, and both hands are back at the initial position. Note that the action describing productions are similar to STRIPS operators [2].

Fig. 6 shows a solution to the above planning task. The plan is obtained by "reading back" the solution from node T. The plan is generally represented by an action partial ordering, thus allowing subplans to be executed concurrently. The plan obtained from the solution in Fig. 6 is shown in Fig. 7.

More examples of applications of Generalized AND/OR Graphs to problem solving can be found in [7], where our formalism is also contrasted with first order predicate calculus, and its application to theorem proving is discussed. In the next section we will be concerned with the problem of searching Generalized AND/OR Graphs.

4. Heuristic Search of Generalized AND/OR Graphs

Depending on the problem described by a Generalized AND/OR Graph γ, costs may be associated to arcs and to terminal nodes. Let the function c(n, m) give the cost to be associated to the arc connecting node n with one of its successors m, and let h(t) be the cost associated to the terminal node t. The cost function h(n) is computed for each nonterminal node n of a solution graph, as follows.
(i) h(n) = c(n, m) + h(m), if n is an OR node and m is its successor.
(ii) h(n) = max_i [c(n, m_i) + h(m_i)], if n is an AND node and m_1, . . . , m_k are its successors.
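Clauses (i) and (ii) amount to a bottom-up cost computation over a solution graph. The sketch below is our illustration (dict-based encoding, hypothetical node names); in a solution graph every OR node has exactly one successor, so the OR case simply adds one arc cost.

```python
def h(node, successors, arc_cost, terminal_cost, and_nodes):
    """Backed-up cost of `node` in a solution graph, per (i)-(ii).

    successors:    node -> list of successors (exactly one per OR node)
    arc_cost:      (n, m) -> c(n, m)
    terminal_cost: t -> h(t) for terminal nodes
    and_nodes:     set of AND nodes (max rule); other nodes are OR nodes
    """
    if node in terminal_cost:
        return terminal_cost[node]
    costs = [arc_cost[(node, m)] +
             h(m, successors, arc_cost, terminal_cost, and_nodes)
             for m in successors[node]]
    return max(costs) if node in and_nodes else costs[0]  # (ii) vs (i)
```

For instance, an OR node S whose chosen operator a1 reduces it to terminals t1 and t2 gets h(S) = c(S, a1) + max(c(a1, t1) + h(t1), c(a1, t2) + h(t2)).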

The cost of a solution graph s is defined as h(s) = h(S). The cost of a psg s can be estimated if a heuristic function ĥ(n) is provided for each tip node n. ĥ(n) is an estimate of the cost h(n) of node n in a solution graph, if it exists, having s as a subgraph. (If n is a terminal tip node, then ĥ(n) = h(n).) The cost estimate ĥ(n) of any nontip node can be computed as follows.
(i) ĥ(n) = c(n, m) + ĥ(m), if n is an OR node and m is its successor.
(ii) ĥ(n) = max_i [c(n, m_i) + ĥ(m_i)], if n is an AND node and the m_i's are all the successors of n if n is solved, its nonsolved successors otherwise.

The estimated cost of a psg s is defined as ĥ(s) = ĥ(S). The following algorithm is admissible, i.e. it finds a solution graph of minimal cost, if a solution graph does exist.

The ordered-search algorithm Ao
Step 1. Put the psg consisting in the start node S on a list called OPEN, and compute ĥ(S).



Step 2. If OPEN is empty, exit with failure. Otherwise continue.
Step 3. Remove from OPEN the psg s_0 whose estimated cost is minimal, and put it on a list called CLOSED. (Resolve ties arbitrarily, but always in favor of any solution graph.)
Step 4. If s_0 is a solution graph, then exit with s_0, otherwise continue.
Step 5. Expand s_0, generating the successor psg's s_1, . . . , s_k. If no successor psg is found, go to Step 2, otherwise continue.
Step 6. For each successor psg s_j, if s_j is on either OPEN or CLOSED, remove it from the set of s_0's successors, otherwise
(i) Compute the estimated cost ĥ(S) on the basis of the tip node heuristic function values.
(ii) If some s_j's tip node is terminal, apply the solved-labelling procedure to s_j.
(iii) Put s_j on OPEN.
Step 7. Go to Step 2.

Consider for example the GAG in Fig. 1. Assume arc costs are unity, and ĥ(n) = 0 if n is a terminal tip node, ĥ(n) = 2 if n is a nonterminal tip node. Fig. 2 shows all the psg's generated by Ao. The psg's are numbered in the order they were expanded. To each node n in a psg the value of ĥ(n) is associated, possibly together with the label S for solved. The psg numbered 6 is a failure. The psg numbered 9 is a solution graph and, as we will show later, it is an optimal solution graph.

We will now prove that if the estimate ĥ is a lower bound on h for tip nodes, then Ao is admissible. Our proof will follow the one given in [4]. We first establish the following lemma.

LEMMA 1. If ĥ(n) ≤ h(n) for all tip nodes of a psg, then at any time before termination and for any optimal solution graph s*, OPEN contains a psg s' such that ĥ(s') ≤ h(s*).

Proof. We will first prove that if

ĥ(n) ≤ h(n)    (1)

holds for all tip nodes of a psg s″, then (1) holds for each node n of s″. We will assume that (1) holds for all successors of node n and prove that (1) holds for n as well. In fact, if n is an OR node, then

ĥ(n) = ĥ(m) + c(n, m) ≤ h(m) + c(n, m) = h(n).

Finally, if n is an AND node, then

ĥ(n) = max_j [ĥ(m_j) + c(n, m_j)] ≤ max_i [h(m_i) + c(n, m_i)] = h(n).

(Note that {m_j} is a proper subset of {m_i} if node n is not solved in psg s″.) Therefore, (1) holds for the start node also, i.e. ĥ(S) ≤ h(S). Then

ĥ(s″) = ĥ(S) ≤ h(S) = h(s̄),    (2)

i.e. the estimated cost of any psg s″ does not exceed the cost of a solution graph s̄ having s″ as a subgraph. If s* is an optimal solution graph, a sequence s_1, . . . , s_k of psg's exists such that s_1 consists in node S only, s_k is s*, and each s_i is a successor


of s_{i-1}. Initially s_1 is on OPEN. Whenever an s_i of the sequence is moved from OPEN to CLOSED (Step 3) before termination, all of s_i's successors are put on OPEN (Step 6(iii)), with the exception of those successors which are rejected. The only rejected successors are those which are identical (estimated costs included) either to psg's already on OPEN, or to psg's already expanded. Hence, at least one psg of the sequence is on OPEN, unless Ao has terminated. Let s' be the first of such psg's. Then we obtain from inequality (2) that ĥ(s') ≤ h(s*). ∎

We can now state and prove the admissibility of Ao.

THEOREM 1. If ĥ(n) ≤ h(n) for all tip nodes, and all arc costs are greater than δ > 0, then Ao is admissible.

Proof. The theorem is proven by contradiction. Namely, we will prove that the three following cases cannot occur.

Case 1: Termination without finding a solution. This case can only occur if OPEN becomes empty. However, we know from Lemma 1 that OPEN cannot be empty before termination if an optimal solution exists.

Case 2: No termination. Let s* be an optimal solution graph of minimal cost h(s*). Since the cost of any arc is at least δ, then for any psg s″ whose maximal path length exceeds M = h(s*)/δ, we have ĥ(s″) > M·δ = (h(s*)/δ)·δ = h(s*). Clearly, no psg with maximal path length exceeding M is ever expanded for, by Lemma 1, there will be some other psg s' on OPEN, such that ĥ(s') ≤ h(s*) < ĥ(s″). Since the number of psg's whose maximal path length does not exceed M is finite, Ao must terminate.

Case 3: Termination with a non optimal solution graph. Suppose Ao terminates with a solution graph s̄, such that ĥ(s̄) = h(s̄) > h(s*). But by Lemma 1 there existed before termination a psg s' on OPEN, such that ĥ(s') ≤ h(s*) < h(s̄). Thus at this stage, s' would have been selected for expansion rather than s̄. ∎

If inequality (1) holds, algorithm Ao can also be proved to be optimal, in the sense that Ao never expands more psg's than does any other admissible algorithm A_1 such that Ao is more informed than A_1. We shall say that Ao is more informed than A_1 if the heuristic information used by Ao permits computing a lower bound on h(n) that is everywhere strictly larger, for any nonterminal tip node n, than that permitted by the heuristic information used by A_1. We will first prove the following lemmas.

LEMMA 2. If Ao is more informed than A_1, then for any node n of a psg s we have

ĥ_Ao(n) = ĥ_A1(n) if n is a solved node, ĥ_Ao(n) > ĥ_A1(n) otherwise.    (3)

Proof. (3) is true for tip nodes by hypothesis. We assume that (3) holds for all


successors of node n, and prove that it holds for node n as well. Assume that n is an OR node. Then, if its successor m is not solved, n is not solved and we have

ĥ_Ao(n) = ĥ_Ao(m) + c(n, m) > ĥ_A1(m) + c(n, m) = ĥ_A1(n).

If node m is solved, node n is also solved and we have

ĥ_Ao(n) = ĥ_Ao(m) + c(n, m) = ĥ_A1(m) + c(n, m) = ĥ_A1(n).

Assume now n is an AND node. If n is not solved, let m_1, . . . , m_k be its nonsolved successors. Then

ĥ_Ao(n) = max_i [ĥ_Ao(m_i) + c(n, m_i)] > max_i [ĥ_A1(m_i) + c(n, m_i)] = ĥ_A1(n).

Finally, if n is solved, all its successors m_1, . . . , m_k are solved. Then

ĥ_Ao(n) = max_i [ĥ_Ao(m_i) + c(n, m_i)] = max_i [ĥ_A1(m_i) + c(n, m_i)] = ĥ_A1(n). ∎

LEMMA 3. If Ao is more informed than A_1, then ĥ_Ao(s) ≥ ĥ_A1(s) for any psg s, with equality holding for solution graphs only.

Proof. The proof follows immediately from Lemma 2, since cost estimates for psg's are defined to be the cost estimates ĥ in the start node S. If S is solved, then the psg's are solution graphs and the equality holds. ∎

THEOREM 2. Let Ao and A_1 be admissible algorithms such that Ao is more informed than A_1. Then for any GAG, if psg s was expanded by Ao, it was also expanded by A_1.

Proof. By contradiction. Assume there exists a psg s which was expanded by Ao and not by A_1. A_1 must have used information that any solution graph having s as subgraph would have had a cost larger than or equal to h(s*), the true cost of an optimal solution graph, i.e. the ĥ function used by A_1 must have satisfied the inequality

ĥ_A1(s) ≥ h(s*).    (4)

On the other hand, if s was expanded, the ĥ function used by Ao must have satisfied the inequality

ĥ_Ao(s) ≤ h(s*).    (5)

In fact, by Lemma 1, just before Ao expanded s, OPEN contained a psg s' such that ĥ(s') ≤ h(s*). Since s was selected for expansion, the inequality ĥ(s) ≤ ĥ(s') ≤ h(s*) was necessarily satisfied. From (4) and (5) we derive the inequality

ĥ_Ao(s) ≤ ĥ_A1(s).    (6)

On the other hand, since psg s was expanded by Ao, it cannot be a solution graph. Therefore, from Lemma 3 we obtain ĥ_Ao(s) > ĥ_A1(s), which contradicts inequality (6). ∎

It is worth noting that algorithm Ao is still admissible and optimal even if the


backed-up ĥ value for AND nodes is computed as

ĥ(n) = Σ_i [c(n, m_i) + ĥ(m_i)],

where the m_i's are all the successors of n. The corresponding cost definition for a solution graph is a generalization of the cost definition given in [8] for AND/OR graphs.

5. The Equivalence Between GAGs and Type 0 Grammars

AND/OR graphs were shown to be equivalent to context-free grammars [3, 6]. Namely, there exists a one-to-one correspondence between AND/OR graphs and context-free grammars. Moreover, a one-to-one correspondence exists between (suitably defined) solutions of an AND/OR graph and derivation trees of the corresponding grammar. An analogous correspondence exists between finite GAG's and type 0 grammars. In fact, given any type 0 grammar G = (V_N, V_T, P, S), the corresponding GAG is γ = (V_N, V_T, A, E, S), where for each production r_i : α → β belonging to P, there is a unique corresponding element in A, and for each a ∈ V_N ∪ V_T in α, the set E contains an arc directed from a to r_i, and conversely, for each b ∈ V_N ∪ V_T in β, E contains an arc directed from r_i to b. We have then defined an injection from the set of type 0 grammars onto the set of GAG's.

Since productions map strings onto strings, the injection induces an ordering on the arcs leaving and leading to any AND node. On the other hand, given any "ordered" GAG, a corresponding type 0 grammar can be obviously defined. Therefore, only "ordered" GAGs will be considered in the sequel. It is worth noting, however, that a "non-ordered" GAG γ_1 = (O, T, A, E, S) is equivalent to an ordered GAG γ_2 = (O, T, A', E', S), where A' and E' are obtained as follows. For each a ∈ A, let M, N be the sets of parent and successor nodes of a respectively. A set of reduction operators is obtained by combining any permutation of the elements of M with any permutation of the elements of N. The set of all reduction operators thus obtained determines A' and E'. Note that arc ordering affects the definition of reduction operators. Obviously, arc ordering induces an ordering on the tip nodes of any psg.
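Viewing an ordered GAG through its grammar, searching for a solution graph is searching for a derivation of a terminal string. The following sketch (ours) enumerates, by breadth-first substring rewriting, the terminal strings derivable in the example grammar given at the end of this section for the GAG of Fig. 1; capital letters are nonterminals, lower case letters terminals.

```python
from collections import deque

PRODUCTIONS = [  # the example grammar for the GAG of Fig. 1
    ("S", "ABC"), ("AB", "DB"), ("BC", "ab"), ("aC", "ac"), ("Da", "da"),
    ("DB", "ea"), ("DB", "fg"), ("Df", "f"), ("Aa", "Da"), ("Af", "Df"),
]

def terminal_strings(start="S", max_len=3):
    """All fully lowercase (terminal) strings derivable from `start`,
    exploring only sentential forms up to length `max_len`."""
    seen, frontier, found = {start}, deque([start]), set()
    while frontier:
        s = frontier.popleft()
        if s.islower():
            found.add(s)            # a string history has ended here
            continue
        for lhs, rhs in PRODUCTIONS:
            i = s.find(lhs)
            while i != -1:          # rewrite every occurrence of lhs
                t = s[:i] + rhs + s[i + len(lhs):]
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    frontier.append(t)
                i = s.find(lhs, i + 1)
    return found
```

Since no production of this grammar lengthens a string beyond 3 symbols, the bound makes the search exhaustive; the result contains "eac" (derived by S ⇒ ABC ⇒ DBC ⇒ eaC ⇒ eac), the string of the optimal solution graph discussed below.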
Thus, the set of psg tip nodes can be mapped onto a string. A reduction operator applies to a psg only if the string of its parent nodes is a substring of the tip node string. Because of the definition of the correspondence, if we have a psg whose tip node string s_i is equal to a string σ_j over V_N ∪ V_T, and we apply to s_i a sequence of reduction operators, and to σ_j the corresponding sequence of productions, we eventually obtain a psg s and a string σ, such that the string of tip nodes of s is exactly σ. Let us define a string history (psg history) as a sequence of strings over V_N ∪ V_T (of psg's) such that the first element is S, each element is obtained from its predecessor by applying a production in P (a reduction operator in A), and the last element is a string over V_T (a solution graph). Given a string history Σ_G = (σ_0, . . . , σ_t), let S_γ = (s_0, . . . , s_t) be the sequence of psg's such that s_0 = S, and


each s_i is obtained from s_{i-1} by applying the reduction operator corresponding to the production applied to σ_{i-1} in Σ_G. S_γ is a psg history. In fact, the tip node string of s_i is σ_i. Since σ_t is a string of terminal symbols, s_t is a solution graph. Moreover, an analogous correspondence maps S_γ onto Σ_G. Therefore, a one-to-one correspondence between string histories and psg histories is derived from the correspondence between G and γ. Although, in general, several different psg histories terminating with a solution graph s may exist, they differ only as to the order of reduction operator applications. Hence, we consider them to be equivalent, and their equivalence class is s. We can thus associate to any string history a solution graph, which can be considered a "derivation graph" of the last string of the history. Note that several string histories are represented by the same derivation graph. All such string histories can be considered to be equivalent because they differ as to the order of production applications only.

Let us consider an example. The GAG shown in Fig. 1 is equivalent to the following grammar.

V_N = {S, A, B, C, D};

V_T = {a, b, c, d, e, f, g}.

P consists of the following productions.
(1) S → ABC,
(2) AB → DB,
(3) BC → ab,
(4) aC → ac,
(5) Da → da,
(6) DB → ea,
(7) DB → fg,
(8) Df → f,

(9) Aa → Da,
(10) Af → Df.

Finally, the start symbol is S. The arcs of the GAG in Fig. 1 are meant to be ordered as follows. The arcs directed to an AND node are ordered clockwise, while the outcoming arcs are ordered counterclockwise. The optimal solution graph is a derivation of the string eac of the language generated by the grammar.

Because of the one-to-one correspondence existing between solution graphs of ordered GAGs and string derivations of type 0 grammars, well known results concerning type 0 grammars can be appropriately applied to GAGs. For instance, the set of solutions of a finite GAG is recursively denumerable, and it is recursive if the GAG corresponds to a type 1 grammar. On the other hand, GAGs provide a structure which can be exploited to devise efficient algorithms for grammars.

REFERENCES
1. Chang, C. L. and Slagle, J. R. An admissible and optimal algorithm for searching AND/OR graphs. Artificial Intelligence 2 (1971), 117-128.
2. Fikes, R. E. and Nilsson, N. J. STRIPS: A new approach to the application of theorem proving to problem solving. Proc. 2nd International Joint Conference on Artificial Intelligence

(1971), 608-620.
3. Hall, P. A. V. Equivalence between AND/OR graphs and context-free grammars. Comm. ACM 16 (1973), 444-445.
4. Hart, P., Nilsson, N. J. and Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. SSC 4 (1968), 100-107.


5. Hewitt, C. Description and theoretical analysis (using schemata) of PLANNER: A language for proving theorems and manipulating models in a robot. AI Memo No. 251, MIT Project MAC (April 1972).
6. Levi, G. and Sirovich, F. Some remarks on the equivalence between AND/OR graphs and context-free grammars. IEI Internal Report B73-12, IEI, Pisa, Italy (October 1973).
7. Levi, G. and Sirovich, F. A problem-reduction model for non-independent subproblems. Proc. 4th International Joint Conference on Artificial Intelligence (1975), 340-344.
8. Martelli, A. and Montanari, U. Additive AND/OR graphs. Proc. 3rd International Joint Conference on Artificial Intelligence (1973), 1-11.
9. Montangero, C., Pacini, G. and Turini, F. Two-level control structures for nondeterministic programming. Nota Interna B74-37, Istituto di Elaborazione della Informazione, Pisa, Italy (October 1974).
10. Nilsson, N. J. Searching problem-solving and game-playing trees for minimal cost solutions. Proc. IFIP Congress (1968), 125-130.
11. Nilsson, N. J. Problem Solving Methods in Artificial Intelligence. McGraw-Hill, New York (1971).
12. Rulifson, J. F., Derksen, J. A. and Waldinger, R. J. QA4, a procedural calculus for intuitive reasoning. SRI AI Center Technical Note 73 (November 1972).
13. VanderBrug, G. J. and Minker, J. State-space, problem-reduction and theorem proving: some relationships. Comm. ACM 18 (1975), 107-115.

Received June 1974; revised version received October 1975
