Artificial Intelligence 48 (1991) 143-170 Elsevier
Constraint relaxation may be perfect*

Ugo Montanari and Francesca Rossi
Dipartimento di Informatica, Università di Pisa, Corso Italia 40, I-56100 Pisa, Italy

Received April 1989; revised February 1990
Abstract

Montanari, U. and F. Rossi, Constraint relaxation may be perfect, Artificial Intelligence 48 (1991) 143-170.

Networks of constraints are a simple knowledge representation method, useful for describing those problems whose solution is required to satisfy several simultaneous constraints. The problem of solving a network of constraints with finite domains is NP-complete. The standard solution technique for such networks of constraints is backtrack search, but many relaxation algorithms, to be applied before backtracking, have been developed: they transform a network into an equivalent but more explicit one. The idea is to make the backtrack search have a better average time complexity: if the network elaborated by the backtrack algorithm is more explicit, the algorithm backtracks less. In this paper we describe relaxation algorithms as sequences of applications of relaxation rules. Moreover, we define perfect relaxation algorithms as relaxation algorithms which not only return a more explicit network, but also exactly solve the given network of constraints by applying every relaxation rule only once. Finally, we characterize a family of classes of networks on which certain perfect relaxation algorithms are very efficient: the exact solution of each network in a class is found in linear time.
1. Introduction

Networks of constraints are very important in AI since, for instance, problems like description of physical systems, scene interpretation and specification of software can often be naturally expressed as networks of constraints. Networks of constraints (also called "constraint satisfaction problems") are described here as labelled graphs. Nodes in the network represent variables to be assigned, while arcs (i.e. hyperarcs connected to one, two or more variables) are constraints to be satisfied by the adjacent variables; constraints

* Research supported in part by Selenia S.p.A., Roma.
connected to k variables are labelled by relations specifying the acceptable k-tuples of values of the variables. A solution of a network of constraints is defined as an assignment of values to some (say k) connection variables which can be extended to the remaining (n − k) variables so as to obtain an assignment of all the n variables in the network which satisfies all the constraints. There is a distinct connection arc which is adjacent to the connection variables. A network of constraints is solved if the label of its connection arc coincides with the set of its solutions. Note that this notion of network of constraints, where a particular arc is selected, is not restrictive. Moreover, it is convenient for defining networks generated through arc-rewriting productions. Similar notions have been introduced in [1, 7]. Note also that in this paper we only consider problems with finite domains, while in general a constraint satisfaction problem (CSP) can have variables with infinite domains. Typical examples are problems dealing with time points or intervals and, in general, problems expressing constraints over the integer or real numbers. Unfortunately, both the problem of finding one solution and that of finding all the solutions of a network of constraints are NP-complete. This means that solution algorithms for general networks of constraints have exponential worst-case behaviors. But some solution algorithms can have acceptable performances in certain cases. For example, instantiating all the variables in all the possible ways in order to find those instantiations satisfying all the constraints (the generate-and-test algorithm) will always be exponential (in the best, in the average and in the worst case). On the contrary, the backtrack algorithm, which is the standard solution algorithm for networks of constraints, is linear in the best case and can be acceptable on the average.
More precisely, the more explicit the constraints of the network to be solved are (i.e. the better they reflect the global constraint induced by the whole network), the better the backtrack algorithm works, since blind alleys are discovered earlier. For this reason, many relaxation algorithms have been developed: given a network, they make some local transformations on it, and return a new, more explicit network. In this paper a relaxation algorithm is defined as the application of some relaxation rules in some order, until no more changes can be made (we have reached the stable network). Each relaxation rule computes the solution of a subnetwork and possibly modifies, according to this solution, a constraint of the network, making it more restrictive. As an example, the arc-consistency algorithm [8] works on binary networks, where only unary constraints R_i (specifying the domains of variables) and binary constraints R_ij exist. Its relaxation rules state that a value v_i in the domain of variable x_i is arc-consistent iff, for all x_j, there exists a value v_j in the domain of x_j such that the constraint R_ij between x_i and x_j, R_i and R_j are all
satisfied by the pair (v_i, v_j). If not, R_i is modified by eliminating v_i. This rule is applied to all variables and, for each variable, to all arcs adjacent to it, until all values in all domains are arc-consistent. Another well-known relaxation algorithm is the path-consistency algorithm [8, 12]. It works on complete binary networks and its relaxation rules state that a pair of values v_i and v_j (for two variables x_i and x_j respectively) is path-consistent iff, for each third variable x_k, there exists a value v_k such that these three values satisfy R_ij, R_ik, R_k and R_kj. Pairs (v_i, v_j) which are not path-consistent are erased from R_ij, if present. This rule is applied to all arcs (x_i, x_j) and, for each arc, to all variables x_k, until all pairs of values in all arcs are path-consistent. Dechter and Pearl [4] propose to choose the order in which the variables of the CSP will be instantiated (by the backtracking search) before the application of any relaxation algorithm. In this way, we can use a "directional" version of a relaxation algorithm, which produces the same effects as its more general version but is obviously more efficient. For example, in the directional version of arc-consistency, an arc is able to "transmit" the constraint of one connected variable to another only in one direction. They define directional arc-consistency and directional path-consistency, where the direction of each arc is the reverse of the order imposed by the backtracking search on its extremes. In this framework, they also prove that a CSP with a tree structure can be completely solved by the directional arc-consistency algorithm, i.e. that the backtracking search on such a preprocessed CSP never backtracks. Freuder [5, 6] extends both arc- and path-consistency algorithms, giving an algorithm for achieving (strong and weak) k-consistency on networks of constraints. In his notation, arc-consistency can be seen as strong 2-consistency and path-consistency as strong 3-consistency.
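As an illustration of the arc-consistency rule just described, the following Python sketch (our own encoding, not the paper's notation: domains as sets, a binary constraint R_ij as a set of allowed pairs) removes from the domain of x_i the values with no support in the domain of x_j:

```python
def revise(domains, constraint, xi, xj):
    """One arc-consistency relaxation rule: delete from the domain of xi
    every value v_i that has no value v_j in the domain of xj with
    (v_i, v_j) in the binary constraint. Returns True iff it changed."""
    removed = {vi for vi in domains[xi]
               if not any((vi, vj) in constraint for vj in domains[xj])}
    domains[xi] -= removed
    return bool(removed)

# Example: R_xy is "x < y" over the domains {1, 2, 3}
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
r_xy = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b}
revise(domains, r_xy, "x", "y")   # removes 3 from the domain of x
```

Repeating `revise` over every arc, in both directions, until no domain changes is the arc-consistency loop; the directional version of [4] applies it only one way along each arc.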
While the arc-consistency algorithm is linear in the number of arcs of the network [9, 11] and the path-consistency algorithm is cubic in the number of variables [11], the k-consistency algorithm is exponential in k. Seidel [18] gives an algorithm for finding all the solutions of a binary network of constraints. It is inspired by dynamic programming and eliminates at each step one variable of the problem by solving the subproblem involving this variable and all its adjacent variables. Its time complexity is O(m(#D)^(f+1)), where m is the number of arcs in the network, D is the domain of each variable, and f is an integer depending on the structure of the problem and on the sequence in which the variables are eliminated. In this paper, we develop a convenient theory for representing networks with k-ary constraints and general relaxation rules. We define relaxation algorithms in a general way and show that the stable network depends only on the set of rules and not on their order of application. The set of rules used by a relaxation algorithm is called adequate if the stable network is solved, and
perfect if it is adequate and the stable network can be achieved by applying the rules only once, in an order called a perfect strategy. Perfect strategies are thus very convenient, provided that the rules are few and small. We also characterize particular classes of networks and relaxation rules of bounded complexity such that the number of rule applications needed by any network in the class is less than or equal to the number of arcs in the network. Thus the relaxation algorithm has linear complexity for every such class, even if the linearity coefficient may vary from class to class. It is shown that every class can be generated by a graph production system (a kind of graph grammar) and that any derivation of a network C in the system provides a perfect strategy for C. This paper does not examine the problem of finding a derivation of a given network in a given graph production system, or of finding the best derivation in the best production system for a given graph. We have two reasons for this omission. The first, technical, is that searching for a derivation is a kind of parsing problem; thus it is rather different from those examined in the paper. The second is that in those cases where the constraint model is generated by a problem decomposition methodology, the derivation is usually already available. However, for the interested reader, some of the techniques used to solve this problem can be found in the literature on the secondary optimization problem of dynamic programming [2]. In Section 2 we define networks of constraints in terms of connection graphs. In Section 3 we define a general relaxation algorithm, with its fundamental properties, and in Section 4 we propose a more efficient version of it, which, in particular cases, yields well-known practical algorithms. Section 5 defines perfect relaxation algorithms as a special case and introduces graph production systems.
Finally, Section 6 concludes the paper summarizing the results and discussing their relationship with logic programming and metaprogramming. The link between networks of constraints and graph grammars was recognized for the first time in [13]. An initial, reduced version of this paper appeared in [17].
2. Networks of constraints
In this section we introduce connection hypergraphs and networks of constraints. For a wider review of the literature on networks of constraints, and for a more detailed discussion of networks of constraints as hypergraphs, see [14]. In this paper we will deal heavily with hypergraphs and hyperarcs, which are a natural generalization of graphs and arcs respectively. Hyperarcs may connect one, two or more than two nodes. From now on we will use the word graph to denote both graphs in their normal definition and hypergraphs, and the word arc to denote both arcs and hyperarcs.
Definition 2.1 (Connection graphs). A connection graph (N, A, a, c) consists of:
• a set of nodes N;
• a set of arcs A; A = ∪_k A_k is a ranked set, i.e. h ∈ A_k implies that h is an arc connected to k nodes;
• a connection arc a ∈ A;
• a connection function c: ∪_k (A_k → N^k), where c(h) = (x_1, ..., x_k) is the tuple of nodes connected by h, h ∈ A_k, with x_i ≠ x_j when i ≠ j.

If c(a) = (x_1, ..., x_k), A − {a} = {a_1, ..., a_m} and c(a_i) = (x_1^(i), ..., x_k(i)^(i)) for i = 1, ..., m, then we will represent the connection graph (N, A, a, c) writing

a(x_1, ..., x_k) ← a_1(x_1^(1), ..., x_k(1)^(1)), ..., a_m(x_1^(m), ..., x_k(m)^(m)),
or simply a ← G with G = (N, A, c). In the above definition, the variables connected by the same hyperarc must be distinct. This condition is not restrictive, since in the case of a variable connected more than once (say n times) by a hyperarc, it is possible to create n − 1 new variables, plus an identity constraint connecting them and the original variable. The hyperarc then connects the original variable plus the new ones. As an example, let us consider the simple connection graph

a(x, y, w, z) ← b(x, y), c(y, z), d(z, w), e(w, x),
which can be seen in Fig. 1. A binary arc is represented as usual with a line. A hyperarc of arity k is a box plus k lines between the box and the connected variables. The order in which the nodes are connected is expressed by an arrow for binary arcs and by numbers 1, ..., k for arcs of arity k > 2. In the following we will omit both arrows and numbers when the order is not significant, or when the spatial disposition of nodes suggests the right order.

Definition 2.2 (Subgraphs). Given two connection graphs a ← G and b ← F, b ← F is a subgraph of a ← G if:
• A_F ⊆ A_G (remember that b ∈ A_F);
Fig. 1. A simple connection graph.
• N_F ⊆ N_G;
• c_F coincides with c_G on A_F.

Definition 2.3 (Networks of constraints). A network of constraints C = a ← G | l
is a finite connection graph a ← G, where nodes are called variables and arcs are called constraints, plus a labelling function l: ∪_k (A_k → ℘(U^k)), where U is a finite set of values for the variables of C and ℘(U^k) is the set of all the k-ary relations on U.

Definition 2.4 (Solution of a network of constraints). Given a network of constraints C = a ← G | l, where G = (N, A, c) and n = #N, let us consider any ordering of the variables of N, say (x_1, ..., x_n). Moreover, given an n-tuple v = (v_1, ..., v_n) of values of U, let us set v|_(x_i1, ..., x_im) = (v_i1, ..., v_im). Then, we define the solution of network C as

Sol(C) = { (v_1, ..., v_n)|_c(a) such that, for all b ∈ A, (v_1, ..., v_n)|_c(b) ∈ l(b) }.
In words, the solution of a network of constraints is the set of all the assignments of the variables connected by the connection arc such that every such assignment can be extended to an assignment of all the variables in N which satisfies all the constraints in A. An obvious, exhaustive search algorithm to obtain the solution Sol(C) of a network of constraints C = a ← G | l is as follows.
Algorithm ES.
• Initialize Sol(C) to the empty set.
• For every possible assignment v of values to all the variables in N: if, for every constraint b in A, v|_c(b) ∈ l(b), then add v|_c(a) to Sol(C).

The worst-case time complexity of this algorithm can be estimated to be of the order of #A · #N · (#U)^#N. In fact, the number of possible assignments of the variables on U is (#U)^#N; for each assignment, #A constraints are verified, and each check involves, in the worst case, #N variables.

Definition 2.5 (Solved networks of constraints). A network of constraints
C = a ← G | l is solved iff the label of its connection arc is equal to its solution, i.e. l(a) = Sol(C). Using Algorithm ES, an obvious way of solving a network of constraints C = a ← G | l is obtaining Sol(C) with this algorithm and then setting l(a) to Sol(C).
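Algorithm ES translates almost literally into Python. The concrete encoding below (constraints as (scope, relation) pairs, arcs named by strings) is our own illustration, not the paper's notation:

```python
from itertools import product

def solve_es(variables, U, constraints, connection):
    """Algorithm ES: enumerate every assignment of the variables over U and
    keep, projected on the connection arc, those satisfying all constraints.
    `constraints` maps an arc name to (scope tuple, set of allowed tuples)."""
    sol = set()
    for values in product(sorted(U), repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(tuple(v[x] for x in scope) in rel
               for scope, rel in constraints.values()):
            sol.add(tuple(v[x] for x in connection))
    return sol

# Toy network: b(x, y) and c(y, z) both labelled "different", U = {0, 1},
# connection arc a(x, z) with the full label U^2 (so it filters nothing)
neq = {(0, 1), (1, 0)}
constraints = {"b": (("x", "y"), neq), "c": (("y", "z"), neq)}
solve_es(("x", "y", "z"), {0, 1}, constraints, ("x", "z"))   # {(0, 0), (1, 1)}
```

The cost matches the estimate above: the loop enumerates (#U)^#N assignments and checks #A constraints on each.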
In fact, let us introduce the notation l[b/a], where l: A → B is a total function, a ∈ A and b ∈ B, for indicating the new function l' such that l'(a) = b, and l'(c) = l(c) for all c ∈ A, c ≠ a. Now, the network C_sol = a ← G | l[Sol(C)/a] is a solved network, and

Sol(C_sol) = Sol(C).
3. The general relaxation algorithm

In this section we give a general framework for describing relaxation algorithms. We obtain a scheme which can be instantiated to particular, practical relaxation algorithms, and we point out properties which hold for every instantiation. First of all we define relaxation rules, which are the basic mechanisms of relaxation algorithms.
Definition 3.1 (Relaxation rules). Given a network of constraints C = a ← G | l, a relaxation rule is any subgraph r = b ← F of a ← G. Applying the relaxation rule r to C produces a new network of constraints

Relax(C, r) = Relax(a ← G | l, b ← F) = a ← G | l[Sol(b ← F | l)/b].

In words, applying a relaxation rule b ← F to C = a ← G | l means solving b ← F | l and then setting l(b) to the solution of b ← F | l.
Theorem 3.2 (A relaxation rule returns an equivalent network). Given a network of constraints C = a ← G | l and a relaxation rule b ← F, we have:

Sol(a ← G | l) = Sol(a ← G | l[Sol(b ← F | l)/b]),

or equivalently Sol(C) = Sol(Relax(C, r)).

Proof. Let us set

C' = a ← G | l[Sol(b ← F | l)/b] = a ← G | l'.

Then C' = C except for the label of arc b. In fact, l'(b) = Sol(b ← F | l). This means that l'(b) ⊆ l(b), and so Sol(C') ⊆ Sol(C) too. Now, if we have an
n-tuple of values v such that v|_c(a) ∈ Sol(C), then v|_c(d) ∈ l(d) for all d ∈ A, and in particular for all d of F, so v|_c(b) ∈ Sol(b ← F | l). This means, by definition of l', that v|_c(b) ∈ l'(b), thus v|_c(d) ∈ l'(d) for all d ∈ A, and so v|_c(a) ∈ Sol(C'). So, Sol(C) ⊆ Sol(C'). In conclusion, we have Sol(C') ⊆ Sol(C) and Sol(C) ⊆ Sol(C'), and thus we can deduce Sol(C) = Sol(C'). □

As a trivial example of application of a relaxation rule, let us consider the network of constraints a ← G | l and the relaxation rule a ← G. The network Relax(a ← G | l, a ← G) = a ← G | l[Sol(a ← G | l)/a] is obviously solved.

Definition 3.3 (Stable network). Given a set R of relaxation rules, a stable network with respect to R is a network C such that, for all r in R, r applied to C returns C.

Lemma 3.4 (Stability and relaxation rules). Given a network of constraints C = a ← G | l and a relaxation rule r = b ← F, the network C' = Relax(C, r) = a ← G | l[Sol(b ← F | l)/b] is stable with respect to the singleton set R = {r} = {b ← F}.

Proof. Let us set

C'' = a ← G | l'' = a ← G | l'[Sol(b ← F | l')/b] = a ← G | l[Sol(b ← F | l)/b][Sol(b ← F | l')/b].

Now, C'' and C' differ only on the label of b. But

l''(b) = Sol(b ← F | l') = Sol(b ← F | l[Sol(b ← F | l)/b])

and l'(b) = Sol(b ← F | l), which coincide by Theorem 3.2. Thus, C' is stable with respect to R. □
If we consider relaxation rules as unary operators on networks of constraints,
then the previous lemma states that they are idempotent. It is also very easy to see that they are associative but not commutative.
Definition 3.5 (Strategies). Given a set R of relaxation rules, a strategy for R is a string S ∈ R* ∪ R^ω. An infinite strategy S is fair if each rule of R occurs in S infinitely often.
Definition 3.6 (Relaxation algorithms). Given a network of constraints C = a ← G | l, a set R of relaxation rules, and a strategy S for R which is either finite or fair, a relaxation algorithm applies to C the rules appearing in S until
• a stable network (with respect to R) is obtained, or
• S terminates (in this case, the network obtained is more explicit than C but may not be stable with respect to R).

A PASCAL-like description of a relaxation algorithm which receives as input a network of constraints C, a set R = {r_1, ..., r_n} of relaxation rules, and a strategy S = (s_1, s_2, ...) for R, is given below.
Relaxation Algorithm RA(C, R, S);
begin
  **mark all the rules in R**;
  i := 1;
  while i <= #S do
  begin
    if **s_i is marked** then
    begin
      **unmark s_i**;
      if C ≠ Relax(C, s_i) then
      begin
        **mark all the rules in R**;
        C := Relax(C, s_i)
      end
      else
      begin
        if **all the rules of R are unmarked** then return "stable";
      end;
    end;
    i := i + 1;
  end;
  return "possibly unstable";
end.
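The pseudocode above can be rendered in executable form. The following Python sketch is our own illustration: a rule is a pair (head arc, set of body arcs), `relax` computes Sol(b ← F | l) by brute force on the subnetwork (including the head's current label, since b belongs to its own subgraph), and `ra` reproduces the marking discipline:

```python
from itertools import product

def relax(labels, U, rule):
    """Relax(C, r): replace the label of the rule's head arc by the solution
    of the subnetwork formed by the head and body arcs. True iff it changed.
    `labels` maps an arc name to (scope tuple, relation set)."""
    head, body = rule
    arcs = body | {head}
    variables = sorted({x for a in arcs for x in labels[a][0]})
    head_scope, head_rel = labels[head]
    sol = set()
    for values in product(sorted(U), repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(tuple(v[x] for x in labels[a][0]) in labels[a][1] for a in arcs):
            sol.add(tuple(v[x] for x in head_scope))
    if sol != head_rel:
        labels[head] = (head_scope, sol)
        return True
    return False

def ra(labels, U, rules, strategy):
    """Algorithm RA: apply the rules in the order given by the strategy,
    using marks to detect stability early."""
    marked = set(rules)
    for s in strategy:
        if s in marked:
            marked.discard(s)
            if relax(labels, U, rules[s]):
                marked = set(rules)        # a change re-marks every rule
            elif not marked:
                return "stable"
    return "possibly unstable"

# Unary arcs Dx, Dy as domains, binary arc "axy" labelled "x < y", U = {1, 2, 3}
U = {1, 2, 3}
labels = {"Dx": (("x",), {(1,), (2,), (3,)}),
          "Dy": (("y",), {(1,), (2,), (3,)}),
          "axy": (("x", "y"), {(a, b) for a in U for b in U if a < b})}
rules = {"r1": ("Dx", {"axy", "Dy"}), "r2": ("Dy", {"axy", "Dx"})}
ra(labels, U, rules, ["r1", "r2", "r1", "r2"])   # "stable"
```

After the run, the label of Dx has shrunk to {(1,), (2,)} and that of Dy to {(2,), (3,)}, the arc-consistent domains for x < y.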
Note that the marks are used for recording the rules of R which have to be applied (because they might produce some changes in the network). The marking of all the rules in R when one of the rules changes the current network is a naive solution that we will improve in the next section. Let us now give some properties of Algorithm RA. Theorem 3.7 (Properties of Algorithm RA). Given a network of constraints
C = a ← G | l, a set R of relaxation rules, and a finite or fair strategy S for R, let us call C' the network computed by the relaxation algorithm. We have:
(i) Algorithm RA always terminates;
(ii) if Algorithm RA returns "stable", then C' is stable;
(iii) if S is infinite fair, then C' is stable;
(iv) Sol(C') = Sol(C).

Proof. (i) S is either finite or infinite fair. If S is finite, then Algorithm RA terminates, because the only loop of the algorithm depends on #S. If S is infinite and fair, let us consider the partial order ≤ on labelling functions for the graph G, defined as l_1 ≤ l_2 iff l_2(h) ⊇ l_1(h) for all h. Also, let us denote by H the set of all labelling functions on G. Now, (H, ≤) is well-founded because of the finiteness of G and U. Moreover, let S_i be the suffix of S where the first i rules have been eliminated. It is easy to see that, since S is fair, S_i is fair for all i. Strategy S_i can be written in a unique way as the concatenation α_r,i r β_r,i for every rule r, where the sequence α_r,i does not contain r. Let k_i = max_r(#α_r,i) + 1. Now, k_i is obviously finite, and for all i, if C is the current network at step i, after at most k_i steps either Algorithm RA terminates or the current network will be C' strictly smaller than C. So the number of steps of Algorithm RA between one change and another of the network C is always finite. The number of changes must be finite due to the well-foundedness of (H, ≤), so we can conclude that Algorithm RA terminates in a finite number of steps.
(ii) If Algorithm RA returns "stable", this means that all the rules of R are unmarked at the time of termination. So, no rule can change the network. Thus the network is stable.
(iii) If S is infinite fair, then Algorithm RA cannot return "possibly unstable" because it cannot exit the loop. So, if Algorithm RA terminates (and we know it from (i)), it returns "stable", and thus computes a stable network because of (ii).
(iv) This part immediately follows by repeated applications of Theorem 3.2. □
Note that the only way for Algorithm RA to return "possibly unstable" is when S is finite. In fact it can happen that the strategy is over but the current network is not yet stable.
We will now prove that, in the presence of any infinite fair strategy, the network obtained by Algorithm RA does not depend on the given strategy.

Theorem 3.8 (C' does not depend on S). Given a network of constraints C and a set R of relaxation rules, let us consider any two infinite and fair strategies S' and S'' for R. Then algorithms RA(C, R, S') and RA(C, R, S'') compute the same network, which will be called closure(C, R).

Proof. First we need some intermediate results.
• Improvement property. Let us consider the partial order (H, ≤) introduced in the proof of Theorem 3.7. As noticed there, each relaxation rule applied to a network with labelling function l returns a new labelling function l' such that l' ≤ l.
• Monotonicity property. If we have two labelling functions l_1 and l_2 such that l_1 ≤ l_2, and if we apply to both of them the same relaxation rule, we obtain two new labelling functions l'_1 and l'_2 such that l'_1 ≤ l'_2.

As noticed in the proof of Theorem 3.7, any infinite fair strategy S produces a chain {l_i} of labelling functions, where l_0 is l (the labelling function of C) and which has a limit labelling function l_lim reachable in a finite number of steps. Let us now consider two infinite fair strategies S' and S'' and the corresponding chains of labelling functions {l'_i} and {l''_i}. It can be shown that for each l'_i there exists j(i) ≥ i such that l''_j(i) ≤ l'_i. In fact, let us give an inductive proof. We have j(0) = 0 since l''_0 = l'_0 = l (the labelling function of the given network C). If we are at step i and we have l'_i ≥ l''_j(i), we apply step i + 1 (i.e. the relaxation rule s'_{i+1}) to l'_i obtaining l'_{i+1}, and we want to show that there exists an index j(i+1) ≥ i + 1 such that l''_j(i+1) ≤ l'_{i+1}. Let us take j(i+1) as the first step in S'', after step j(i), such that rule s'_{i+1} has been applied, i.e. s'_{i+1} = s''_j(i+1). Such a step exists since S'' is fair. Then, l''_{j(i+1)−1} ≤ l'_i because, by induction, l''_j(i) ≤ l'_i, and l''_{j(i+1)−1} ≤ l''_j(i) due to the improvement property of the relaxation rules. Thus, l''_j(i+1) ≤ l'_{i+1} due to the monotonicity property of the relaxation rules (in this case applied to rule s'_{i+1} = s''_j(i+1)). The converse is also true: for each l''_j there exists i(j) ≥ j such that l'_i(j) ≤ l''_j. Thus both chains {l'_i} and {l''_i} have the same limit. □
Let us now define two particularly convenient sets of relaxation rules.

Definition 3.9 (Adequate sets of rules). Given a network of constraints C and a set R of relaxation rules, R is adequate for C iff closure(C, R) is solved.

From Theorem 3.2 and Definition 3.9, it follows that, given a network of constraints C and an adequate set R of relaxation rules, the label of the connection arc in closure(C, R) is the solution of C.
Definition 3.10 (Perfect sets of rules). Given a network of constraints C, a set R of relaxation rules, and a finite strategy S for R which is a total ordering of R, R is perfect for C iff the network computed by Algorithm RA(C, R, S) is solved. S is called a perfect strategy for R. Algorithm RA, together with S, is called a perfect relaxation algorithm. Thus, if S is a perfect strategy for R, then S is a sequence containing each rule of R exactly once. Note that Algorithm RA, when applied to a perfect set of rules R and a perfect strategy S, can be simplified and reduced to the sequential application of all the rules of R to C in the sequence given by S.
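In executable form, this simplification is a single left-to-right pass over S. The sketch below is schematic (the `relax` parameter stands for any implementation of Relax(C, r)); the stub in the example only records the order of application:

```python
def ra_perfect(C, strategy, relax):
    """Perfect relaxation: apply each rule exactly once, in the order
    fixed by the perfect strategy S. If R is perfect for C, the
    returned network is solved."""
    for r in strategy:
        C = relax(C, r)
    return C

# Stub check of the control flow: every rule fires once, in order
applied = []
ra_perfect(None, ["r1", "r2", "r3"], lambda C, r: applied.append(r) or C)
applied   # ["r1", "r2", "r3"]
```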
Theorem 3.11 (Algorithm RA perfect is linear in #R). Given a network of constraints C, a set R of relaxation rules, and a perfect strategy S for R, Algorithm RA(C, R, S) has a worst-case time complexity

O( Σ_{i=1,...,#R} #A_i · #N_i · (#U)^{#N_i} ),

where #N_i and #A_i are the numbers of variables and arcs in rule r_i respectively.

Proof. Obvious from the definition of Algorithm RA, which applies the rules in S (they are as many as the rules in R) and then stops. □
4. Special algorithms

The literature on relaxation algorithms shows several cases in which the strategy is not explicitly given but is built during the execution, thus avoiding useless steps (see [4, 5, 8, 11, 12, 18] for some examples of such relaxation algorithms). However, such flexible algorithms need a formal justification, which may be provided by our formal results concerning the termination of any relaxation algorithm on any fair strategy (Theorem 3.7) and the independence of the network obtained by a relaxation algorithm from the given strategy (Theorem 3.8). Thus it is clear that, for efficiency reasons, we can choose any particular, convenient infinite fair strategy. Therefore we now give a new, more efficient version of the general relaxation algorithm, called Algorithm ER, whose parameters are only the network C and the set R of relaxation rules. Its strategy is dynamically constructed during the execution by adding to a set, whenever the current rule r makes some changes to C, those rules which are adjacent to r.
Relaxation Algorithm ER(C, R);
begin
  Q := R;
  while **Q not empty** do
  begin
    **choose a rule r in Q**;
    Q := Q − {r};
    if C ≠ Relax(C, r) then
    begin
      C := Relax(C, r);
      Q := Q ∪ adj(r);
    end;
  end;
end.

Function adj is defined as adj(b ← F) = {b' ← (N', A', c') ∈ R | b ∈ A'}. Note that Algorithm ER always terminates computing a stable network, which, due to Theorem 3.8, coincides with closure(C, R). The algorithm stops the construction of S (and its execution) only when all the rules in the current set Q have been used without producing any change in C. Instantiating parameter R of Algorithm ER we can obtain different practical relaxation algorithms. Here we will show this instantiation in two cases extensively described in the literature: the arc- and path-consistency relaxation algorithms. Consider the arc-consistency algorithm developed by Mackworth [8] for binary networks of constraints; in our notation, this means that the network C = a ← G | l is such that G contains all unary constraints a_i, with c(a_i) = (x_i), plus some binary constraints a_ij, with c(a_ij) = (x_i, x_j). The relaxation rules are of the form

a_i(x_i) ← a_ij(x_i, x_j), a_j(x_j)   for i, j = 1, ..., #N.
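Algorithm ER can be sketched as a worklist algorithm. As before, the encoding (rules as (head, body-arc set) pairs, `relax` solving the subnetwork by brute force) is our own illustration, not the paper's notation:

```python
from itertools import product

def relax(labels, U, rule):
    """Set the head arc's label to the solution of the subnetwork formed
    by the head and body arcs. `labels` maps an arc name to
    (scope tuple, relation set). Returns True iff the label changed."""
    head, body = rule
    arcs = body | {head}
    variables = sorted({x for a in arcs for x in labels[a][0]})
    scope, old = labels[head]
    sol = set()
    for values in product(sorted(U), repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(tuple(v[x] for x in labels[a][0]) in labels[a][1] for a in arcs):
            sol.add(tuple(v[x] for x in scope))
    labels[head] = (scope, sol)
    return sol != old

def adj(rules, r):
    """adj(b <- F): the rules whose subnetwork contains the head arc b of r."""
    b = rules[r][0]
    return {r2 for r2, (h2, body2) in rules.items() if b in body2 | {h2}}

def er(labels, U, rules):
    """Algorithm ER: the strategy is built dynamically in the worklist Q."""
    Q = set(rules)
    while Q:
        r = Q.pop()
        if relax(labels, U, rules[r]):
            Q |= adj(rules, r)      # re-enqueue the rules adjacent to r
    return labels

# Arc-consistency-style rules on a single "x < y" arc over U = {1, 2, 3}
U = {1, 2, 3}
labels = {"Dx": (("x",), {(1,), (2,), (3,)}),
          "Dy": (("y",), {(1,), (2,), (3,)}),
          "axy": (("x", "y"), {(a, b) for a in U for b in U if a < b})}
er(labels, U, {"r1": ("Dx", {"axy", "Dy"}), "r2": ("Dy", {"axy", "Dx"})})
```

Whatever order `Q.pop()` chooses, the fixpoint is the same network, as guaranteed by Theorem 3.8.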
The general relaxation rule for the arc-consistency algorithm can be seen in Fig. 2 (the connection hyperarc is represented by a thicker line). Let us now consider the path-consistency algorithm developed by the first author in [12]. It was written for complete binary networks of constraints, thus G is a complete binary graph containing all the binary constraints a_ij plus all the unary constraints a_i. The rules are of the form

a_ij(x_i, x_j) ← a_ik(x_i, x_k), a_k(x_k), a_kj(x_k, x_j)   for i, j, k = 1, ..., #N.

Fig. 2. The relaxation rule for the arc-consistency algorithm.
The application of this rule can also be seen as the multiplication of three boolean matrices, each representing one of the constraints. The general relaxation rule for the path-consistency algorithm can be seen in Fig. 3. It is clear that these sets of rules are very convenient to use. However, both arc- and path-consistency relaxation rules are in general not adequate. As an example of a network of constraints for which the arc-consistency rules are not adequate, let us consider the network in Fig. 4, where all arcs are labelled with the non-identity relation and the domain of the variables is U = {0, 1}. It is easy to see that this network represents the problem of colouring a complete three-node graph with two colours. This problem has no solution, so the solution of the network is the empty relation as label of arc a. However, arc-consistency does not change anything, because each arc is already consistent (for each value for node x there always exists a value for node y such that the arc (x, y) is satisfied). On the contrary, in this case path-consistency is enough for solving the network; thus the path-consistency rules are adequate. More surprisingly, in Fig. 5 we see an example where neither the arc- nor the path-consistency algorithms change anything in the network, even if the network is not solved. In this case U = {1, 2, 3} and all arcs are labelled by the non-identity relation: it is the problem of colouring a complete four-node graph with three colours.
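The inadequacy claim for the two-colour example of Fig. 4 can be checked mechanically. The short Python check below (our own encoding) verifies that every domain value is arc-consistent while the network has no solution at all:

```python
from itertools import product

# K3 with two colours: x, y, z pairwise "different" over U = {0, 1}
variables = ("x", "y", "z")
dom = {v: {0, 1} for v in variables}
neq = {(0, 1), (1, 0)}
edges = [("x", "y"), ("y", "z"), ("x", "z")]

def arc_consistent(dom, edges):
    """Every value of every variable has a support on every adjacent arc."""
    directed = edges + [(j, i) for i, j in edges]
    return all(any((vi, vj) in neq for vj in dom[j])
               for i, j in directed for vi in dom[i])

def solutions(dom, edges):
    """All assignments satisfying every 'different' constraint."""
    return [t for t in product(*(sorted(dom[v]) for v in variables))
            if all((t[variables.index(i)], t[variables.index(j)]) in neq
                   for i, j in edges)]

arc_consistent(dom, edges)   # True: the arc-consistency rules change nothing
solutions(dom, edges)        # []: yet Sol(C) is empty
```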
Fig. 3. The relaxation rule for the path-consistency algorithm.
Fig. 4. Arc-consistency rules may not be adequate.
Fig. 5. Arc-consistency and path-consistency rules may both be inadequate.
Let us now see a network of constraints together with a set of perfect relaxation rules for it. Consider the network of constraints C = a ← G | l where a ← G is the following:
a(x, y) ← b_0(x, u_1), b_1(u_1, u_2), b_2(u_2, u_3), b_3(u_3, y),
          c_0(x, v_1), c_1(v_1, v_2), c_2(v_2, v_3), c_3(v_3, y),
          d_1(u_1, v_1), d_2(u_2, v_2), d_3(u_3, v_3),
          e_1(x, u_1, v_1), e_2(x, u_2, v_2), e_3(x, u_3, v_3)

and can be seen in Fig. 6. Function l is:

l(e_1) = l(e_2) = l(e_3) = U^3,  l(a) = U^2;

l(b_i), l(c_i), i = 0, ..., 3, and l(d_i), i = 1, ..., 3, are defined in some way. For this network of constraints, let us consider the following list of relaxation rules:

e_1(x, u_1, v_1) ← b_0(x, u_1), c_0(x, v_1),
e_{i+1}(x, u_{i+1}, v_{i+1}) ← e_i(x, u_i, v_i), b_i(u_i, u_{i+1}), c_i(v_i, v_{i+1}), d_i(u_i, v_i)   for i = 1, 2,
a(x, y) ← e_3(x, u_3, v_3), b_3(u_3, y), c_3(v_3, y), d_3(u_3, v_3).

It can be shown that this set of relaxation rules is adequate and, moreover, that the stable network can be obtained by applying each rule once, in the order in which they are written. Thus, such a list of relaxation rules is perfect. The same property holds for all the networks obtained by extending 3 to a generic n. Theorem 3.11 states that Algorithm RA applied to a perfect set of relaxation rules has a complexity linear in the number of rules. However, from the time
Fig. 6. The graph of a network of constraints.
complexity formula we can also deduce that it is convenient to use a perfect set of relaxation rules only if the subnetworks represented by each rule are small with respect to the size of the entire network. In the next section we will characterize certain families of networks for which it is possible to find a perfect set of relaxation rules with this property. A perfect strategy is provided together with the rules, so from now on we will consider only Algorithm RA.
5. Perfect relaxation problems and graph production systems

In this section we present a method for obtaining a perfect strategy for a given network. Some definitions and theorems support the central result, given by Corollary 5.7. Let us first define a notion of replacement of an arc (in a connection graph) with an entire new connection graph, whose connection arc has the same rank as the replaced arc.
Definition 5.1 (Replacement). Given two graphs a ← G and a' ← G', and an arc b such that b ∈ A, rank(b) = rank(a') and b ≠ a, the replacement of b with a' ← G' in a ← G is the new graph

a ← H = a ← G[a' ← G'/b]

where:
• N_H = (N ∪ N')/E, where c(a')_i E c(b)_i, i = 1, ..., rank(a') = rank(b);
• A_H = (A ∪ A')/L, where a' L b;
• c_H is the union of c and c'.
The notation Q/E denotes the set obtained from Q by taking the quotient with respect to the equivalence relation E, and (x_1 ... x_k)_i = x_i, 1 ≤ i ≤ k. Thus N_H contains all the nodes in N plus all the nodes in N', where the nodes in c(a') and those in c(b) are merged. The same holds for A_H. Fig. 7 below gives an example of a connection graph a ← G where arc e is replaced by the connection graph e' ← G'. The resulting graph is

a ← H = a ← G[e' ← G'/e].
Fig. 7. A replacement.
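A replacement in the style of Fig. 7 can be sketched in Python. Graphs are triples (nodes, arcs, c), with c mapping each arc name to the tuple of nodes it connects; the concrete graph shapes below are invented for illustration:

```python
def replace(graph, b, inner, a_prime):
    """Replace arc b with inner (in the style of Definition 5.1):
    arc a_prime is merged with b, and the nodes it connects are
    identified, position by position, with those connected by b."""
    nodes, arcs, c = graph
    nodes2, arcs2, c2 = inner
    assert len(c[b]) == len(c2[a_prime]), "ranks must agree"
    merge = dict(zip(c2[a_prime], c[b]))     # node of a' -> node of b
    ren = lambda n: merge.get(n, n)
    new_c = dict(c)                          # b survives, merged with a'
    for d in arcs2:
        if d != a_prime:
            new_c[d] = tuple(ren(n) for n in c2[d])
    new_nodes = set(nodes) | {ren(n) for n in nodes2}
    return new_nodes, set(new_c), new_c

# Replace arc e of a <- G with a graph whose connection arc is e2
# (shapes invented for illustration).
G  = ({'x', 'y', 'z'}, {'a', 'e'}, {'a': ('x', 'y'), 'e': ('y', 'z')})
G2 = ({'u', 'v', 'w'}, {'e2', 'f', 'g'},
      {'e2': ('u', 'v'), 'f': ('u', 'w'), 'g': ('w', 'v')})

nodes, arcs, c = replace(G, 'e', G2, 'e2')
print(sorted(arcs))        # ['a', 'e', 'f', 'g']
print(c['f'], c['g'])      # ('y', 'w') ('w', 'z')
```

Keeping the merged arc in the result matches the quotient construction of Definition 5.1, and is what later allows the replaced arc to be relabelled with the solution of the inner network.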
Let us now return to the networks of constraints, where the above notion of replacement is applied to the underlying graph. The following theorem expresses a basic property of networks of constraints.

Theorem 5.2 (Replacement and solution). Given a network of constraints C = a ← G[c ← H/b] | l, we have:

Sol(a ← G[c ← H/b] | l) = Sol(a ← G | l[Sol(c ← H | l)/b]).

Proof. Let l' = l[Sol(c ← H | l)/b] and a ← G' = a ← G[c ← H/b]. If we have an assignment v of all the variables of a ← G | l' such that v|c(a) ∈ Sol(a ← G | l'), then v|c(d) ∈ l'(d) for all d in A, and so v|c(b) ∈ l'(b) = Sol(c ← H | l). This means that v can be extended to an assignment w of all the variables of a ← G' such that w|c(d) ∈ l(d) for all d in A_H; in particular w|c(d) ∈ l(d) for all d in A', because l'(d) = l(d) for all d in A. So w|c(a) ∈ Sol(a ← G' | l) and thus

Sol(a ← G[c ← H/b] | l) ⊇ Sol(a ← G | l[Sol(c ← H | l)/b]).

Conversely, if we have an assignment v of all the variables of a ← G' | l such that v|c(a) ∈ Sol(a ← G' | l), then v restricted to the variables of c ← H satisfies all the constraints of c ← H | l, so v|c(b) ∈ Sol(c ← H | l) = l'(b); thus v restricted to the variables of a ← G satisfies all the constraints of a ← G | l', and so

Sol(a ← G[c ← H/b] | l) ⊆ Sol(a ← G | l[Sol(c ← H | l)/b]).  □
In words, Theorem 5.2 says that solving the network C = a ← G[c ← H/b] | l is equivalent to first solving the network C' = c ← H | l and then solving the network C'' obtained by setting, in the network a ← G | l, the label of arc b to the solution of C'. That is, replacement and solution satisfy a commutative property, in the sense that we can first replace and then solve or, equivalently, first solve and then replace. The latter alternative can be expected to be less expensive, because we solve two networks (C' and C'') of smaller size.
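This commutativity can be checked on a tiny instance. The following Python sketch (brute-force solver, invented variable and arc names) solves a two-arc chain first and substitutes its solution for the replaced arc, then compares with solving the full network at once:

```python
# Sketch of the commutativity of Theorem 5.2: solving a subnetwork first
# and substituting its solution for the replaced arc gives the same
# overall solution as solving the full network at once. Constraints are
# relations (sets of tuples over variable lists); names are illustrative.

from itertools import product

U = [0, 1]
neq = {(a, b) for a in U for b in U if a != b}

def solve(constraints, conn_vars, all_vars):
    """Brute force: project consistent total assignments onto conn_vars."""
    sols = set()
    for vals in product(U, repeat=len(all_vars)):
        env = dict(zip(all_vars, vals))
        if all(tuple(env[v] for v in vs) in rel for vs, rel in constraints):
            sols.add(tuple(env[v] for v in conn_vars))
    return sols

# Subnetwork c <- H: a chain x - u - y of non-identity constraints,
# with connection arc over (x, y).
sub = [(('x', 'u'), neq), (('u', 'y'), neq)]
sub_sol = solve(sub, ('x', 'y'), ['x', 'u', 'y'])

# Full network a <- G[c <- H/b]: the chain above plus a direct
# non-identity constraint between x and y.
full = sub + [(('x', 'y'), neq)]
direct = solve(full, ('x', 'y'), ['x', 'u', 'y'])

# Solve-then-replace: use sub_sol as the new label of the replaced arc.
replaced = [(('x', 'y'), sub_sol), (('x', 'y'), neq)]
two_step = solve(replaced, ('x', 'y'), ['x', 'y'])

print(direct == two_step)   # True
```

Here the chain forces x = y while the direct arc forces x ≠ y, so both orders of evaluation yield the empty relation.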
Theorem 5.2 suggests an efficient technique for solving a network of constraints. We first generate its graph by a sequence of replacements and then, starting from the last replacement, we solve every replaced network and substitute it with its solution. In what follows, we express this incremental technique as a relaxation algorithm.

Definition 5.3 (Perfect relaxation problems). A perfect relaxation problem a ← G | l . S is a network of constraints C = a ← G | l plus a perfect strategy S for a set R of relaxation rules for C.

Definition 5.4 (Perfect uniform relaxation problems). a ← G . S is a perfect uniform relaxation problem if a ← G | l . S is a perfect relaxation problem for any labelling function l.

Lemma 5.5 (A simple perfect uniform relaxation problem). a ← G . (a ← G) is a perfect uniform relaxation problem.

The following theorem gives an inference rule for obtaining, from a perfect uniform relaxation problem a ← G . S, a new perfect uniform relaxation problem a ← G[c ← H/b] . (c ← H)S, where a replacement is performed on the network and the corresponding rule is appended to the strategy.

Theorem 5.6 (Obtaining a new perfect uniform relaxation problem). If a ← G . S is a perfect uniform relaxation problem, then a ← G[c ← H/b] . (c ← H)S is a perfect uniform relaxation problem.

Proof. If a ← G . S is a perfect uniform relaxation problem, then a ← G | l' . S, where l' = l[Sol(c ← H | l)/b] and l is a parameter, is a perfect relaxation problem. Now, notice that a ← G is a subgraph of a ← G[c ← H/b], that a ← G and c ← H have only arc b in common, and that

Sol(a ← G | l') = Sol(a ← G[c ← H/b] | l').

This means that a ← G[c ← H/b] | l' . S is also a perfect relaxation problem, because S applies to the same subnetworks as in a ← G | l' and thus computes the same solution. Finally, if a ← G[c ← H/b] | l' . S is a perfect relaxation
problem, then also

a ← G[c ← H/b] | l . (c ← H)S

is a perfect relaxation problem, because the application of the relaxation rule (c ← H) to a ← G[c ← H/b] | l produces the network a ← G[c ← H/b] | l'. So a ← G[c ← H/b] . (c ← H)S is a perfect uniform relaxation problem, since l was a parameter.  □

Corollary 5.7 (Obtaining a perfect strategy). Given a network C in the structured form

C = a1 ← G1[a2 ← G2/b1] ... [an ← Gn/bn−1] | l,

then S = (an ← Gn)(an−1 ← Gn−1) ··· (a1 ← G1) is a perfect strategy for R.
Proof. The proof is constructive: we start with the relaxation problem a1 ← G1 . (a1 ← G1), which is a perfect uniform relaxation problem by Lemma 5.5, and we iteratively apply the inference rule of Theorem 5.6.  □

Once a perfect strategy S for a given network C is known, the corresponding perfect relaxation problem C . S can be solved by applying the relaxation algorithm RA, with the strategy S, to the network C. In fact, the following corollary states that we always obtain a solved network.

Corollary 5.8 (Solving a structured network with Algorithm RA). Given a network of constraints

C = a1 ← G1[a2 ← G2/b1] ... [an ← Gn/bn−1] | l,

and the finite strategy S = (an ← Gn)(an−1 ← Gn−1) ··· (a1 ← G1) for the rules R = ((a1 ← G1), (a2 ← G2), ..., (an ← Gn)), the relaxation algorithm RA applied to parameters (C, R, S) obtains a network C' such that:
• Sol(C') = Sol(C);
• C' is solved.

Proof. Immediate from Theorem 3.7(iv), Definitions 3.9 and 3.10, and Corollary 5.7.  □

By the above corollary, it is enough to take the label of the connection arc of C' to obtain the solution of the given network C. To give a complexity result for our algorithm when applied to perfect strategies, we now define the notion of graph production systems. They are similar to context-free (hyper)graph grammars: for a survey and some related results on graph grammars, see [1, 3, 7, 13].

Definition 5.9 (Graph production systems). A graph production system is a finite set P of connection graphs, called productions.

As an example of a graph production system, consider the productions

p1: a(x, y) ← b(u, y), c(v, y), d(u, v), e(x, u, v);
p2: e(x, u, v) ← b(u', u), c(v', v), d(u', v'), e'(x, u', v');
p3: e(x, u, v) ← b(x, u), c(x, v).

Fig. 8 shows the productions of this graph production system.

Definition 5.10 (Language of a graph production system). Given a graph production system P, its language L(P) is the (possibly infinite) set of all connection graphs of the form

a1 ← G1[a2 ← G2/b1] ... [an ← Gn/bn−1],

where ai ← Gi, i = 1, ..., n, is a production in P.

Note that there are some classes of graphs which cannot be included in the language of any graph production system. For example the class of all complete
Fig. 8. Three productions.
graphs, and also some classes of sparse graphs such as the class of all rectangular lattices of all sizes [10], have this property. Let us assume we deal with graph production systems in which there are no productions of the form

a ← (N, A, c) with A = {a},

i.e. productions which do not add any arc to the given graph. This restriction is necessary to avoid an infinite number of steps in producing a finite graph, and thus to have a linear relationship between the number of steps used to generate a graph and the number of arcs in that graph. Let us call this kind of graph production system a Greibach graph production system. The following theorem states that Algorithm RA, applied to the class of networks whose graphs are generated by a Greibach graph production system, has a worst-case time complexity linear in the number of arcs of each solved network.

Theorem 5.11 (Algorithm RA is linear in the number of arcs). Given a Greibach graph production system P, a class of connection graphs GG such that GG ⊆ L(P), and a set of values U, Algorithm RA, applied to a network C = a ← G | l, where a ← G = a ← G1[a2 ← G2/b1] ... [an ← Gn/bn−1] is in GG and l defines labels over U, with the corresponding perfect strategy (an ← Gn)(an−1 ← Gn−1) ··· (a1 ← G1), has a worst-case time complexity linear in the number of arcs of G.

Proof. When Algorithm RA uses a perfect strategy, its complexity is
O( Σ_{i=1}^{#R} #A_i · #N_i · (#U)^{#N_i} )
by Theorem 3.11. This value is bounded by O(#R · M · N · (#U)^N), where M = max_i #A_i and N = max_i #N_i. Since U is fixed and, given P, also N and M are fixed, the complexity is linear in #R, i.e. in the number of replacements needed to obtain a ← G. But this number is proportional to the number of arcs of a ← G, since P is a Greibach production system.  □

Let us now consider the class of all networks with a tree structure, i.e. whose underlying graph is a tree. Such networks have been extensively studied in the literature, with a variety of results. In particular, in [4] it is shown that directional arc consistency on trees is linear and sufficient for a backtrack-free search.
It is easy to see that the graph production system consisting of only the following production p is able to generate all trees (where we allow binary as well as unary arcs):

p: a(x) ← b(x, y), c(y).

The graphical representation of p can be seen in Fig. 9. Let us now consider a particular tree T, as in Fig. 10(b). Such a tree can be obtained by applying production p four times to the initial graph, call it T0, consisting of only one node and a unary arc (see Fig. 10(a)). Thus
T = T0[p/a1][p/a1][p/a4][p/a4].

This means that the perfect strategy is S = pppp, where the applications of p are now considered in reverse order. It is easy to see that for a general tree with n arcs the perfect strategy is S = p^n. Thus the time complexity of the corresponding perfect relaxation algorithm is O(n(#U)²), because each relaxation rule involves two variables. This result corresponds exactly to the above-mentioned result in [4], and in fact our perfect relaxation algorithm coincides with their directional-arc-consistency algorithm. To see how the algorithm really works, we can now assume U = {0, 1} and the following definition of the constraints ai:

a6 = a7 = a2 = a3 = {(0, 1), (1, 0)};
Fig. 9. A tree production system.
Fig. 10. The tree in (b) is obtained by applying four times the production p in Fig. 9 to the initial graph in (a).
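The step-by-step trace that follows (constraints a1, ..., a9 as defined in the text) can be reproduced with a short Python sketch of directional arc consistency; the tree layout and helper names are our reading of Fig. 10:

```python
# Directional arc consistency on a tree of constraints (a sketch).
# Unary constraints are node domains; binary constraints are sets of
# pairs (parent_value, child_value). The bottom-up order mirrors the
# four applications of rule p in the worked example.

def revise(parent_dom, child_dom, rel):
    """Keep only parent values supported by some child value via rel."""
    return {p for p in parent_dom
            if any((p, c) in rel for c in child_dom)}

neq = {(0, 1), (1, 0)}          # the non-identity relation over U = {0, 1}

# unary labels (domains): a1, a4, a5, a8 start as {0, 1}; a9 = {1}
dom = {'a1': {0, 1}, 'a4': {0, 1}, 'a5': {0, 1}, 'a8': {0, 1}, 'a9': {1}}

# binary arcs (parent unary, child unary, relation), processed bottom-up
steps = [('a4', 'a9', neq),     # rule p on the subtree with a7
         ('a4', 'a8', neq),     # rule p on the subtree with a6
         ('a1', 'a5', neq),     # rule p on the subtree with a3
         ('a1', 'a4', neq)]     # rule p on the subtree with a2

for parent, child, rel in steps:
    dom[parent] = revise(dom[parent], dom[child], rel)

print(dom['a1'])                # the connection arc's label: {1}
```

Step 1 shrinks a4 to {0} (its child is forced to 1), steps 2 and 3 change nothing, and step 4 shrinks a1 to {1}, matching the trace below.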
a1 = a4 = a5 = a8 = {(0), (1)};
a9 = {(1)}.

In this case our perfect relaxation algorithm works as follows:

Step 1. The relaxation rule p is applied to the subtree with a7, eliminating the value 1 from the definition of a4.
Step 2. The relaxation rule p is applied to the subtree with a6, and nothing is changed in the current definition of a4.
Step 3. The relaxation rule p is applied to the subtree with a3, and nothing is changed in the definition of a1.
Step 4. The relaxation rule p is applied to the subtree with a2, eliminating the value 0 from the definition of a1.

Thus at the end the definition of a1 is {(1)}, and this is the solution of our network (a1 being the connection hyperarc). Note, however, that a consistent instantiation of all the variables can be obtained by a backtrack-free search (which instantiates the variables following any traversal of the tree where variables at lower levels are instantiated later), thus matching the results in [4].

More general examples of application of Algorithm RA follow. These are compared, from the time complexity point of view, with the arc- and the path-consistency algorithms. Let us consider the class of graphs {a ← Gn} which can be obtained from Fig. 6 by extending 3 to a generic n. It can easily be seen that this class is included in the language of the graph production system {p1, p2, p3} of Fig. 8. For instance, the graph in Fig. 6 coincides with the structured graph p1[p2/e][p2/e'][p3/e']. More precisely, since we must provide nonconflicting indexes for all node and arc instances, we can write the graph in Fig. 6 as follows:

a ← G3 = p1θ3[p2θ2/e3][p2θ1/e2][p3θ0/e1],

where the new nodes and arcs are renamed as follows:

θ3 = u → u3, v → v3, b → b3, c → c3, d → d3, e → e3;
θ2 = u' → u2, v' → v2, b → b2, c → c2, d → d2, e' → e2;
θ1 = u' → u1, v' → v1, b → b1, c → c1, d → d1, e' → e1;
θ0 = b → b0, c → c0.

Suppose in particular that we specify a network Cn = a ← Gn | ln, defining ln as follows:

U = {0, 1};
I = {(0, 0), (1, 1)} is the identity relation and R = U² − I;

ln(a) = U²;
ln(b0) = ln(bn) = I;
ln(c0) = ln(cn) = R;
ln(bi) = ln(ci) = R for i = 1, ..., n − 1;
ln(di) = R for i = 1, ..., n;
ln(ei) = U³ for i = 1, ..., n.
Figure 11 shows C3. Notice that Sol(Cn) is I if n is odd, and R if n is even. Thus in our case Sol(C3) = I. Corollary 5.7 says that a perfect strategy for a ← Gn is Sn = p3 p2^(n−1) p1. More precisely, for the graph in Fig. 6 we can say, for instance, that a ← G3 . S3 is a perfect uniform relaxation problem, and for the network in Fig. 11 that a ← G3 | l3 . S3 is a perfect relaxation problem, where

S3 = (p3θ0)(p2θ1)(p2θ2)(p1θ3).

Notice that S3 coincides with the list of rules given in Section 4 for the graph in Fig. 6. Theorem 5.11 says that Algorithm RA, on this class of networks and with these perfect strategies, has a worst-case time complexity O(nMN(#U)^N), i.e. O(n), because #U, M and N are fixed (in this case #U = 2 and M = N = 5). We will now consider other well-known relaxation algorithms (arc- and path-consistency) on the same kind of networks (but without the arcs {ei, i = 1, ..., n}), and we will compare them with Algorithm RA. The arc-consistency algorithm is useless: in fact, if we consider the relaxation rule for this kind of algorithm (shown in Fig. 2), we see that ai (which defines the domain of variable xi, initialized to U = {0, 1} in our case) never changes, and so the stable network coincides with the initial one.
Fig. 11. A network of constraints.
On the contrary, the path-consistency algorithm is adequate (the stable network is solved). If we use the path-consistency algorithm as in [8], i.e. without exploiting the structure of the particular network to which it is applied, then its time complexity is O(n³). However, it is possible to build an ad hoc algorithm which does not consider a complete graph. For example, we may add to the given network, which is very sparse, only the arcs connecting x to vi, for i = 2, ..., n. In this case, the complexity of the path-consistency algorithm may become linear. Thus, for this class of connection networks, Algorithm RA is more accurate than the arc-consistency algorithm, and equally accurate but more efficient than blind path-consistency. On the contrary, when guided by the structure of the graph, path-consistency may be as accurate and as efficient as our algorithm. However, this is not always possible, as shown by the following example. Consider another class of networks, which have the same form as the graph in Fig. 6, plus new arcs fi, i = 1, ..., n − 1, such that c(fi) = (ui, vi+1). Note that this class of graphs is included in the language of the graph productions of Fig. 8, where p2 has been changed to include the new arc:

p2: e(x, u, v) ← b(u', u), c(v', v), d(u', v'), e'(x, u', v'), f(u', v).

The topology of these new networks (without arcs ei) can be seen in Fig. 12 for n = 2. If we assume U = {1, 2, 3},

l(a) = U²,
l(ei) = U³ for i = 1, ..., n,
l(ai) = U² − I for all ai in A − {a} − {ei | i = 1, ..., n},

then the solution of any network of this form is the identity relation I if n = 1, 4, 7, 10, ..., and U² − I if n = 2, 3, 5, 6, 8, 9, .... It is easy to see that, for these networks, neither the arc- nor the path-consistency algorithm does anything. On the contrary, Algorithm RA, when applied to this class of networks, is perfect and linear (but with a larger constant than in the previous example, since now #U = 3). Thus, for this class of networks,
Fig. 12. A connection graph.
Fig. 13. (a) A graph production system. (b) A connection graph generated by the graph production system in (a).
Algorithm RA is more accurate than arc- and path-consistency, and has a linear complexity. Let us now give an idea of the generative power of graph production systems, in order to convince the reader that they can generate classes of networks much more complex than those considered in the previous examples. Here we consider a set of two productions:
a(x, y, z, w) ~-- b(x, y, u, v), c(u, v, z, w) ; a(x, y, z, w) ~-- b(x, y), c(y, z), d(z, w), e(w, x). These productions can be seen in Fig. 13(a) and one of the generated graphs can be seen in Fig. 13(b).
6. Conclusions
In this paper we have described relaxation algorithms as sequences of applications of relaxation rules. Moreover, we have defined perfect relaxation algorithms as relaxation algorithms which not only return a more explicit network, but also exactly solve the given network of constraints. Finally, we have shown that perfect relaxation algorithms are very efficient when applied to particular classes of networks: those included in the language of some graph production system (a kind of context-free hypergraph grammar). Each network in a class can be solved by a perfect relaxation algorithm whose
relaxation rules are directly generated from the syntactic derivation of the network, are of bounded complexity, and are no more numerous than the arcs of the network. This algorithm returns the exact solution of each network in the class with a time complexity linear in its size. Of course, if the derivation of a network were given in the form of a syntactic tree, our algorithm might have a parallel implementation and, for balanced trees, a time complexity logarithmic in the size of the given network (see [13]). Note that the basic notation a ← G used in this paper to represent connection graphs, networks of constraints, relaxation rules and productions is reminiscent of Horn clauses. In fact, it is possible to develop our results in the logic programming context, and it is quite natural to implement relaxation algorithms using PROLOG. For instance, a ← G can be seen as a Horn clause whose semantics is Sol(a ← G | l) (if we add the definition of l as an extensional database). Nevertheless, the correspondence is not complete, and we have chosen not to base the paper on logic programming to avoid confusion. For example, a set of relaxation rules for a network does not constitute a logic program from a semantic point of view; moreover, the notion of application of a relaxation rule is similar to the assignment of a predicate (assert in PROLOG) and is thus a second-order concept [16]. In fact, an implementation of our relaxation algorithm RA has been easily developed in PROLOG using metaprogramming techniques [15].
References

[1] M. Bauderon and B. Courcelle, Graph expressions and graph rewriting, Math. Syst. Theory 20 (1987) 83-127.
[2] U. Bertelè and F. Brioschi, Nonserial Dynamic Programming (Academic Press, New York, 1972).
[3] V. Claus, H. Ehrig and G. Rozenberg, eds., Proceedings International Workshop on Graph Grammars and Their Application to Computer Science, Bad Honnef, FRG, Lecture Notes in Computer Science 73 (Springer, Berlin, 1978).
[4] R. Dechter and J. Pearl, Network-based heuristics for constraint-satisfaction problems, Artif. Intell. 34 (1988) 1-38.
[5] E.C. Freuder, Synthesizing constraint expressions, Commun. ACM 21 (1978) 958-966.
[6] E.C. Freuder, A sufficient condition for backtrack-free search, J. ACM 29 (1982) 24-32.
[7] A. Habel and H.J. Kreowski, Some structural aspects of hypergraph languages generated by hyperedge replacement, in: Proceedings STACS 1987, Lecture Notes in Computer Science 247 (Springer, Berlin, 1987).
[8] A.K. Mackworth, Consistency in networks of relations, Artif. Intell. 8 (1977) 99-118.
[9] A.K. Mackworth and E.C. Freuder, The complexity of some polynomial network consistency algorithms for constraint satisfaction problems, Artif. Intell. 25 (1985) 65-74.
[10] A. Martelli and U. Montanari, Nonserial dynamic programming: on the optimal strategy of variable elimination for the rectangular lattice, J. Math. Anal. Appl. 40 (1972) 226-242.
[11] R. Mohr and T.C. Henderson, Arc and path consistency revisited, Artif. Intell. 28 (1986) 225-233.
[12] U. Montanari, Networks of constraints: fundamental properties and application to picture processing, Inf. Sci. 7 (1974) 95-132.
[13] U. Montanari and F. Rossi, An efficient algorithm for the solution of hierarchical networks of constraints, in: Proceedings International Workshop on Graph Grammars and Their Application to Computer Science, Warrenton, Lecture Notes in Computer Science 291 (Springer, Berlin, 1986).
[14] U. Montanari and F. Rossi, Fundamental properties of networks of constraints: a new formulation, in: L. Kanal and V. Kumar, eds., Search in Artificial Intelligence (Springer, Berlin, 1988) 426-449.
[15] F. Rossi and U. Montanari, Relaxation in networks of constraints as higher order logic programming, in: Proceedings International Workshop on Metaprogramming in Logic (META90), Leuven, Belgium (1990).
[16] F. Rossi and U. Montanari, Hypergraph grammars and networks of constraints versus logic programming and metaprogramming, in: Proceedings International Workshop on Metaprogramming in Logic Programming (META88), Bristol (MIT Press, Cambridge, MA, 1988) 117-131.
[17] F. Rossi and U. Montanari, Exact solution in linear time of networks of constraints using perfect relaxation, in: Proceedings First International Conference on Principles of Knowledge Representation and Reasoning, Toronto, Ont. (Morgan Kaufmann, Los Altos, CA, 1989).
[18] R. Seidel, A new method for solving constraint satisfaction problems, in: Proceedings IJCAI, Vancouver, BC (1981) 338-342.